In Unity, say you need to detect finger touches (finger drawing) on something in the scene. Supposedly, the only way to do this in modern Unity is very simple:
Both the question and the answer I am about to post are pretty much opinion-based. Nevertheless, I will answer as best I can.
If you are trying to detect pointer events on the screen, there is nothing wrong with representing the screen with an object. In your case, you use a 3D collider that covers the entire frustum of the camera. However, Unity has a native way to do this: a 2D UI object that covers the entire screen. Since the screen itself is two-dimensional, representing it with a 2D object seems like the natural way to do it.
I use a generic script for this purpose:
using UnityEngine;
using UnityEngine.EventSystems;

// MonoSingleton is a custom singleton MonoBehaviour base class (not shown here).
public class Screen : MonoSingleton, IPointerClickHandler, IDragHandler, IBeginDragHandler, IEndDragHandler, IPointerDownHandler, IPointerUpHandler, IScrollHandler {

    private bool holding = false;
    private PointerEventData lastPointerEventData;

    #region Events
    public delegate void PointerEventHandler(PointerEventData data);

    static public event PointerEventHandler OnPointerClick = delegate { };
    static public event PointerEventHandler OnPointerDown = delegate { };
    /// Don't use the delta data here, as it will be wrong. If you need deltas, use OnDrag instead.
    static public event PointerEventHandler OnPointerHold = delegate { };
    static public event PointerEventHandler OnPointerUp = delegate { };
    static public event PointerEventHandler OnBeginDrag = delegate { };
    static public event PointerEventHandler OnDrag = delegate { };
    static public event PointerEventHandler OnEndDrag = delegate { };
    static public event PointerEventHandler OnScroll = delegate { };
    #endregion

    #region Interface Implementations
    void IPointerClickHandler.OnPointerClick(PointerEventData e) {
        lastPointerEventData = e;
        OnPointerClick(e);
    }

    void IPointerDownHandler.OnPointerDown(PointerEventData e) {
        lastPointerEventData = e;
        holding = true; // drives OnPointerHold in Update
        OnPointerDown(e);
    }

    void IPointerUpHandler.OnPointerUp(PointerEventData e) {
        lastPointerEventData = e;
        holding = false;
        OnPointerUp(e);
    }

    // And the other interface implementations, you get the point.
    #endregion

    void Update() {
        if (holding) {
            OnPointerHold(lastPointerEventData);
        }
    }
}
The Screen script is a singleton, because there is only one screen in the context of the game. Objects (like the camera) subscribe to its pointer events and arrange themselves accordingly. This also keeps single responsibility intact.
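For example, a camera controller might subscribe like this (a minimal sketch; the CameraController name and the pan logic are illustrative, not part of the script above):

using UnityEngine;
using UnityEngine.EventSystems;

public class CameraController : MonoBehaviour {

    void OnEnable() {
        Screen.OnDrag += HandleDrag;
    }

    void OnDisable() {
        Screen.OnDrag -= HandleDrag; // always unsubscribe from static events
    }

    void HandleDrag(PointerEventData e) {
        // Pan the camera by the drag delta (the 0.01f factor is an arbitrary example).
        transform.Translate(-e.delta.x * 0.01f, -e.delta.y * 0.01f, 0f);
    }
}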
You would use this by attaching it to an object that represents the so-called glass (the surface of the screen). If you think of UI buttons as popping out of the screen, the glass sits under them. For this, the glass has to be the first child of the Canvas, and of course the Canvas has to be rendered in screen space for any of this to make sense.
One hack here, which admittedly doesn't make much sense, is to add an invisible Image component to the glass so that it receives events. It acts as the raycast target of the glass.
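If you prefer to do that setup from code, here is a minimal sketch (the Glass component name is illustrative; you can just as well add the Image in the Inspector and set its alpha to zero):

using UnityEngine;
using UnityEngine.UI;

// Attach alongside the Screen script on the glass object.
[RequireComponent(typeof(Image))]
public class Glass : MonoBehaviour {

    void Awake() {
        var image = GetComponent<Image>();
        image.color = new Color(0f, 0f, 0f, 0f); // fully transparent, so nothing is drawn over the scene
        image.raycastTarget = true;              // but it still catches pointer raycasts
    }
}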
You could also use Input (Input.touches etc.) to implement this glass object, by checking whether the input changed on every Update call. That is a polling-based approach, whereas the one above is event-based.
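For comparison, here is a minimal sketch of that polling approach, using the legacy Input API:

using UnityEngine;

// Polling-based alternative: inspect the touch state every frame.
public class TouchPoller : MonoBehaviour {

    void Update() {
        for (int i = 0; i < Input.touchCount; i++) {
            Touch touch = Input.GetTouch(i);
            switch (touch.phase) {
                case TouchPhase.Began:
                    // finger down at touch.position
                    break;
                case TouchPhase.Moved:
                    // finger dragged by touch.deltaPosition
                    break;
                case TouchPhase.Ended:
                case TouchPhase.Canceled:
                    // finger up
                    break;
            }
        }
    }
}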
Your question reads as if you are looking for a way to justify using the Input class. IMHO, do not make it harder for yourself. Use what works, and accept that Unity is not perfect.