Some Interaction Design Issues on Touchscreens

Original link: https://blog.codingnow.com/2022/12/interaction.html

The games we are making now target mobile phones first, and all of the interaction is built around the touchscreen. If you only test with a mouse on a PC during development, many interaction problems are easy to overlook. That is why our self-developed engine has put a lot of effort into running on mobile devices, where changes can be applied immediately. I recommend that developers modify and debug directly on the phone during development; it makes it much easier to find problems that are hard to notice with mouse operation.

By the way, developing directly on the phone has another advantage over developing on a PC: you can check at any moment whether on-screen elements (fonts in particular) are appropriately sized.

Mouse and touchscreen interaction have a lot in common, but they also differ in important ways.

  1. A mouse click is precise to a single point, while a touchscreen tap is really an area of contact. Many frameworks simply treat the center of that area as a mouse event, which is wrong. When designing interaction, we should always remember that a touchscreen tap event does not carry a precise screen coordinate the way a mouse click does. (A small hit-testing sketch that treats the tap as an area follows this list.)

  2. A touchscreen can have several touch points at once, while there is generally only one mouse pointer. We do not need to build deep multi-touch interaction logic, but on a phone it is very common for one hand to accidentally press the side of the screen while the other hand performs the tap. (A simple edge-rejection heuristic is sketched after the list.)

  3. The basic mouse operations are moving, left-clicking, and right-clicking. The touchscreen only reports contact dragged across it: there is no pointer-move event, and left and right button clicks cannot be distinguished. Gestures can stand in for the difference, for example separating two interactions with a light tap versus a long press, but abusing this may confuse users. (A tap/long-press classifier is sketched after the list.)

  4. Mouse gestures count as advanced usage and are generally not something you teach novice users. The touchscreen is the other way around: gestures are necessary for richer interaction, and users are already well trained in them. Gestures such as two-finger zooming, swiping, and tapping need no re-education, but introducing more advanced gestures still calls for caution.

  5. Some areas of a phone screen are hard for users to reach and operate; the mouse has essentially no such restricted areas.
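
To make the first point concrete, here is a minimal C sketch of treating a tap as an area rather than a point: instead of picking the single object under the tap's center, it collects every object whose bounds intersect the tap circle. The types, names, and radius value are illustrative assumptions, not any particular framework's API.

```c
#include <stddef.h>

struct vec2 { float x, y; };

struct object {
    struct vec2 pos;   /* screen-space center of the object */
    float radius;      /* rough screen-space bounding radius */
    int id;
};

/* Collect every object whose bounds intersect the tap circle,
   instead of only the one under the tap's center point. */
size_t
pick_candidates(const struct object *objs, size_t n,
                struct vec2 tap, float tap_radius,
                int *out, size_t max_out)
{
    size_t count = 0;
    for (size_t i = 0; i < n && count < max_out; i++) {
        float dx = objs[i].pos.x - tap.x;
        float dy = objs[i].pos.y - tap.y;
        float r = tap_radius + objs[i].radius;
        if (dx * dx + dy * dy <= r * r)
            out[count++] = objs[i].id;
    }
    return count;
}
```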

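For the accidental-press problem in point 2, one common heuristic (sketched here as an assumption, not necessarily the method our engine uses) is to ignore touches that begin inside a narrow band along the screen edge, where a gripping palm or thumb is likely to land.

```c
struct touch {
    float x, y;   /* touch position in pixels */
};

/* Heuristic: a touch that begins inside a narrow band along the
   screen edge is likely a gripping palm or thumb, not real input.
   The margin is an assumed value and would be tuned per device. */
static int
touch_is_probably_accidental(const struct touch *t,
                             float screen_w, float screen_h)
{
    const float margin = 24.0f;
    return t->x < margin || t->x > screen_w - margin ||
           t->y < margin || t->y > screen_h - margin;
}
```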

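And for point 3, separating a light tap from a long press usually comes down to elapsed time and total movement. A minimal sketch follows; the thresholds are illustrative assumptions and would need tuning.

```c
enum gesture { GESTURE_NONE, GESTURE_TAP, GESTURE_LONG_PRESS };

struct press {
    float start_x, start_y;
    double start_time;   /* seconds, recorded at touch-down */
};

/* Classify on touch-up: too much movement means it was a drag;
   otherwise the press duration separates tap from long press. */
enum gesture
classify_release(const struct press *p,
                 float end_x, float end_y, double end_time)
{
    const double long_press_seconds = 0.5;  /* assumed threshold */
    const float move_tolerance = 10.0f;     /* assumed drift, pixels */
    float dx = end_x - p->start_x;
    float dy = end_y - p->start_y;
    if (dx * dx + dy * dy > move_tolerance * move_tolerance)
        return GESTURE_NONE;
    return (end_time - p->start_time >= long_press_seconds)
               ? GESTURE_LONG_PRESS : GESTURE_TAP;
}
```
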
For a base-building (Factorio-like) game, how should tapping to select a specific on-screen object work well on a phone? Clearly, the interaction logic of mouse and keyboard cannot simply be copied.

First of all, I tend to treat the phone as a game controller, and it works better to follow controller logic. A mouse pointer can be virtualized at the center of the screen: the map is moved with a soft joystick (rubbing the glass) while the pointer stays fixed at the center, and an OK button sits in the lower right corner. This is the scheme closest to mouse logic.
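
A minimal sketch of this controller-style scheme, with all names illustrative (this is not the engine's actual API): the virtual stick pans the camera, while the pick position is always the fixed screen center.

```c
struct vec2 { float x, y; };
struct camera { struct vec2 pos; };

/* The virtual stick pans the camera; stick x/y are in [-1, 1]. */
void
camera_pan(struct camera *cam, struct vec2 stick, float speed, float dt)
{
    cam->pos.x += stick.x * speed * dt;
    cam->pos.y += stick.y * speed * dt;
}

/* The pick position never moves: it is always the screen center,
   so pressing the OK button selects whatever is under it. */
struct vec2
pointer_position(float screen_w, float screen_h)
{
    struct vec2 center = { screen_w * 0.5f, screen_h * 0.5f };
    return center;
}
```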

Secondly, directly tapping a specific location on the screen must also be kept, since it is the most intuitive operation on a phone: I tap whichever building I want to focus on. In that case, the camera should follow the focused object and bring it to the center of the screen.
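
One common way to implement the camera follow (a sketch under assumed names, not necessarily how our engine does it) is to ease the camera toward the focused object each frame with a frame-rate-independent blend:

```c
#include <math.h>

struct camera { float x, y; };

/* Each frame, ease the camera toward the focused object so it
   ends up centered; expf makes the blend frame-rate independent. */
void
camera_follow_focus(struct camera *cam, float focus_x, float focus_y,
                    float dt)
{
    const float rate = 8.0f;   /* assumed: higher = snappier follow */
    float t = 1.0f - expf(-rate * dt);
    cam->x += (focus_x - cam->x) * t;
    cam->y += (focus_y - cam->y) * t;
}
```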

So how do we handle multiple objects under the tapped area? Mouse logic is precise to a point, and the object under that point can always be identified exactly; if the user finds the wrong thing selected, he can nudge the pointer slightly to adjust. A finger cannot make that adjustment: the finger contacts the screen as an area rather than a point, and while it is pressed down it covers the very spot being selected, which makes fine adjustment difficult.

My solution is that every finger press selects all of the candidate objects under the pressed area, and then focuses on just one of them. A global queue remembers the history of focused objects. On each tap, if there are several candidates, the history is checked and a candidate that has not been focused in the last few selections is chosen. If every candidate already appears in the history, the least recently focused one is chosen, and the oldest history entry is evicted, LRU style.
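
A minimal C sketch of this selection scheme, assuming integer object ids and an illustrative history length of 8: a candidate absent from the recent-focus history wins; if all candidates are present, the least recently focused one is taken; and pushing a focus onto a full history drops the oldest entry.

```c
#include <string.h>

#define HISTORY_LEN 8   /* assumed depth of the focus history */

static int history[HISTORY_LEN];   /* most recently focused first */
static int history_count = 0;

/* Position of id in the history (0 = most recent), or -1 if absent. */
static int
history_rank(int id)
{
    for (int i = 0; i < history_count; i++)
        if (history[i] == id)
            return i;
    return -1;
}

/* Move id to the front; if the history is full and id is new,
   the oldest entry falls off the end (LRU eviction). */
static void
history_push(int id)
{
    int rank = history_rank(id);
    if (rank < 0) {
        if (history_count < HISTORY_LEN)
            history_count++;
        rank = history_count - 1;
    }
    memmove(&history[1], &history[0], rank * sizeof history[0]);
    history[0] = id;
}

/* Pick a focus among the tap's candidates (n >= 1): a candidate not
   in the recent history wins; otherwise the least recently focused. */
int
choose_focus(const int *candidates, int n)
{
    int best = candidates[0];
    int best_rank = history_rank(best);
    for (int i = 1; i < n; i++) {
        int rank = history_rank(candidates[i]);
        if (rank < 0) {            /* not focused recently: take it */
            best = candidates[i];
            break;
        }
        if (best_rank >= 0 && rank > best_rank) {
            best = candidates[i];  /* focused longer ago than best */
            best_rank = rank;
        }
    }
    history_push(best);
    return best;
}
```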

When the user taps a dense area and finds that the focused object is not the one he expected, he only needs to tap the same spot a few more times to cycle through the nearby objects.
