Elay has been walking around for some time now, which means he touches everything.
This is an evident problem, because some things are fragile, or organized, or dangerous, and we cannot let him touch and grab them (and he gets mad when we don't let him, but oh well...). This means following him around and supervising whatever he does when he wants to walk around the place.
A few weeks ago I tried showing him Xbox games, to see what he would do, and let him touch the controller so he would see things moving on the screen when he pressed buttons. He was not that interested, actually, and I don't know if he fully grasped that he could make things happen through the controller. He was happier biting the controller, and then going to the big TV and slapping the screen (at which point we had to go and tell him no, he could only touch it softly... even though he tried more times and we had to keep telling him no).
This made me think about tablets, phones and other new devices. A lot of modern technology champions "natural" ways of doing things. And our son clearly showed that it is natural, if you see something on a screen, to interact with it by touching the screen. Consoles have used the same concept with motion recognition (Kinect, Wii), where you use your whole body to do something. It really is a natural way of interacting with equipment.
Do you know what else is natural? Dying of appendicitis. Another example? Cyanide can be found in nature. "Natural" things are not necessarily "good", or "better" than artificial things. Some natural things can be improved in a huge number of ways.
The way we interact with technology has been evolving over the years, and lately everything has a touchscreen, some things have voice recognition, and some even have motion recognition. Some of these ideas come from science fiction (like the famous floating air screens in Minority Report). They look cool, and we should definitely explore whether they can be built, because in some applications they may be useful (like 3D modelling in mid-air).
However, practicality should always be kept in mind. Touchscreens are good for mobile phones, because they free up space for the screen by removing the need for a keyboard. Saving space is an important trait in a mobile device. Tablets follow a similar principle: if you want to carry them around easily, watch videos and play point-and-click games, they're good. Nevertheless, touchscreens are not precise. If you need to select text, or press a small button, touchscreens suck. Our fingers are thick and imprecise, and the only way they can select anything reliably is if the target has at least twice the surface area of your finger; otherwise there is no margin for error.
Touchscreens on cameras? Terrible idea. Touchscreens on PCs? Even worse. Touchscreens in gaming? Really, really stupid, outside of point-and-click games with big things to click. The aberration that is a Windows Server operating system with a touchscreen interface (when servers don't even have touchscreens)? Let's not even waste words...
The same applies to Kinect and other motion sensors. For physical activities it's a good idea, and for some party games too. However, any shooter where you have to move your body in front of a camera is a really bad one, and you will never reach immersion with it.
Ideally, communicating with technology is just a translation between what our brain wants and what the machine needs to do. The perfect interface would be thought-activated: things that simply do what you want them to do. Since that is hard and complex, we use interfaces like a mouse or a controller instead, and the goal is to translate our thoughts into a minimum of actions (like moving muscles).

A touchscreen can be a good interface, but the mouse is far more precise and requires fewer movements. For selecting text, the mouse is almost perfect, because it offers a huge amount of precision and control over a flat area while using a very limited number of muscles. To apply the idea "select text", you slightly move your hand until the cursor is over the text, press lightly on the mouse button with one finger, and move your hand (and maybe your arm) slightly again. The more muscles you need to translate an idea to the machine, the worse the interface is, because the brain has to do more work and use more parts to transmit a simple idea.

That is why mice you wave around in the air are idiotic unless you want to manipulate 3D objects. Computer screens are usually flat, so moving your hands in 3D makes no sense: you end up moving them in a 2D plane in front of the screen, and for that we already have the mouse, without needing to raise your arms. To select text with a 3D mouse, or any system where you wave your hands in the air, you have to move your whole arm until your hand is in a position the computer recognizes as being over the text, then signal by moving one or more fingers quite a lot, and then wave your arm around again. With a touchscreen the process is quite similar. All of this is far less efficient than using a mouse on top of a table.
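This trade-off can actually be quantified. Fitts's law, a classic HCI model, predicts pointing time from the travel distance and the target width, with constants that depend on the device and the muscle groups involved (whole-arm pointing has worse constants than wrist-and-finger mouse movement). Here's a rough Python sketch; the constants are illustrative guesses, not measured data:

```python
import math

def fitts_mt(distance_mm, width_mm, a, b):
    """Shannon formulation of Fitts's law: predicted movement time in seconds.

    a and b are device-dependent regression constants. The values used
    below are illustrative only, not taken from any real measurement.
    """
    return a + b * math.log2(distance_mm / width_mm + 1)

# Selecting a ~4 mm-wide word that is 200 mm away:
# a mouse driven by wrist and fingers (low a, b) versus an in-air
# gesture driven by the whole arm (assumed higher a, b).
mouse = fitts_mt(200, 4, a=0.10, b=0.15)
arm = fitts_mt(200, 4, a=0.30, b=0.30)
print(f"mouse: {mouse:.2f} s, in-air arm: {arm:.2f} s")
```

Whatever the exact constants, the structure of the formula makes the point: a small target (like a text caret) drives the difficulty up, and slower, coarser muscle groups multiply that difficulty, which is exactly why waving your arm at the screen loses to a mouse on a table.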
The same applies to games. The Xbox controller layout is almost perfect in letting you perform plenty of actions, including moving two joysticks while pressing buttons. Swapping that for a touchscreen or a motion detector replaces small finger movements with full-body and arm movements, and removes the ability to activate different buttons or actions at the same time.
Again, I'm not saying it is stupid to use these "new" interaction technologies in every situation. I'm just saying that, for example, if you make a dancing game, by all means use a motion sensor. If you're making a first-person RPG, don't bother; give me a controller with 10 buttons that I don't need to look at to know what I'm pressing. Just because you have a motion sensor, don't force it on everything you make.
This is especially annoying with bad-quality touchscreens on devices whose screens are too small for fingers, such as cameras and camcorders. Lately some of them have touchscreens, when a small joystick or four arrow buttons would be far more efficient and easier to use.
And then there is the trend of forcing touchscreens onto normal PCs, or even onto servers (again, why the hell did you do this, Microsoft??), which deserves its own category of stupidity, because ordinary stupidity looks like genius by comparison...