1/16/00 User interfacing

As I sit here, typing this, I'm on my bed, in nothing but my boxers, with my laptop & power supply across my lap (I forgot to recharge it). Typing is a bit ungainly; reasonably, the keyboard should be a little bit higher than lap-level, and I can't hold my knees up high enough to put it at a proper angle to accommodate the way I hold my hands. When it comes to interfaces, though, it's accommodating me quite well. I'm using a DOS word processor, typing in normal language, using the keyboard.

Computers may be able to think fast, but they sure aren't designed to interact with the real world. In a way, if computers worked seamlessly with the real world, their usefulness would be diminished. I've said it before and I'll say it again: the computer's greatest asset is that the things created within it operate outside of our normal laws of space, time, and relativity (within mathematical guidelines, however). That's where the benefit comes from -- a scientist can come up with velocities, accelerations, and directions for an object without having to physically measure the object. All they need are the numbers and a computer, and the high-powered calculator can spit out the results in a fraction of a second. That's also how "theoretical" particles like quarks, leptons, mesons, etc., are 'discovered'. You can't detect them directly, but someone, somewhere, used computers to calculate their existence, without even knowing what they are or how they work. The insides of a computer operate outside of the bounds of reality as we know it. The programmer and the user work with the computer to create something entirely new.
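
To make the "high-powered calculator" point concrete, here is a rough sketch in Python; the position samples and time step are made up purely for illustration. Given nothing but a list of numbers, the machine can hand back velocities and accelerations without anyone ever physically measuring the object.

    # Toy example: derive velocity and acceleration from position samples.
    # The sample data and time step are invented for illustration.
    positions = [0.0, 1.2, 4.8, 10.8, 19.2]   # meters, sampled every 0.5 s
    dt = 0.5

    # Finite differences: v = dx/dt, a = dv/dt
    velocities = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    accelerations = [(b - a) / dt for a, b in zip(velocities, velocities[1:])]

    print("velocities:", velocities)        # about [2.4, 7.2, 12.0, 16.8]
    print("accelerations:", accelerations)  # about [9.6, 9.6, 9.6], up to float rounding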

*How* they work with the computer is another story altogether. This 'wetware' real world that we all rely on for our existence doesn't have a seamless boundary with the internals of the computer. When I say internals, I don't mean the physical layer of computing; I'm talking about the spaceless, timeless void of computing power. For a computer to work with the real world, it has to slow down. It has to wait for input, pause for keystrokes, or count clicks as the mouseball rolls the tiny internal motion sensors. Anyone with a slow computer running a multitasker learns quickly that if the computer slows due to heavy processing, you do not touch the computer under any circumstances. Forcing the computer to keep track of what you're doing and display the corresponding actions on a screen for you to see will bring the computer to a standstill, or, at worst, crash the system.

The computer, however, needs us to know what to do in the first place. Even viruses need the hand of a human god to set their actions into motion: a program written, an icon clicked. The virus that e-mails copies of itself still needs a human's contact list to get around. A computer may be able to do unimaginable things, but it can only do the things a human tells it to do. The interface is how it's done.

A crude analogy is to compare a computer interface with teaching gorillas sign language. The trainer and the ape can communicate, but both sides require a foreign communications medium to get the information across, and neither side is completely working on the same wavelength. The sign language slows down the human's ability to transfer information to the ape, but the gorilla has no other way to communicate directly in a way that's readily legible to the human. The sign language interface is a common pathway through which the information can travel both ways. By converting thoughts into hand motions, both sides can transfer those ideas to each other.

A computer doesn't exactly have ideas, though. What the human and the computer need to transfer back and forth is logic. This ranges from 3D graphics being required to obey general laws of physics, to clicking on an icon and the corresponding application being started. A computer can think in multiple directions (or be programmed to do so), but humans like linear, cause-and-effect activities.

The basic computer interface began as a switch. On and off. Entire computer programs were written in on-and-off patterns, which computer users translated by hand from their program into binary code that the computer could understand. This binary sign language got things started, even if it was rather one-sided. The computer got the benefit; it wasn't required to translate in the other direction. As computers became more powerful, the keyboard was the next big step. As strange as it sounds to teach a computer English, there wasn't really any true understanding of language. Programs were patterns of commands, which another program would translate from the human-understandable language into the computer-understandable language. As time went on, this went from being a separate task run after the programming was done, to being done on the fly, with self-compiling programs and compilers built into shells.
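
As a rough illustration of that translation step, here is a toy "assembler" in Python; the four-instruction machine is entirely invented, but it shows the same idea of turning human-readable commands into on-and-off patterns.

    # Toy "assembler": human-readable mnemonics down to on/off patterns.
    # This four-instruction machine is invented purely for illustration.
    OPCODES = {"LOAD": "0001", "ADD": "0010", "STORE": "0011", "HALT": "1111"}

    def assemble(lines):
        """Turn 'ADD 5'-style lines into 8-bit on/off patterns."""
        words = []
        for line in lines:
            parts = line.split()
            opcode = OPCODES[parts[0]]
            operand = int(parts[1]) if len(parts) > 1 else 0
            words.append(opcode + format(operand, "04b"))
        return words

    program = ["LOAD 2", "ADD 5", "STORE 9", "HALT"]
    print(assemble(program))  # ['00010010', '00100101', '00111001', '11110000']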

The GUI brought the first real-world interface to computers. It takes an immense amount of processing to create a visual representation of the inner workings of a computer; the computer itself couldn't care less about a visual description of itself. So much of computing today is wrapped around making a comfortable interface for humans that we forget how much we have to adapt to using that interface.

The GUI brought with it the mouse. For as simple as this pointing device is, its functionality has changed little in the past 30 years. It is still a palm-sized object, with a button or two (or three), which has a sensor in the bottom to measure movement in two dimensions. That movement is then transferred to the computer screen by moving a cursor or some other object. The learning curve for a mouse is probably lower than a keyboard's, but most people have some experience with typing when they sit at a computer.
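
Written out as a sketch (Python again, with invented screen size and sensitivity), the mouse's whole job reduces to something like this: report relative movement in two dimensions, and let the computer accumulate those reports into a cursor position it can draw on the screen.

    # Sketch of mouse-to-cursor mapping; screen size and sensitivity are invented.
    SCREEN_W, SCREEN_H = 1024, 768

    def move_cursor(cursor, dx, dy, sensitivity=1.5):
        """Accumulate a relative (dx, dy) report into a clamped screen position."""
        x = min(max(cursor[0] + dx * sensitivity, 0), SCREEN_W - 1)
        y = min(max(cursor[1] + dy * sensitivity, 0), SCREEN_H - 1)
        return (x, y)

    cursor = (512, 384)                            # start mid-screen
    for dx, dy in [(10, -4), (3, 0), (-200, 50)]:  # raw movement reports
        cursor = move_cursor(cursor, dx, dy)
    print(cursor)  # (231.5, 453.0)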

For all the advances in interface hardware, the keyboard and mouse seem to be the best meeting of human and machine possible. With both, you have language, you have movement, and when the mouse & keyboard are used together, a nearly infinite number of possible actions are available. The response of the computer, however, is a different story. The GUI screen is a graphical representation of a desktop, with files, folders, and items spread around it. Projects are set on top of others, and the topmost one is given the most attention. Sound also plays a part; tasks in the background may talk to you when they are ready, or when something has occurred that you cannot detect from outside the computer ("you've got mail!"). However, there is limited space and range for the things the computer is doing. A screen is only x" across, and it exists in two dimensions. Moving from window to window doesn't translate across from performing multiple tasks at your desk, although the computer probably keeps better track of them than you do. The subject of scrollbars brings up the largest real-world incongruity -- to move the page up and see the bottom, you drag the scroll bar down. Neither side gets the best of all worlds when it comes to working with the other, but both ends make the best of it.
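
The scrollbar mismatch is easier to see if you write the mapping down. In the sketch below (Python, with invented sizes), the thumb's distance from the top maps onto the document's offset -- dragging the thumb down increases the offset, which means the page content moves up.

    # Sketch of scrollbar-to-document mapping; all sizes are invented.
    DOC_HEIGHT, VIEW_HEIGHT = 3000, 600   # pixels of content vs. visible window

    def scroll_offset(thumb_fraction):
        """Map thumb position (0.0 = top, 1.0 = bottom) to the first visible pixel."""
        return thumb_fraction * (DOC_HEIGHT - VIEW_HEIGHT)

    print(scroll_offset(0.0))  # 0.0    -- thumb at the top, you see the top of the page
    print(scroll_offset(1.0))  # 2400.0 -- thumb dragged down, the page has moved up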

The future may hold a time when the interface parallels the holodeck from Star Trek. Humans speak to the computer in natural language, and the computer presents its actions in three dimensions that a human can interact with using hands, feet, eyes, and ears. Even still, the bridge of the Starship Enterprise is laden with keyboards and monitors. It depends on who is to benefit more from the interface -- the humans or the computer.

Would you trust natural language to drive your car? "Go forward...speed up--brake--slow down...turn left...NOW." The interface used when controlling a car is a steering wheel and two pedals. These aren't designed to make the car human-friendly; they are designed to give the human the amount of control that is required to get the car to do exactly what is expected of it. The control system of a car has progressed to the point where it is more comfortable for a human to use, but it still remains tailored to the car's requirements for control. A skateboard, however, is more dynamic. You may have to use unnatural motion to propel yourself, but you navigate by manipulating your body within three dimensions, using inertia, resistance, and the weight on the board to make the skateboard do the things it is capable of. Each is a means of locomotion, but one interface is designed for the machine, the other is designed for the human. Each has its place, and each has its benefits and drawbacks. Computer interfaces have tried this; the Mattel Power Glove was a popular toy for tinkerers. It was a basic glove, but it had simple motion-reactive sensors which translated movement into computer-controlled actions. The glove still allowed for typing, but a three-dimensional interface was available.

Interfaces will continue to be designed not just for the benefit of the user, and not just for the computer, but for the system that the two create when the interface is being used. For me, typing text, the system of myself, keyboard, and computer is the most efficient for the type of work being done. Kinetics may be better studied in a synthetic three-dimensional environment, and starships may be controlled better with a system of customized keyboards. Every task requires a different system of interface, and that requires accommodations on both sides to work.
