|Past Meeting Archive||Los Angeles ACM home page||National ACM home page||Click here for More Activities this month|
|Check out the Southern California Tech Calendar|
Regular Meeting of the Los Angeles Chapter of ACM
Wednesday, April 2, 2003
"Organic Computing: Life without Software"
Christoph von der Malsburg
Our life is more and more dominated by software. Moore's Law lets computing power quickly approach that of nervous systems. Consequently, we expect functionality that approaches that of natural systems in terms of flexibility, robustness, autonomy and situation awareness. Reality lags considerably behind such expectations. There rightly is talk of a software crisis.
The cost of this crisis is estimated in a NIST study at $60 billion annually in the US. Proposals as to what to do abound, and they all point in one direction: human effort has to be directed at a more principled level, leaving more and more detail to the computer.
Living systems -- cells, organisms, brains and societies -- run without software in any concrete sense of the word. They are not deterministic and therefore cannot support the algorithmic division of labor between separate systems for the creation and the execution of algorithms. A deep cultural adaptation will be necessary to harness Life's organizing principles to solve the software crisis, in an interdisciplinary effort between computer science, neuroscience, molecular biology, physics and several other fields.
Lots of minor efforts are already under way to realize small systems based on the principles of evolution, development, goal-oriented self-organization, autonomous subsystem integration, learning, teaching and adaptation. Relevant fields are soft computing, artificial life, autonomous and social agents and several more.
In my own field of research, computer vision, the pendulum is rapidly swinging away from the algorithmic style of solving particular problems with particular algorithms towards organic methods of system organization. Without such methods, seeing machines will never emulate the visual abilities of animals. It will take a generation to create softwareless computing systems, but this is the only viable approach to constructing the large information-technological systems we expect to build in the future.
LA ACM Chapter April Meeting.
The presentation was: "Organic Computing: Life without Software"
This was a regular meeting of the Los Angeles Chapter of ACM. Our speaker was Christoph von der Malsburg.
Prof. von der Malsburg's presentation was a "Vision for the future of Computer Science." It was more an alert to a problem we will soon have to address, a call to arms, than a presentation of concrete results. His vision stems from what we have come to expect and/or desire of computers: more complex functions, robustness, flexibility, adaptivity, evolvability, autonomy, user friendliness, situation awareness; in short, we want systems to become intelligent.
This leads to more and more lines of software. The bad news is that the more lines of code (i.e., the more complex the software), the greater the discrepancy between the scheduled development time and the actual development time (actual is longer than scheduled). In addition, the more complex a software project, the greater the probability that the project will be canceled. For example, the FAA's air traffic control software system has been in development for 10 years and is in the process of being scrapped and restarted from scratch. A NIST study in 2002 estimated that the cost of such cancellations in the US is about $60 billion annually. This is probably an underestimate.
Worldwide, computer science departments don't take this seriously. They don't teach their students anything to prepare them for the difficulty of building large systems.
So let's see how Life handles this problem.
A single living cell is probably as complex as one of today's PCs, but it is flexible, robust, autonomous, adaptive, evolvable, and situation aware. An organism is more complex than all existing software. The human brain is intelligent, conscious, and creative. And it is the source of all algorithms!! The estimated computing power of a human brain is 10^15 OPS (operations per second), while today's PCs deliver only about 10^9 OPS. By Moore's Law, PCs will equal the OPS of the human brain in about 30 years.
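The arithmetic behind that 30-year figure can be checked with a quick back-of-envelope calculation, assuming the talk's figures of 10^15 and 10^9 OPS and the classic 18-month doubling period:

```python
import math

# Figures quoted in the talk (order-of-magnitude estimates).
brain_ops = 1e15   # estimated human brain, operations per second
pc_ops = 1e9       # a 2003-era PC, operations per second
doubling_period_years = 1.5  # classic Moore's Law doubling time

# How many doublings until a PC matches the brain, and how long that takes.
doublings = math.log2(brain_ops / pc_ops)   # about 20 doublings
years = doublings * doubling_period_years   # about 30 years

print(f"{doublings:.1f} doublings, roughly {years:.0f} years")
```

The factor of 10^6 between the two estimates works out to about 20 doublings, which at 18 months each gives the roughly 30 years quoted in the talk.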
Within the next 30 years we will have to come up with ways to deal with complexity. For example, Intel is worried about creating a chip that is 10 times as complex as today's chips (according to Moore's Law, only 5 years away). Testing such a chip exhaustively is impossible due to its combinatorial richness. They are thinking they will have to subdivide systems into smaller-scale subsystems.
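A small illustration of why exhaustive testing breaks down: the number of distinct input states grows exponentially with the number of state bits, so even a modest chip outruns any conceivable test rig. The figures below (64 state bits, a billion tests per second) are purely illustrative, not Intel's numbers:

```python
# Exponential growth of test states (illustrative figures only).
state_bits = 64                 # hypothetical internal state size
states = 2 ** state_bits        # distinct states to cover

tests_per_second = 1e9          # hypothetical very fast test rig
seconds_per_year = 3600 * 24 * 365
years = states / tests_per_second / seconds_per_year

print(f"{states:.3e} states, about {years:.0f} years to test exhaustively")
```

Even at a billion tests per second, covering 2^64 states takes centuries, and each additional state bit doubles that time.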
Compare Life: it is not digital, not deterministic, and not algorithmic.
Computing is evolving. Originally it was used for arithmetic, accounting, and differential equations. Now we want large systems with many coordinated subprocesses: managing communications, perceiving the outside world, and acting autonomously (although some dispute the desirability of the last).
We need a new computing paradigm: organisms are computers; computers should be organisms. For example, if you cut your finger, the organism does something about it and after a while your finger is healed. An organism takes up challenges and deals with situations in a goal-oriented way. That's how we would like our computers to be.
The science of understanding organization (organismic structure) doesn't exist. The disciplines that should be working on this problem (Computer Science, Molecular Biology, Neuroscience, Physics) have individual inhibitions that keep them from doing so. Computer Science is focused on algorithms. Biology is averse to conceptual discussions; biologists want facts only. They collect enormous amounts of factual data without discussing (in public) what these facts mean or where they lead. Only recently have articles in Nature begun to acknowledge the problem of understanding hundreds of thousands of molecular species talking to each other. Neuroscience is averse to formulating principles of organization, despite a century of studying the organization of the brain. Physics has a marvelous methodology applicable to dealing with the intricacies of organization, but as a field it doesn't apply it, although individual physicists working in biology do.
These fields have a lot to contribute to each other, but they are not talking to each other. For example, Life has solved the problem of spaghetti code: it invented structured programming 2 billion years ago. But Molecular Biologists don't know anything about it and could gain inspiration and insight from Computer Scientists. Prof. von der Malsburg is currently writing a grant proposal for Organic Computing at USC and has so far involved Molecular Biologists, Neuroscientists and Computer Scientists, but not yet any Physicists.
How big is the problem? Prof. von der Malsburg showed us a picture of a gene involved in sea urchin embryogenesis. Molecular Biologists have discovered that there are long stretches of DNA that control whether or not the gene will be transcribed. There are many binding sites for control, some digital and some analog. Humans have 30 to 40 thousand genes, each of which has hundreds of control points. It is impossible to figure out the whole thing in such detail without a model for the high-level structure.
But what if there is no high level structure? It seems there must be, otherwise Life couldn't manage the problem of spaghetti code.
For at least the next decade, organic computing will be implemented in VLSI, but once fault-tolerance and self-organization are developed, they will lay the foundation for massively parallel computers and for molecular computers, i.e., computers that are not fully deterministic. The point is not to change the physical basis for computing, but to change the organization, which would be on the software level at first. Only later would a physical implementation be possible.
IBM has put up an Autonomic Computing web site (www.ibm.com/research/autonomic) to ask for proposals, government and academic programs, collaborations with companies, etc. There is no real substance to the site, just a statement of the problem. They advocate forgetting about faster, cheaper, smaller, and the implications of Moore's Law. The big problem is complexity. The name derives from the body's ability to respond to a dangerous situation without conscious thought: the pupils contract, blood pressure rises, adrenalin levels rise, the whole body reacts to the challenge. This is all autonomic.
The current algorithmic division of labor calls for the computer to be deterministic, fast, and clueless, while humans provide all the creative infrastructure: goals, methods, interpretation of results, world knowledge, and diagnostics. This requires very detailed communication between the human and the machine, down to the level of the bit, for debugging. This division of labor is now strained to the breaking point. The systems are too complex to understand in this detail, even if you had access to all the source code. Individuals are now reduced to trying to debug a system by trial and error. Systems are so complex that a given situation may never repeat exactly.
The new model for the algorithmic division of labor has most of the creative infrastructure transferred to the machine, so that all the human provides is setting the goals. Some of this is happening already. At some point, as the system begins to modify itself dynamically in order to attain the goals set by the human, what is happening is no longer algorithmic (deterministic). This implies that we must find a way to represent goals in the machine.
To attain the goal of electronic organisms there is a huge "mountain" to overcome. Currently machines are algorithmic; they are programmed by people and must be simple, without infrastructure, in order to be programmed by people. On the other side of the mountain, computers are electronic organisms that can grow, learn, contain infrastructure, and must be complex. But let them be. We control our children and teach them, even though their brains are highly complex.
All of the relevant methodologies that are currently being developed (neural networks, fuzzy logic, genetic algorithms, artificial life, autonomous agents, amorphous computing, belief propagation) are at the fringes of their respective fields and consequently, the topic of understanding the organization of complex systems is without an academic home. A new academic field needs to be created that cares about all of these diverse disciplines.
The programming paradigm of today is inside-out; we write programs and hope that the outside behavior will be correct. We need to transition to the outside-in paradigm: tell the computer what we want and the computer "makes it so."
What are the application domains for organic computing? Not operating systems. One possible domain is artificial vision, and that is Prof. von der Malsburg's area of focus. This has a wide range of applications, for instance, smart elevator controls: if no one is waiting for the elevator, skip the floor; if someone is rushing toward the elevator, wait for them; etc. He has been told that if there were a real market, a system could be developed with a unit cost of $10 (eventually reduced to $1), but it requires a multi-billion dollar investment, which isn't going to happen unless there is a real market. This is a chicken-and-egg problem.
Other applications are autonomous robots (autonomous vehicles, toy robots, service robots), user interfaces, natural language understanding, and computer security. While current computer security is already being handled in the computer by algorithms (screen pops up telling you to download the latest updates), the real need is for situational security: who initiated the action; is this legal; is this a normal activity; etc. This cannot be an algorithmic activity; it must be managed by goals, and the means of meeting the goals must be determined by the machine itself dynamically.
There has been a lot of work on mapping the visual sections of the monkey brain. Vision is the result of an enormous collaboration between subsystems: motion detection, edge detection, stereo depth detection, shape analysis from gray-level data, etc. None of these subsystems has been perfected as an individual problem. It is now recognized that you need dozens of these subsystems working together and negotiating a solution. This is very complex, and the only solution is to come up with underlying general principles for organizing this information. It cannot be recreated in terms of algorithms.
Prof. von der Malsburg then showed the results of one of his PhD students on recognizing faces. The student used 5 different subsystems: motion detection, color analysis (skin color), shape pattern (a template for the head), a contrast cue, and prediction of the next position based on past positions. The model developed from his own face was then applied to 25 still photographs. Initially, there were some failures, which were temporarily thrown out. The resulting subset of photos was then used to "train" the system, and the initial failures were retested with some success. This final "trained" system was then used to analyze photos from Kodak. He showed a number of examples in which the model succeeded in identifying heads. There were a number of false negatives (heads that were missed). He said there were also false positives (heads identified which were, in fact, not heads), but he had no examples. He explained that the several subsystems cooperated. For example, skin color depends on lighting, and the color analysis subsystem will adapt itself if input from the other modules indicates that the real location of the head is other than where the subsystem by itself would choose.
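The cooperation between cues can be sketched in miniature. The following is an illustrative sketch, not the student's actual system: several weak cues each propose a head position, a weighted average forms a consensus, and cues that disagree with the consensus are down-weighted, mimicking how the color module defers to the other modules. All function names, cue values, and rates here are hypothetical.

```python
# Hypothetical sketch of multi-cue fusion with adaptive weights.

def fuse_cues(estimates, weights):
    """Weighted average of per-cue (x, y) position estimates."""
    total = sum(weights.values())
    x = sum(weights[c] * estimates[c][0] for c in estimates) / total
    y = sum(weights[c] * estimates[c][1] for c in estimates) / total
    return x, y

def adapt_weights(estimates, weights, consensus, rate=0.1):
    """Down-weight cues in proportion to their disagreement with the consensus."""
    cx, cy = consensus
    for cue, (x, y) in estimates.items():
        error = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
        weights[cue] = max(0.05, weights[cue] - rate * error / 100)
    return weights

# Five cues vote on a head position; "contrast" is an outlier.
estimates = {"motion": (120, 80), "skin_color": (118, 82),
             "shape": (125, 78), "contrast": (90, 60),
             "prediction": (121, 81)}
weights = {cue: 1.0 for cue in estimates}

pos = fuse_cues(estimates, weights)              # consensus position
weights = adapt_weights(estimates, weights, pos) # outlier cue loses influence
print(pos, weights)
```

After one adaptation step the outlying contrast cue carries less weight than the agreeing cues, so the next fused estimate moves toward the majority: the same qualitative behavior as the color module adapting to the other modules' verdict.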
Lastly, he talked about the ability to learn from single events. The process is not currently understood and has not yet been reproduced in machines. The main unsolved problem is to know how the brain picks out significant patterns from the environment. In babies, the one cue that is important to them is motion with respect to a static background.
Prof. von der Malsburg's goal is to achieve an artificial vision system that can be taught by lay people (not by programming, but by showing and telling) to recognize objects, situations, and conditions in the environment; a cottage industry in which users teach a visual system to learn from the environment. Humans take about 10 to 13 years to acquire a full complement of visual experience, so there is a big job to be done.
The presentation was followed by a lively question and answer session.
Among the clarifications that arose during this Q&A was that, in the end, we will still use algorithms to program the computer, but they must become more and more fundamental and able to adapt to the input data. These fundamental algorithms must then work together to achieve the goals set by humans.
This was the eighth meeting of the LA Chapter year and was attended by about 20 persons.
And coming on Wednesday, May 7 . . . Join us for an exciting talk about Patents and Software by Ariel Rogson, an Attorney specializing in Patent Law. Find out what it takes to get a software patent.
The Los Angeles Chapter normally meets the first Wednesday of each month at the Ramada Hotel, 6333 Bristol Parkway, Culver City. The program begins at 8 PM. From the San Diego Freeway (405) take the Sepulveda/Centinela exit southbound or the Slauson/Sepulveda exit northbound.
5:15 p.m. Business Meeting
6:30 p.m. Cocktails/Social
7:00 p.m. Dinner
The menu choices are listed in the table above.
Avoid a $3 surcharge!!
Reservations must be made by the Sunday preceding the meeting to avoid the surcharge.
Make your reservations early.
8:00 p.m. Presentation
To make a reservation, call or e-mail John Halbur, (310) 333-5635, and indicate your choice of entree, by Sunday before the dinner meeting.
There is no charge or reservation required to attend the presentation at 8:00 p.m. Parking is FREE!
For membership information, contact Mike Walsh, (818) 785-5056.
Other Affiliated groups
Please visit our website for meeting dates, and news of upcoming events.
For further details contact the SIGPHONE at (310) 288-1148 or at Los_Angeles_Chapter@siggraph.org, or www.siggraph.org/chapters/los_angeles
Last revision: 2003 0430 [Webmaster]