We're just beginning to build computers that function less like lockstep trains of binary bits flashing along the rigid roadways of silicon chips and more like the soup of simultaneous electrical sparks that light up our brains. This is, according to the chaps at IBM, the beginning of the age of cognitive computing, and Big Blue is concerned that our existing programming tools just aren't up to the task. So they built something that is: Corelet programming.
Instead of blindly following linear programs, IBM is certain that our next-gen computers will instead take a look at reams of data and think about it, reasoning meaning out of the mass and also from interactions with people. This form of cognitive computing is more akin to how human brains work, and will involve a radical rethink of every bit of computer tech from the silicon through storage to the human-computer interfaces. So the company's making devices that work like hardware models of the neurons, synapses, and signaling mechanisms of our brains. A whole new language is needed to make this sort of computer compute the tasks we would like it to contemplate.
IBM's engineer John Backus created FORTRAN, a critical early programming language, in what IBM calls the "programmable" era. For the cognitive computing future, the company says, its big idea is the "corelet" model.
A corelet is a pre-designed code building block that's compatible with cognitive computing. Where a piece of old-fashioned code in almost any language is designed, ultimately, to get a CPU to shuffle some numbers through the right kind of math, a "corelet" building block is designed to get the parts of a neurosynaptic processor core to do what you want: make synthetic neurons fire, store memory in synapse-like modes, and communicate with other neural nodes. But because thinking about computing on this scale would make programming in assembly code look like child's play, each corelet in IBM's model is a black box: All that's exposed to the programmer are the external inputs and outputs.
Stick enough corelets together, IBM suggests, and like stacking individual Lego bricks into a completed model, the corelet software can achieve tasks that are much bigger than the sum of its parts. Clumps of corelets could also be created that perform certain tasks, and these could then be aggregated to create a program that can perform some really complex tasks.
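IBM hasn't published its corelet toolchain in this article, but the two ideas above--a black box exposing only named inputs and outputs, and composition of black boxes into bigger black boxes--can be sketched in ordinary Python. Every name here (`Corelet`, `compose`, `fire`, and the toy "neurons") is a hypothetical illustration of the concept, not IBM's actual API:

```python
# Hypothetical sketch of the corelet idea. Each corelet hides its internal
# wiring and exposes only declared input and output names; composing a
# chain of corelets yields a bigger corelet with the same interface.

class Corelet:
    """A black box: callers see only the declared inputs and outputs."""
    def __init__(self, name, inputs, outputs, fn):
        self.name = name
        self.inputs = list(inputs)
        self.outputs = list(outputs)
        self._fn = fn  # internal behavior, deliberately hidden

    def fire(self, signals):
        # Accept only declared inputs, emit only declared outputs.
        args = {k: signals[k] for k in self.inputs}
        result = self._fn(**args)
        return {k: result[k] for k in self.outputs}

def compose(name, stages):
    """Chain corelets so each stage's outputs feed later stages' inputs,
    producing one larger corelet -- Lego bricks into a bigger model."""
    def run(**signals):
        for stage in stages:
            signals.update(stage.fire(signals))
        return signals
    return Corelet(name, stages[0].inputs, stages[-1].outputs, run)

# Two toy corelets: one thresholds a signal (a crude "neuron" that spikes),
# one counts the resulting spikes.
threshold = Corelet("threshold", ["x"], ["spike"],
                    lambda x: {"spike": [1 if v > 0.5 else 0 for v in x]})
counter = Corelet("counter", ["spike"], ["count"],
                  lambda spike: {"count": sum(spike)})

detector = compose("spike_detector", [threshold, counter])
print(detector.fire({"x": [0.2, 0.9, 0.7]})["count"])  # -> 2
```

The point of the sketch is the interface discipline: a programmer wiring `detector` into something larger never sees the thresholding math inside, only the `x` input and the `count` output, which is what makes aggregating clumps of corelets tractable.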
IBM thinks this sort of programming may be a relatively low-effort task and could even be used by non-experts to create some sophisticated cognitive computing applications. This is, of course, part of the dream of cognitive computing because it really has the potential to change folks' everyday lives: Asking your home (cognitive) PC to calculate a more efficient route to work is likely to be much easier using tools like this than it would be to try to code that math in traditional C, FORTRAN, Python, or whatever.
Corelet programs should, in theory, be much more efficient at solving problems like real-world image detection in real time--the sort of tech that would make future Google Glass-like headsets even more useful--or for diagnosing medical conditions from a plethora of medical data gathered about a particular person.
Heading up this effort by IBM's SyNAPSE team is Dharmendra S. Modha. Depending on the success of IBM's future computing efforts and the adoption of Corelet thinking, Modha's name may be one that programmers remember for quite some time.
Are you excited about the idea of programming a neuron-like computer, something that's on the cusp of delivering sci-fi-like computer power? Or is this just another smart "AI" language like Lisp?
[Chalk Outlines: Gunnar Pippel via Shutterstock]