Behavioral, autonomic, mechanical compared to Marr’s tri-level hypothesis

As I mentioned in my last post, my model for cybernetic systems bears a lot of resemblance to David Marr’s tri-level hypothesis, which he defines as computational, algorithmic, and implementational. I’ll quote from the site linked above:

The computational level is a description of what information processing problem is being solved by the system. The algorithmic level is a description of what steps are being carried out to solve the problem. The implementational level is a description of the physical characteristics of the information processing system. There is a one-to-many mapping from the computational level to the algorithmic level, and a one-to-many mapping from the algorithmic level to the implementational level. In other words, there is one computational description of a particular information processing problem, many different algorithms for solving that problem, and many different ways in which a particular algorithm can be physically implemented.

While this is conceptually similar to my idea, Marr is working in purely conceptual space here (though his model can be applied to physical systems as well). My taxonomy is closer to the way an animal works: a cognitive system, a mechanical system, and an autonomic system for carrying messages between the two. Of course, in animals (at least in humans), this is a strictly hierarchical system: the cognitive system can’t directly access the mechanical system, or else you could “think” electrical impulses directly to your muscles, for example! But in a technological system, there’s no theoretical reason you couldn’t bypass the autonomic layer entirely, though you wouldn’t want to very often, for the same reason you usually don’t let desktop software directly control the read/write heads on your hard drive.

I see no reason why the majority of low-level sensors and actuators can’t be abstracted and made object-oriented. Think of object classes in programming: you might have a class called Vehicle, with a set of methods and properties, and a subclass of Vehicle called Bicycle, with overriding methods and properties. Couldn’t you do the same thing with hardware control, starting with two classes, Sensor and Actuator, and building subclasses from there? A range finder, for example:

class rangeFinder extends Sensor {

    public min = 0;     // the minimum value the range finder will send
    public max = 10000; // the maximum value, which is actually expressed in milliseconds of latency

    public latencyToCentimeters() {
        return this.latency * 0.5; // or whatever the equation is for converting milliseconds to distance
    }
}
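And for completeness, here’s a rough sketch of what the base Sensor class itself might look like. (The port-binding constructor, the latency property, and the readPort stand-in are all my own guesses at the plumbing, nothing canonical.)

class Sensor {

    protected port: number;  // which physical input port this object is bound to
    protected latency = 0;   // the last raw reading, in milliseconds

    constructor(port: number) {
        this.port = port;
    }

    // Pull a fresh raw value from the autonomic layer and cache it.
    public poll() {
        this.latency = readPort(this.port);
        return this.latency;
    }
}

// Stand-in for the real autonomic-layer call; an actual system would go
// over the wire instead of returning a dummy value.
function readPort(port: number) {
    return 0;
}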

Then you could declare something like this:

var rangeThingy = new rangeFinder(1);

Which would tell your software that there’s an object of class Sensor, subclass rangeFinder, at input port 1. (You wouldn’t need to specify input vs. output, as that’s handled by the Sensor object code.)

So that’s the software abstraction layer…but the hardware still needs to be controlled somehow, right? That’s where your programmable autonomic firmware comes in. When you hook up your range finder, you specify the voltage and amperage that it requires, and upload those values to your firmware. As I mentioned in the last post, this could even be handled by a QR code or barcode on the sensor itself: you scan it with your computer’s webcam, and it connects to an open database, which returns machine-readable information:

{
    "type": "range_sensor",
    "manufacturer": "Acme, Inc.",
    "specs": {
        "voltage": 5,
        "amps": 0.5,
        "min_operational_temperature": -50,
        "max_operational_temperature": 150
    }
}
That would be in JSON format, obviously. So your autonomic firmware programmer receives this data and “knows” how to interface with this sensor at a mechanical level. Same with any other component: you could send the proper PWM signal to control a stepper motor (if I understand how stepper motors work, which is not at all certain), or know the maximum amperage you can run through a speaker, or what-have-you.
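To make that concrete, here’s a rough sketch of what the receiving side might do with that record once it arrives. (Everything here is hypothetical: the ComponentSpec shape just mirrors the JSON above, and configurePort stands in for whatever the real firmware upload looks like.)

// The shape of the database record above.
interface ComponentSpec {
    type: string;
    manufacturer: string;
    specs: {
        voltage: number;
        amps: number;
        min_operational_temperature: number;
        max_operational_temperature: number;
    };
}

function programPort(port: number, json: string) {
    const spec: ComponentSpec = JSON.parse(json);
    // Push this component's power requirements down to the autonomic board,
    // so it never over-volts the sensor or browns out the motor.
    configurePort(port, spec.specs.voltage, spec.specs.amps);
}

// Stand-in for the actual firmware upload; in practice this would go over
// serial, I2C, or whatever bus connects you to the autonomic board.
function configurePort(port: number, voltage: number, amps: number) {
    console.log("port " + port + ": " + voltage + "V @ " + amps + "A");
}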

At that point, it’s simply a matter of plugging all your components into your autonomic board and giving it the specs for each component (downloaded or manually entered, then uploaded to the board’s firmware), along with any reusable functions you’ve defined (“turnLeft” or “rotateElbow” for a robot, say). Then you hook up your cognitive or behavioral subsystem, which issues commands to the autonomic system.

How? Probably using something like the Open Sound Control protocol, which defines a very simple addressing scheme for accessing and sending values to subcomponents. So your software could do something like this:

// Ask the autonomic layer for the current (normalized) range reading...
var rangeVal = osc.retrieve("/robot1/sensors/rangeThingy");

// ...and if the nearest obstacle is more than halfway out, rotate the left elbow 45 degrees.
if (rangeVal > 0.5) {
    osc.transmit("/robot1/stepperMotors/leftElbow/rotate", "45");
}

Which would be translated by the autonomic layer into actual electrical signals. Of course, you could also chain these specific commands together into higher-level functions within your behavioral code, or even in your firmware (provided it had enough memory onboard, which is why you might want to use something like an SD card for storing this stuff).
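A reusable “turnLeft,” say, might just be a named bundle of those low-level OSC messages (the addresses and speed values here are as invented as the ones above):

// Assuming the same hypothetical osc object from the snippet above.
declare const osc: { retrieve(address: string): number; transmit(address: string, value: string): void };

function turnLeft(degrees: number) {
    // Slow the left side, speed up the right, and let the autonomic layer
    // turn each message into actual electrical signals.
    osc.transmit("/robot1/stepperMotors/leftWheel/speed", "0.2");
    osc.transmit("/robot1/stepperMotors/rightWheel/speed", "0.8");
    osc.transmit("/robot1/chassis/heading/adjust", String(-degrees));
}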

How would that code get from the behavioral level to the autonomic level? Doesn’t matter. I mean, it matters, but it could work any number of ways:

  1. The behavioral system is handled by a small computer like a Raspberry Pi, physically on board the device;
  2. The behavioral system is an actual programmed processor, also on the device;
  3. The behavioral system is on a very powerful computer, connected to the device by WiFi or cellular radio, or USB if distance isn’t an issue.

As long as your behavioral level is connected to your autonomic level somehow, the specifics don’t matter.
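Or, in code terms: the behavioral layer just needs some object it can push messages through, and the physical transport is an implementation detail (interface and class names here are invented):

// Same messages, different pipes; the behavioral code doesn't care which.
interface AutonomicLink {
    send(address: string, value: string): void;
}

class WifiLink implements AutonomicLink {
    send(address: string, value: string) {
        // ...push the packet over the network to the autonomic board...
    }
}

class UsbLink implements AutonomicLink {
    send(address: string, value: string) {
        // ...or write the same packet to a serial port instead...
    }
}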

So what happens when that connection is severed? If you’re smart, you’ve built fall-back low-level behavior and uploaded it to your autonomic system’s storage. Building a drone plane? If it loses its connection to the complex control system on the other end of its radio link, have it continue toward its LKG (last known good) destination coordinates, relying on its on-board GPS. Or if that’s too risky (say, if you’re worried about it running into mountains), have it fly in a circle until it reestablishes the connection, or have it try to land without damaging itself. Whatever. It’s up to you to figure out the specific fall-back behavior.
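As a sketch, the fall-back logic sitting in the autonomic layer’s storage could be as simple as this (the state checks and flight-control calls are all invented, obviously):

// Fall-back behavior for a drone that loses its radio link. Everything
// here is hypothetical; the real calls depend on your flight hardware.
function onTick(connected: boolean) {
    if (connected) {
        return; // the behavioral layer is driving; nothing to do
    }
    // Connection severed: fall back to whatever behavior you uploaded.
    if (terrainIsRisky()) {
        circleInPlace();       // loiter until the link comes back...
    } else {
        flyToLastKnownGood();  // ...or press on to LKG coordinates via GPS
    }
}

// Stand-ins for real flight-control routines.
function terrainIsRisky() { return true; }
function circleInPlace() {}
function flyToLastKnownGood() {}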

Roboticists are thinking: “Yes, but my machine is much more efficient than this. I don’t care about standardization!” Yes, your machine might be better and more efficient. But it’s also a standalone device. Think of old synthesizers in the pre-MIDI days: they’re hardwired, stuck doing the one thing you made them do. They can’t be easily upgraded by the end consumer, and they can’t be modularized. Your Yamaha DX-7, which was super-badass when you bought it in 1985, is now a curiosity. It’s not as good as other, newer digital synths. Nobody wants it…especially now that they can replicate its sounds exactly with software!

Same thing if you’re building a welding robot (to use an example from a buddy of mine). Your welding robot has all the articulation and parts to weld, but it’s not very smart. But if it’s interoperable and connected, you don’t have to worry about building the logic on-board! Your robot is an avatar for an intelligence that exists separately from the hardware. As people figure out better welding-robot routines and procedures, your robot can be updated! It can be made smart! And eventually, when people have figured out better hardware, it can be repurposed to do something else…in the same way that I can use a goofy early-90s hardware synthesizer as an excellent MIDI controller for my newer, better synth software.

I realize that a lot of people who work on this side of technology don’t think that way, but that’s their problem, not mine. I want to figure out a standard, universal way of connecting hardware to software, one that favors simplicity, reproducibility, and the ability to communicate over raw efficiency. I’m repulsed by proprietary systems, and if your business model is based on building things that can’t be upgraded but only replaced, not because they have to be, but because that’s where you’ve decided your revenue stream comes from, then man, fuck you and your awful business model. Sooner or later, people are going to get sick of your shit and find another vendor…especially when there are cheaper and more flexible alternatives.

(Okay, Ellis, breathe. No need to get shouty. Low blood sugar. Go eat something.)
