Refining the ideas.

I’ve been thinking about my little scheme, and I’ve talked to a couple of people who know far more about electronics than I do. I’m a software person, essentially, and so I think in those terms.

Basically, one of the problems people have had with my idea is that every actuator requires different circuitry, because each one has different power requirements, etc. So one motor might need one set of resistors and capacitors to function correctly, but another one has a totally different set.

Which is fine, I totally get that. But that’s all part of the physical level of my architecture. It doesn’t matter how power gets to the motor, so long as it can be regulated by a low-voltage control circuit (i.e. my autonomic level).

According to my lazy Googling and some Twitter conversations, there are really only a few ways for a low-voltage system to control a high-voltage actuator: relays, transistors, MOSFETs, right? As long as my autonomic layer has interfaces for all of the usual methods, it doesn’t matter what’s on the other end of that interface.

For example: imagine I have a 50V electric motor in my system. It’s switched by a 5V reed relay, attached to my control board by a standard connector. If I swap the 50V motor out for a 75V one, it doesn’t matter to my control circuit at all, because it’s still just triggering that relay. So long as the relay’s contacts can handle a 75V load, it’s all fine. It doesn’t matter how the 50V and the 75V motors each get power from their power supply — whether they’re connected directly or have more sophisticated power-management subsystems — so long as they can be controlled by a binary EM switch or a system for varying power input.

Think of it like so:

[Diagram: the physical subsystems (the green and red boxes), each with its own power circuitry, connected to the low-voltage autonomic board through standard interfaces.]

Each of those physical subsystems (the green and red boxes) can be as complex or as simple as you like, so long as they can be interfaced with my low-voltage autonomic board.
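
To put that in software terms (because, again, I’m a software person): here’s a minimal TypeScript sketch of the kind of interface I mean. Every name in it (PowerChannel, RelayChannel, MosfetPwmChannel) is something I’m making up for illustration; the point is just that the autonomic code only ever talks to the interface, and the physical layer can hide a relay, a transistor, or a MOSFET behind it.

// A hypothetical, minimal interface. The autonomic layer only ever calls set();
// whether the other side is a relay, a transistor, or a MOSFET driving a 50V
// or 75V motor is strictly the physical layer's problem.
interface PowerChannel {
  set(level: number): void; // 0 = off, 1 = full power, anything in between = variable drive
}

// One possible implementation: a simple on/off relay (a binary switch).
class RelayChannel implements PowerChannel {
  constructor(private pin: number) {}
  set(level: number): void {
    const on = level >= 0.5; // a relay can only be open or closed
    console.log(`pin ${this.pin}: relay ${on ? "closed" : "open"}`);
  }
}

// Another: a MOSFET driven with PWM, for actuators that accept variable power.
class MosfetPwmChannel implements PowerChannel {
  constructor(private pin: number) {}
  set(level: number): void {
    const duty = Math.min(1, Math.max(0, level)); // clamp to 0..1
    console.log(`pin ${this.pin}: PWM duty cycle ${(duty * 100).toFixed(0)}%`);
  }
}

// The autonomic code doesn't know or care which one it got.
const bigMotor: PowerChannel = new RelayChannel(3);
bigMotor.set(1); // "full power", however that happens physically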

So the real project here — the actual tasks I need to accomplish, as opposed to all my blue-sky philosophical bullshit — is this:

  1. Create a standardized way of interfacing the behavioral and autonomic layers. My guess is that this will mean making a way of sending and receiving OSC messages between the two.
  2. Create a set of methods for abstracting those messages within the autonomic layer and turning them into actual hardware control signals.
  3. Design a standard for normalizing hardware interface and control. I don’t think this has to be physical, necessarily, but more like a generalized way of driving different sorts of actuators and reading digital/analog input.

I think the easiest way to begin this is to write the code for the Arduino, because it’s the cheapest and most widely used hardware prototyping platform around. So my initial concrete goal:

  1. Write (or find) an OSC receiver for Arduino;
  2. Create a set of initial abstractions for components (brushed motors, stepper motors, servos, and analog and digital sensors) that can be mapped to OSC endpoints and methods, roughly sketched in code just after this list;
  3. Create a machine-readable way of describing those components to the hardware, that can be uploaded to firmware depending upon configuration.
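
To make item 2 a little less hand-wavy, here’s a rough sketch of the kind of abstraction I have in mind, written in TypeScript just to think it through (the real thing would be Arduino C++). The addresses and names are all invented.

// A hypothetical registry mapping OSC-style addresses to component handlers.
// This is a host-side thought experiment, not actual Arduino firmware.
type OscHandler = (value: number) => void;

class ComponentRegistry {
  private handlers = new Map<string, OscHandler>();

  register(address: string, handler: OscHandler): void {
    this.handlers.set(address, handler);
  }

  // Hand an incoming OSC-style message to whatever component owns the address.
  dispatch(address: string, value: number): void {
    const handler = this.handlers.get(address);
    if (handler) handler(value);
    else console.warn(`no component registered at ${address}`);
  }
}

const registry = new ComponentRegistry();
registry.register("/servos/1/angle", (deg) => console.log(`servo 1 -> ${deg} degrees`));
registry.register("/motors/left/speed", (s) => console.log(`left motor -> speed ${s}`));

registry.dispatch("/servos/1/angle", 45);     // on real hardware this would become a PWM signal
registry.dispatch("/motors/left/speed", 0.5); // normalized speed, 0 to 1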

Once I can get a proof-of-concept working for Arduino, I’ll open-source it to anybody who wants to port it to any other control system.

So, not a heavy amount of work or anything. 😉


Behavioral, autonomic, mechanical compared to Marr’s tri-level hypothesis

As I mentioned in my last post, my model for cybernetic systems bears a lot of resemblance to David Marr’s tri-level hypothesis, which he defines as computational, algorithmic, and implementational. I’ll quote from the site linked above:

The computational level is a description of what information processing problem is being solved by the system. The algorithmic level is a description of what steps are being carried out to solve the problem. The implementational level is a description of the physical characteristics of the information processing system. There is a one-to-many mapping from the computational level to the algorithmic level, and a one-to-many mapping from the algorithmic level to the implementational level. In other words, there is one computational description of a particular information processing problem, many different algorithms for solving that problem, and many different ways in which a particular algorithm can be physically implemented.

While this is conceptually similar to my idea, Marr is working in purely conceptual space here (though his model can be applied to physical systems as well). My taxonomy is closer to the way an animal works: a cognitive system, a mechanical system, and an autonomic system for carrying messages between the two. Of course, in animals (at least in humans), this is a strictly hierarchical system: the cognitive system can’t directly access the mechanical system, or else you could “think” electrical impulses directly to your muscles, for example! But in a technological system, there’s no reason you couldn’t, in theory, bypass the autonomic layer entirely, though you wouldn’t want to very often, for the same reason you usually don’t let desktop software directly control the read/write heads on your hard drive.

I see no reason why the majority of low-level sensors and actuators can’t be abstracted and made object-oriented. For example, think of object classes in programming. You might have a class called Vehicle, with a set of methods and properties, and a subclass of Vehicle called Bicycle, with overriding methods and properties. Couldn’t you do the same thing with hardware control, starting with two classes: Sensor and Actuator? Then you could build sub-classes. A range finder, for example:

class rangeFinder extends Sensor {

  public min = 0;     // the minimum value the range finder will send
  public max = 10000; // the maximum value, which is actually expressed in milliseconds of latency

  public latency = 0; // the last raw echo time reported by the hardware, in milliseconds

  public latencyToCentimeters() {
    return this.latency * 0.5; // or whatever the equation is for converting milliseconds to distance
  }

}

For example. Then you could declare something like this:
var rangeThingie = new rangeFinder(1);

Which would tell your software that there’s an object of class Sensor, subclass rangeFinder, at input port 1. (You wouldn’t need to specify input vs. output, as that’s handled by our Sensor object code.)
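
Underneath all of that, the two root classes would be about as dumb as classes can be. Something like this, maybe, though the port-claiming details are pure guesswork on my part:

class Sensor {
  constructor(public port: number) {
    // in real firmware, this would claim the given input port and start listening on it
  }
}

class Actuator {
  constructor(public port: number) {
    // same idea, but for an output port
  }
}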

So that’s the software abstraction layer…but the hardware still needs to be controlled somehow, right? That’s where your programmable autonomic firmware comes in. When you hook up your range finder, you specify the voltage and amperage that it requires, and upload those values to your firmware. As I mentioned in the last post, this could even be handled by a QR code or barcode on the sensor itself: you scan it with your computer’s webcam, it connects to an open database, and the database returns machine-readable information like this:

{
    "type": "range_sensor",
    "manufacturer": "Acme, Inc.",
    "specs": {
        "voltage": 5,
        "amps": 0.5,
        "min_operational_temperature": -50,
        "max_operational_temperature": 150
    }
}
That would be in JSON format, obviously. So your autonomic firmware programmer receives this data and “knows” how to interface with this sensor at a mechanical level. Same with any other component: you could send the proper PWM to control a stepper motor (if I understand how stepper motors work, which is not at all certain) or know the maximum amperage you could run through a speaker, or what-have-you.
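
Just to sketch what the firmware programmer might do with that blob (every name here is made up, and the “upload” is faked with a log statement):

// Hypothetical shape of the spec returned by the open database.
interface ComponentSpec {
  type: string;
  manufacturer: string;
  specs: {
    voltage: number;
    amps: number;
    min_operational_temperature: number;
    max_operational_temperature: number;
  };
}

// "Upload" the spec to a given port on the autonomic board. In real life this
// would be written into firmware; here it just records what the port is allowed to do.
function configurePort(port: number, spec: ComponentSpec): void {
  console.log(
    `port ${port}: ${spec.type} (${spec.manufacturer}), ` +
      `drive at ${spec.specs.voltage}V, never exceed ${spec.specs.amps}A`
  );
}

const rangeSensorSpec: ComponentSpec = {
  type: "range_sensor",
  manufacturer: "Acme, Inc.",
  specs: { voltage: 5, amps: 0.5, min_operational_temperature: -50, max_operational_temperature: 150 },
};

configurePort(1, rangeSensorSpec);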

At that point, it’s simply a matter of plugging all your components into your autonomic board, giving it specs for each component (by downloading or manually entering them and then uploading that info to the firmware on it), along with any reusable functions you’ve defined (like “turnLeft” or “rotateElbow” for robots, as an example) and hooking up your cognitive or behavioral subsystem, which issues commands to the autonomic system.

How? Probably using something like the Open Sound Control protocol, which defines a very simple addressing scheme for accessing and sending values to subcomponents. So your software could do something like this:

var rangeVal = osc.retrieve("/robot1/sensors/rangeThingie/");

if (rangeVal > 0.5) {
    osc.transmit("/robot1/stepperMotors/leftElbow/rotate", "45");
}

Which would be translated by the autonomic layer into actual electrical signals. Of course, you could also chain together these specific commands into higher level functions within your behavioral code, or even in your firmware (provided it had enough memory onboard, which is why you might want to use something like an SD card for storing this stuff).
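
Chaining might look something like this; it’s only a sketch, and the osc object here is just a stand-in so the snippet runs, not a real library:

// A hypothetical higher-level behavior built out of the low-level OSC-ish calls.
const osc = {
  retrieve: (_address: string): number => Math.random(),                       // pretend sensor read
  transmit: (address: string, value: string): void => console.log(`${address} <- ${value}`),
};

function backAwayFromObstacle(robot: string): void {
  const range = osc.retrieve(`/${robot}/sensors/rangeThingie/`);
  if (range < 0.2) {
    // too close to something: reverse both wheel motors for a moment
    osc.transmit(`/${robot}/motors/leftWheel/speed`, "-0.5");
    osc.transmit(`/${robot}/motors/rightWheel/speed`, "-0.5");
  }
}

backAwayFromObstacle("robot1");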

How would that code get from the behavioral level to the autonomic level? Doesn’t matter. I mean, it matters, but it could be any number of ways:

  1. The behavioral system is handled by a small computer like a Raspberry Pi, physically on-board the device;
  2. The behavioral system is an actual programmed processor, also on the device;
  3. The behavioral system is on a very powerful computer, connected to the device by WiFi or cellular radio, or USB if distance isn’t an issue.

As long as your behavioral level is connected to your autonomic level somehow, the specifics don’t matter.

So what happens when that connection is severed? If you’re smart, you’ve built fall-back low-level behavior and uploaded it to your autonomic system’s storage. Building a drone plane? If it loses its connectivity to the complex control system on the other end of its radio connection, have it continue towards LKG (last known good) destination coordinates, relying on its on-board GPS. Or if that’s too risky (say, if you’re worried about it running into mountains), have it fly in a circle until it reestablishes connection, or have it try to land without damaging itself. Whatever. It’s up to you to figure out the specific fall-back behavior.
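
The fall-back logic itself doesn’t have to be anything fancy. Here’s a sketch; the timeout and the behavior are whatever you decide they should be, and the names are made up:

// Hypothetical watchdog: if the behavioral link goes quiet, fall back to stored behavior.
const LINK_TIMEOUT_MS = 2000; // entirely arbitrary; pick what makes sense for your device
let lastCommandAt = Date.now();

// Call this from wherever messages from the behavioral level arrive.
function onBehavioralCommand(): void {
  lastCommandAt = Date.now(); // any message from the behavioral level resets the clock
}

function watchdogTick(): void {
  if (Date.now() - lastCommandAt > LINK_TIMEOUT_MS) {
    // connection presumed lost: run whatever fall-back you stored on board
    // (head for last-known-good coordinates, loiter in a circle, land, whatever)
    console.log("link lost: switching to stored fall-back behavior");
  }
}

setInterval(watchdogTick, 250); // check four times a second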

Roboticists are thinking “Yes, but my machine is much more efficient than this. I don’t care about standardization!” Yes, your machine might be better and more efficient. But it’s also a standalone device. Think of old synthesizers, in the pre-MIDI days; they’re hardwired, stuck doing the one thing you made them do. They can’t be easily upgraded by the end consumer, they can’t be modularized. Your Yamaha DX-7, which was super-badass when you bought it in 1985, is now a curiosity. It’s not as good as other, newer digital synths. Nobody wants it…especially when they can replicate its sounds exactly with software now!

Same thing if you’re building a welding robot (to use an example from a buddy of mine). Your welding robot has all the articulation and parts to weld, but it’s not very smart. But if it’s interoperable and connected, you don’t have to worry about building the logic on-board! Your robot is an avatar for an intelligence that exists separately from the hardware. As people figure out how to make better welding robot routines and procedures, your robot can be updated! It can be made smart! And eventually, when people have figured out better hardware, it can be repurposed to do something else…in the same way that I can use a goofy early 90s hardware synthesizer as an excellent MIDI controller for my newer, better synth software.

I realize that a lot of people who work in this side of technology don’t think that way, but that’s their problem, not mine. I want to figure out a way to make a standard, universal way of connecting hardware to software, one that focuses on simplicity and reproducibility and communication ability over efficiency. I’m repulsed by proprietary systems, and if your business model is based on building things that can’t be upgraded but only replaced — not because they have to be, but because that’s where you’ve decided your revenue stream comes from — then man, fuck you and your awful business model. Sooner or later, people are going to get sick of your shit and find another vendor…especially when there are cheaper and more flexible alternatives.

(Okay, Ellis, breathe. No need to get shouty. Low blood sugar. Go eat something.)


Behavioral, autonomic, mechanical: a model for building badass robots

[Update: since I started writing this, a Twitter friend helpfully pointed me at Marr’s levels of analysis, which upon quick study appears to be pretty much identical to this idea, so I’ll be framing this in his terminology at some point.]

This rides on the tail of the previous post. I’m just trying to get this sorted, so bear with me.

A cybernetic system consists of inputs, outputs, and logic to connect them via feedback — if this, then that. This is true for Web servers and 747 autopilots alike.

It occurs to me that you can broadly organize a cybernetic system into three levels of interaction: behavioral, autonomic and mechanical. So let’s look at these in reverse order, from the bottom up.

  1. Mechanical: this is the lowest level, below which it’s impossible to alter component behavior without external intervention. Think of, for example, a motor. A motor turns, in one direction or another. You can’t make it do anything else without actually going in and fucking with its physical properties. Same with a photovoltaic sensor, or a human muscle, which can only contract when sent an electric/chemical signal.
  2. Autonomic: This is the next level up, in which you can connect up inputs and outputs to perform basic logic without need for complex modeling. Imagine a robot with a touch sensor and a motor. You can program the robot to reverse the direction of the motor when the touch sensor is triggered. Or in a biological model, think of your heartbeat. It requires no thought, no interaction: it just beats. You can also think of the BIOS of a computer: it handles the simple, low-level switching of signals between a CPU, RAM, a hard drive, etc.
  3. Behavioral: This is when you hook a bunch of inputs and outputs together and create complex behavior based on their interaction. In computers, this would be the software level of things.

To give a concrete example of this, think of a Belkin WeMo switch. This is a network-enabled power switch. It has a simple WiFi receiver in it and a relay that can turn power on and off to an electrical socket.

The mechanical level of the WeMo is the power socket switch itself. It does one thing: flip a relay. It doesn’t “know” anything else at all, doesn’t do anything else.

But the WiFi adds the autonomic level: there’s basic logic in the WeMo that when it receives a specific signal over WiFi, it flips that relay. That’s all it does (aside from the ability to connect to a WiFi network in the first place). Slightly more complex than the switch itself, but still not complex at all.

But then there’s the behavioral level of the system. Belkin makes a mobile app for your phone that lets you turn on the switch from wherever you are. In this case, the behavioral level is provided by your own brain: you can turn the light on or off based on a complex system of feedback inside your skull, which weighs a varied set of inputs, conditions and variables to decide “Do I want this light on or off?” It might be overcast outside, or it might be nighttime, and you might want to turn it on; it might be daylight, and you want to turn it off; or it might be dark but you’re not home and don’t want to waste electricity. Whatever.

But here’s where it gets interesting: you can use IFTTT to create a “channel” for your WeMo, which can be connected up to any other IFTTT channel, allowing for complex interaction without human intervention. For example, I have the WeMo in my living room set to turn on and off based upon Yahoo Weather’s API; it turns off when the API says the sun has risen, and turns on when it says the sun has set.

This is different than a light controlled by a photovoltaic switch, which is an example of autonomic behavior. The PV switch doesn’t “know” if the sun has gone down, or if someone is standing in front of it, casting a shadow; all it knows is that its sensor has been blocked, which turns off the light. While this is somewhat useful, it’s not nearly as useful as a system with a behavioral level.

Make sense?

Okay, so let’s get back to robots, which was what I was going on about in the last post. A robot is a cybernetic system, and so it has these three potential levels: behavioral, autonomic and mechanical. In the case of a robot, it looks like this:

1) Mechanical: motors, sensors. A Roomba, to use the example from the last post, has three motors (left wheel, right wheel, vacuum) and a set of touch sensors. All these can do is either receive or send electrical current: when a touch sensor is touched, it sends an electrical signal (or stops sending it, whatever, doesn’t matter). A motor receives current in one direction, it turns one way; send it the other direction, it turns the other way.

2) Autonomic: In our Roomba, this is the hardware logic (probably in a microprocessor) that figures out what to do with the change in current from the touch sensor, and how much current to send to each motor. For example, if the motor is a 100 amp motor and you send 1000 amps through it, you can literally melt it, so make sure it only gets 100 amps no matter what. Very straightforward.

3) Behavioral: in our Roomba, this is deceptively simple: roll around a room randomly until you’ve covered all of it, and then stop. In actuality, this requires a pretty serious amount of computation, based upon interaction with the autonomic level: a sensor has been tripped, a motor has been turned on. I don’t know the precise behavioral modeling in a Roomba, but I suspect it’s conceptually similar to something like Craig Reynolds’s boids algorithm: move around until you hit a barrier, figure out where that barrier is (based upon something like the number of revolutions of the motor), move away from it until you hit another one, etc.

In a Roomba — and indeed, in most robots — the autonomic and behavioral levels are hard-coded and stored within the robot itself. A Roomba can’t follow any instructions, save the ones that are hardcoded into the firmware in its processor.

Fine. But what if we thought about this in another way?

Let’s remove the Roomba’s behavioral subsystem entirely. Let’s replace it with a black box that takes wireless signals from a WiFi or cellular network; doesn’t matter which. This black box receives these signals and converts them to signals the autonomic subsystem can understand: turn this motor this fast for this long, turn that motor off. And let’s even add some simple autonomic functions: if no signals have been received for X milliseconds, switch to standby mode.

Our Roomba is suddenly much more interesting. Let’s imagine a Roomba “channel” on IFTTT. If I send a Tweet to an account I’ve set up for my Roomba, I can turn it on and off remotely. Cool, but not that cool, right?

But what if we add the following behavior: let’s make our Roomba play Marco Polo. Let’s give it a basic GPS unit, so it can tell us where it is. Then let’s give it the following instructions:

1) Here’s a set of GPS coordinates, defined by two values. Compare them to your own GPS coordinates.

2) Roll around for a minute in different directions, until you can figure out which direction decreases the difference between these two coordinate points.

3) Roll in that direction.

4) When you encounter an obstacle, try rolling away from it, generally in the direction you know will decrease the difference between your coordinate and your target coordinate. If you have to roll in another direction, fine, but keep bumping into things until you’ve found a route that decreases the difference rather than increasing it.

This is a very simple and relatively easy set of instructions to implement. And when we do so, we’ve got a Roomba that will come and find us, bouncing around by trial and error until it does so. It might take thirty seconds, it might take hours, but the Roomba will eventually find us.
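
Stripped of all hardware detail, the whole behavior is basically one loop. Here’s a sketch, with a deliberately crude notion of “distance” and every name invented:

// Hypothetical seek loop: wander, and keep whatever heading shrinks the distance to the target.
interface Coordinates { lat: number; lon: number; }

// Crude straight-line difference: good enough for comparing headings, not for real navigation.
function distance(a: Coordinates, b: Coordinates): number {
  return Math.hypot(a.lat - b.lat, a.lon - b.lon);
}

function seek(readGps: () => Coordinates, target: Coordinates, drive: (heading: number) => void): void {
  let best = distance(readGps(), target);
  let heading = Math.random() * 360;

  const tick = setInterval(() => {
    const now = distance(readGps(), target);
    if (now < 0.0001) { clearInterval(tick); return; } // close enough: it found us
    if (now >= best) heading = Math.random() * 360;    // that direction didn't help (or we hit something): try another
    best = Math.min(best, now);
    drive(heading); // the autonomic level turns "heading" into actual wheel commands
  }, 1000);
}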

Now, if we equip the Roomba with more complex sensors like a range finder or a Leap Motion, this all becomes much more efficient: the Roomba can “scan” the room and determine the quickest, least obstacle-filled path. In fact, the Roomba itself, the hardware, doesn’t have to do this at all: it can send the data from its sensors over its wireless connection to a much more complicated computer which can calculate all of this stuff for it, much faster, and issue commands to it.

But what happens if that network connection breaks down? In this case, we can give the Roomba a very simple autonomic routine to follow: if no instructions are coming, either stop and wait until a connection is reestablished, or resort to the initial behavior: bump around trying to reduce the difference between your own GPS coordinate and the one you’ve got stored in your memory. Once a connection is reestablished, start listening to it instead.

If this sounds dumb, well, imagine this: you’re in an unfamiliar city. You’re relying on your car’s GPS to navigate from your hotel to your meeting. When you’re halfway there, your GPS stops working (for whatever reason). You know your meeting is at 270 34th Street, and you know that you’re at 1800 57th Street. (The numbered streets in this imaginary city run east-west.) So you know you need to go east for fifteen blocks or so, and north for twenty-three blocks. So you turn left and go north on Oak Street, but Oak Street dead-ends at 45th Street. So you turn right onto 45th until you find Elm Street, the next north-south street, and you turn left and continue to 34th Street, where you turn right and keep going until you reach the 200 block.

Do you see where I’m going with this? You’re doing exactly what our imaginary Roomba is doing: you’re “bumping” into obstacles while reducing the difference between your Cartesian coordinate and the coordinate of your destination. The difference is that you’re not literally bumping into things (at least hopefully), but if our Roomba has sophisticated enough range finders and such, neither is it.

But this is even more interesting, because we can break your behavior down into the same three levels.

  1. Behavioral: I want to go to 270 W. 34th Street. My brain is converting this idea into a set of complex behaviors that mainly involve turning a wheel with my arms and pushing pedals with my feet. And hopefully also paying attention to the environment around me.
  2. Autonomic: I think “I need to turn left”, and my brain automatically converts this to a series of actions: rotate my arms at such an angle, move my knee up and down at a certain speed and pressure. As Julian Jaynes points out in The Origin Of Consciousness In The Breakdown Of The Bicameral Mind, these are not conscious actions. If you actually sat down and thought about every physical action you needed to do to drive a car, you couldn’t get more than a block.
  3. Mechanical: Your limbic system sends electricity to your muscles, which do things.

Your muscles don’t need to “know” where you’re going, why you’re going there, or even how to drive a car. Your higher mental functions (I need to turn left, ooh, there’s a Starbucks, I could use some coffee, shit, I’m already late though) don’t deal with applying signals to your muscles. The autonomic systems are the go-between.

But then, something happens: a dumbass in an SUV whips out in front of you. At that point, your behavioral system suspends and your autonomic system kicks in: hit the brakes! You don’t have to consciously think about it, and if you did, you’d be dead. It just sort of happens. (There are actually lots of these direct-action triggers wired into human mental systems. Flinching is another example. It is almost impossible not to flinch if something comes into your vision from the periphery unexpectedly, moving very fast.)

So let’s turn this into a brilliant architecture for robotics. (He said, modestly and not confusingly at all.)

Our architecture consists of our three levels: behavioral, autonomic and mechanical. However, because we’re building modular robots and not monolithic people, what each of these actually means can be swapped out and changed. Again, let’s look at this from the bottom up.

1) Mechanical. This can be pretty much any set of sensors and actuators: a potentiometer, a button, a touch sensor, a photovoltaic sensor. Doesn’t matter. To an electron, a motor and a speaker look exactly the same. You can simply imagine this as a whole bunch of Molex connectors on a circuit board with a basic BIOS built in. What we hook into them is kind of irrelevant, as our autonomic system will handle this.

2) Autonomic. This is a combination of hardware and updatable firmware. Think of a reprogrammable microprocessor, perhaps with a small bit of RAM or SSD storage attached to it. The hardware simply interprets signals from the behavioral level and sends them to our mechanical level; the firmware handles the specific details. So let’s imagine we’ve plugged two motors and a heat sensor into our circuit board. We then tell the firmware how much voltage to send to the motors, and what range of voltage we expect from the heat sensor. It then normalizes these values by mapping them to a floating point number between 0 and 1. (This is just an example of how you could do this.)

So let’s say our heat sensor sends temperature in degrees Celsius, with a maximum of 200 and a minimum of -50. Our autonomic system converts that to a 0-1 range, where -50 maps to 0 and 200 maps to 1. Therefore, if the temperature is 125 degrees Celsius, it sends a value of 0.7. Make sense?

Same with the motors. If the motor’s maximum RPM is 2500 (and its minimum is obviously zero) and we send a message like “rotateMotor(0.5)” to our autonomic level, it “knows” to send the amount of current that will turn the motor at 1250 RPM. (This can get a bit more complicated, but for our purposes, this is a basic example.)

The point is, the actual physical operating ranges of our components don’t matter at all; that’s easily mappable to standardized value ranges by our autonomic system.
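
The mapping itself is two lines of arithmetic, which is exactly the point. A sketch:

// Map a raw hardware value into the standard 0..1 range, and map a 0..1 command back out.
function normalize(raw: number, min: number, max: number): number {
  return (raw - min) / (max - min);
}

function denormalize(value: number, min: number, max: number): number {
  return min + value * (max - min);
}

console.log(normalize(125, -50, 200));  // heat sensor reading 125 C in a -50..200 range -> 0.7
console.log(denormalize(0.5, 0, 2500)); // motor commanded at 0.5 in a 0..2500 RPM range -> 1250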

We can program the firmware based upon the mechanical stuff we have connected, so we can swap our components out at any time. We can also create simple programmable autonomic “behaviors”, which are preprogrammed instruction sets. One might be: if the heat sensor (which we’ve mounted at the front of our robot) gets above 0.7, turn both motors counterclockwise at amount 1 until the sensor’s value goes down to 0.45. This means that when our robot senses temperatures at 125º C or higher, it will run away until the temperature goes down to 62.5º C. This allows us to not worry about basic things like self-preservation. We can even make this behavior slightly more complex: for example, we can use motors that can send the amount of torque back to the autonomic level. If the torque is too high, the motor stops doing what it’s doing.

We can also create simple shortcuts, like “turn left” or “go forward by 500 feet”. These shortcuts can be translated by the autonomic level into hardware specific commands. For example, if we know our motor turns at 2500 RPM and we know that 5 revolutions will move it one foot, when our autonomic system receives the command “go forward by 500 feet”, it translates that into the command “turn on for 60000 milliseconds, or one minute”, which is sent to the motor.
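
Spelled out in code, with the same made-up motor numbers:

// Translate "go forward by N feet" into "run the motor for M milliseconds",
// using what the autonomic level knows about this particular motor.
const MOTOR_RPM = 2500;         // revolutions per minute
const REVOLUTIONS_PER_FOOT = 5; // how many revolutions move us one foot

function goForwardFeetToMilliseconds(feet: number): number {
  const revolutions = feet * REVOLUTIONS_PER_FOOT; // 500 ft -> 2500 revolutions
  const minutes = revolutions / MOTOR_RPM;         // 2500 revolutions at 2500 RPM -> 1 minute
  return minutes * 60 * 1000;                      // -> 60000 milliseconds
}

console.log(goForwardFeetToMilliseconds(500)); // 60000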

In other words, the autonomic level acts like our limbic system, freeing our robot’s “higher brain” from having to worry about any of the tedious hardware interfacing shit.

And again, this doesn’t have to be oriented towards robotics. We can make an autonomic level that sends electricity through a speaker at a certain frequency when a certain button is pushed, which becomes a very simple musical synthesizer. It’s all just input and output. Just current.

If we’ve done our job correctly, we can now move on to the behavioral level of our device.

3) Behavioral. The behavioral level, in hardware terms, is a black box: it can either be an onboard CPU (like a Raspberry Pi, for example) or a network connection, like in our imaginary Roomba. Doesn’t matter, as far as the rest of the system is concerned, as long as it sends commands that our autonomic system can understand. These can either be higher-level (“turn left”) or lower-level (“turn on motor #3 for eight milliseconds, pause for fifteen milliseconds, then turn on motor #5 for one hundred milliseconds, or until sensor #5 trips, in which case start the whole thing over again”). The logic for our behavioral system can be anything we like, provided we have a complex enough processor onboard or in the cloud. In fact, it doesn’t have to be either/or: we can build a behavioral center with half of its behaviors onboard and half in the cloud, or any variation thereof — like our Roomba, which stumbles around blindly until it’s given commands by the cloud. It depends upon the requirements of the tasks our device is made to carry out.

With a structure like this, we can easily build a simple “brain” for a robot that can essentially be connected to damn near any set of sensors and actuators and perform an infinite number of tasks, so long as the right sensors and actuators are connected to it. Such a robot could be anything from a simple Romotive-style consumer toy to a drone tank in a war zone to a telemedical surgical robot, performing neurosurgery while controlled by a doctor miles away. It doesn’t even have to be a robot, actually: it can be a synthesizer or a video game controller or an interface for driving the drone tank or performing the neurosurgery. I cannot stress this enough: it’s all just electricity, going to and from mechanical bits.

And herein lies the difficult part, which is not technical but organizational: this relies upon software and hardware standards, two things which the engineering industry seems simply incapable of deciding upon until forced at gunpoint. There is no standard way of connecting motors to sensors, no universal format for describing an actuator’s mechanical behavior (voltage, amperage, torque, maximum speed, operational temperature range, etc.). Nor is there any standard API or language protocol that can be implemented between the behavioral and autonomic layers. There are existing analogies in hardware/software interfacing: the first two that pop into mind are the USB Human Interface Device standard and MIDI, the Musical Instrument Digital Interface protocol which allows interoperability between synthesizers. (In point of fact, a number of non-musical devices like 3D motion capture systems incorporate MIDI as their input/output system, which is a square peg banged into a round hole, but which suggests that such a standard is probably about thirty years overdue.)

Think of your computer’s mouse, or your trackpad perhaps if you’re using a laptop. There are a few different methods of building a mouse: mechanical, optical, or in the case of touchpads, capacitive. A mouse can move around, or it can be stationary (as with a trackball). And when I was a kid, a mouse required a software driver that came on a floppy disk when you bought it.

But at some point, somebody realized that the actual mechanics of any given mouse were just completely goddamn irrelevant from a software perspective, because every mouse — no matter how it works — just sends back two pieces of information: X and Y movement. So the people who make mice figured out a standard, in which a mouse sends that movement data over USB in a standard way, which is called “class compliance”. How it converted motion into that data — whether it used two rollers or a laser — was handled at the autonomic level, by the tiny chip inside the mouse.

So now, when you buy a mouse, you plug it in and it works. Any mouse manufacturer who attempted to build a mouse that wasn’t USB class compliant would very quickly go out of business. It would be pointless. There are lots of wonderful improvements in mouse design, I guess, and probably entire conventions full of engineering nerds who get together and get drunk in hotels and talk animatedly about lasers versus capacitance. But nobody else gives a shit. We’ve sorted the irritating part out.

And yet, the people who make robots are still reinventing the wheel, every single time, despite the fact that no robot is anything more than a collection of sensors and actuators, held together in ways that are really fascinating if you’re a structural engineer and completely irrelevant if you’re just trying to write software that controls robots. It’s all just motors, even if there are lots and lots of them and they’re connected in extremely intricate ways. You’re just sending and receiving current.

Imagine a standardization scheme where you, the aspiring roboticist, could purchase a set of motors and sensors and bring them home. Each one might have a QR code printed on it or an RFID attached to it; you could scan the code, and your computer would retrieve all of the pertinent information about the mechanism. You could then plug it into your autonomic interface, tell your computer which mechanism was at which port, and your computer would then prepare the firmware and dump it into the system. You could then attach a CPU or network interface to the autonomic board, and within minutes your robot would be active and alive, behaving in any fashion you liked.

Commercially-sold robots — with perhaps complex and delicate assemblies that would be difficult for you to make at home — would have pre-existing complex autonomic systems, with software that allowed you to “train” them, or purchase downloadable “personalities”, which would simply be pre-existing behavioral methods. Tinkerers could modify and customize the behavior of their robots using standard APIs, which could even have safety limits set in place so that you couldn’t accidentally short your robot or blow out its motor, unless you were sophisticated enough to bypass the API (and the autonomic system) and control the mechanical bits directly.

If robot manufacturers adopted this model, we would begin to see a true Golden Age of robotics, I think. We would begin to see emergent complexity at a far faster speed than is currently displayed, because anybody could build and train robots, and link them together, and let them not only act but interact, learn from each other, and contribute to and benefit from collective knowledge and action.

Now, if only we could convince engineers to get their shit together.


The world is a robot.

This afternoon, I attended an excellent talk by Ken Goldberg about “cloud robotics” — the idea of building robots that are essentially taught and controlled by the Internet “cloud”. As Ken was talking, I had a moment of pure epiphany about cloud robotics and the “Internet of Things”. I realized that the underlying assumptions about how this should all work are completely wrong.

First, a bit of a summary: in the traditional model, a robot is an autonomous (or semi-autonomous) object. Its behavior is pre-programmed into it, and it’s set loose to do whatever it does: build automobile chassis, or roll around your house vacuuming until it’s gotten every bit of the floor. These sorts of robots are extremely limited, because they can only deal with whatever it is they’re programmed to do in the first place. Your Roomba is very good at vacuuming your rug, but if it encounters a Coke can on the rug, it doesn’t know what to do with it — it either ignores it or it runs away in a robot’s version of existential dread.

“Cloud” robotics refers to the idea of robotic systems in which the behavior modeling is offloaded onto the Internet. A perfect example is Google’s self-driving car, which is absolutely incapable of driving itself around the most sedate of suburban neighborhoods without a constant connection to Google’s servers, which are processing the data from the car’s sensors and comparing it to maps and predicted behavior and reporting back to the car, adjusting its actions accordingly. In this sense, the self-driving car isn’t self-driving at all. There is no direct human intervention, but in a very real sense, it’s Google’s servers that are behind the wheel.

There’s a lot of work being done in this area, to make robots smarter and in less need of human intervention. Ken talked about the idea, for example, of a robot that uploads pictures of an unfamiliar item to the cloud, which interprets the picture, deciphers what the object is, and returns instructions to the robot on how to deal with it. If algorithms break down, we can even foresee a future in which robots “call in” to humans, who “tell” the robot how best to proceed.

This is all well and good and presents a rosy future, but the fact is that at the moment this is all a fantasy. Right now, there’s no standard way for robots to communicate with the cloud, and even if there were, there’s no standard way for that communication to be translated into action. Every robot works differently, every robot design is unique; one would have to write an entire software stack to deal with each and every model of robot.

In fact, robots in 2013 are very much like musical synthesizers were, up until the late 1980s. This is a digression, but bear with me.

If I showed up on the doorstep of a forward-thinking musician in 1979 or so and asked them to define a synthesizer, they’d tell me that it was an electronic device that made sounds. Synths were boxes, with a “brain” which took signals from an input controller — usually, but not always, a piano-style keyboard — and turned them into audio signals that were sent to an output (usually a speaker or a mixing board for recording). Though the principles of synthesis were pretty much the same, every synth went about it in a different way: the “brain” of a Moog was totally different from that of a Buchla, for example, and in many cases they even handled the input from their keyboards totally differently. Everyone was, not so much reinventing the wheel, but inventing their own wheel.

It occurred to somebody in the late 1970s that it would be really useful if you could control multiple synths from the same keyboard, or even figure out a way to record a series of notes that could be “played back” on a synth live, to allow much more complicated music than could be performed by a single person with one or even two keyboards. But at the time, there was no real way to accomplish this, due to the sui generis nature of every synth.

A lot of goofy hacks and kludges were invented to solve this problem — including devices that sat atop the keyboards of different synths and physically pressed the notes, using solenoids — until a group of nerds invented something called MIDI, or Musical Instrument Digital Interface, in the early 1980s — a protocol for allowing synthesizers to communicate amongst one another that is still the de facto standard today.

The entire MIDI protocol is too complex to get into here, but the gist of it is that a MIDI-enabled device can send or receive basically three instructions: turn X note on on channel Y at Z volume; turn X note off on channel Y; and send X value on channel Y to controller Z. That, a bunch of wanky technical details aside, is basically it! And while MIDI has its very serious limitations, it’s the basis of at least 50% of the musical sounds you hear every single day — from the ravey keyboard lead in Lady Gaga’s “Bad Romance” to the Hammond organ sound on your favorite indie track to the deep beats on Kanye’s new jam.

Aside from the ability for one synth to talk to another, MIDI allowed something else entirely: the ability to separate the input device from the output device. To our musician in 1979, a “synth” was a monolithic physical object; but in the 1980s, you began to see synth “brains” without keyboards and keyboards without brains that could be connected using standard MIDI protocols (and cables). And as desktop computers became more powerful, a “synthesizer” could just as easily refer to a piece of software, controlled by an inexpensive MIDI keyboard controller that sends MIDI signals over USB, as a big box sitting on your desk. In fact, you don’t even need a human performer at all; one of my hobbies is writing little apps that algorithmically generate MIDI commands and send them to my software synths. (I’ve actually released two of the resulting tracks commercially, and they’ve sold surprisingly well.)

Ask a musician in 2013 what a “synth” is, and they’re not likely to describe a big physical box to you; they’re more likely to tell you it’s an app that runs on their laptop or their iPad.

The monolithic, in other words, has become modular.

By contrast, ask an engineer in 2013 what a “robot” is, and they’ll tell you it’s a machine that can be programmed to carry out physical tasks. A robot looks like Wall-E or Asimo: it’s a thing, a discrete physical object.

But this is both a simplification and an overcomplication. A robot can just as easily be defined as a collection of input and output devices, or, if you prefer, “sensors” and “actuators”, connected by a cybernetic controller. The sensors take in data from the world; the cybernetic controller interprets the data, and makes the output devices do things, upon which the whole cycle begins again, in a feedback loop.

For example: a Roomba is, when you get right down to it, a collection of touch sensors hooked up to three motors (one to turn each of the two wheels, and one to turn the fan that actually vacuums stuff up) via a “brain”. When a given touch sensor sends a strong enough signal (via running into a wall or a cat or an ottoman), the brain makes the wheels change direction or speed; for the most part, the vacuum fan isn’t involved in this process at all, but keeps happily chugging away until the Roomba is turned off.

The value of these sensors and actuators broadly follows Metcalfe’s Law: each by itself is essentially useless, but when connected together — along with something to sort out the data from the sensors and decide what commands to send to the actuators — they become far more valuable than the sum of their parts. They become a “robot”.

But here’s the thing: they’re still just parts. We call them a “robot” when they’re put into a chassis together, but that’s just limited imagination on our part.

Let’s try something else instead. Let’s take all those components out of that little round chassis and reconfigure them entirely. Let’s mount the touch sensors into a console and call them by a slightly different name: “buttons”. (Because that is, in fact, what they are.) Let’s put those motors into the wings of a very light aircraft, to control the flaps and ailerons that adjust the aircraft’s movement in the air. And instead of a hardware chip, let’s give them something more akin to a nervous system, that sends and receives signals — using radio, for example.

When you push the buttons, those signals are sent via radio to the motors, which — when combined together — move the airplane up and down and left to right. What you have now is a radio-controlled plane!

But let’s get more interesting. Let’s add a brain back in, but instead of that stupid simple chip, let’s do what the synth people did, and move it into software. After all, our laptop is a thousand times more powerful than the little microprocessor that used to be our Roomba’s tiny brain, right? And let’s swap out our touch sensors, our buttons, for another sensor: a GPS unit.

Now, we can use the infinite power of our laptop to take the simple signals from the GPS and translate them into simple instructions to our motors, which really can only go on and off. If the X coordinate of the GPS is too low, turn on the tail motor for two seconds (or if it’s a servomotor, by Z degrees). Once the X coordinate is right, turn it the other way.

Let’s make it more interesting. Let’s use Google Maps to get the precise GPS coordinates of an arbitrary address, and send that as a reference point for our two motors. (We’ve taken the fan motor and used it to turn the propeller on our plane, but it’s still stupid and only needs to turn on when we begin and turn off when we’re done.)

Now we can simply type a street address into our interface, sit back, and wait for our Roomba to get there. Only it’s not a Roomba anymore, is it? Is it even a robot at all? It’s the same collection of sensors and actuators (well, almost). It’s doing the same thing — taking input, processing it, and using that processed data to control output.

A “robot” is merely our convenient placeholder for an arbitrary collection of sensors and actuators. There’s a certain amount of anthropomorphism in that: a “robot” is a thing, like a “person”. But the difference is that each of the active parts of a robot — the sensors and actuators — can, in fact, be addressed and controlled individually. If that input and output is coordinated by a subtle and complex system — a “brain” — each simple input and output can become a remarkably advanced robot indeed…the same way a synthesizer becomes much more powerful and versatile and capable of producing amazing things when you stop thinking of it as a piano keyboard with a box attached to it.

But that convenient placeholder — “robot” — has become a trap. Robots in 2013 are like synths in 1975 — each one is sui generis, each manufacturer reinvents the wheel. Every model of robot has a different onboard operating system, a different way of connecting input to output, a different protocol. And yet, how many actual types of actuators even exist? Rotary motors, linear motors, solenoids, pistons…almost every actuator in every robot on Earth is based on a set of mechanical devices which were pretty well understood by the end of the Second World War. And all of them operate by the same rough principle: send X amount of current for Y amount of time. (Or send X amount of current at Y frequency for Z amount of time, if you’re talking about pulse width modulation-based components.)

Inputs? Slightly more complicated, but as we’ve seen with computer peripherals, it’s perfectly possible to standardize even the most complex of inputs, provided we’re willing to offload the processing to software. That’s why there are USB standard protocols for everything from computer mice to webcams to yes, even MIDI devices. Webcams may have different resolutions and color depths, but they’re still just sending an array of pixel data to software.

What if we stopped thinking of and designing robots as monolithic objects, and started thinking of them as useful collections of components? And designed simple protocols for sending their sensor and actuator data to their brain or nervous system and retrieving it back — protocols that could be standardized and given APIs and made replicable, as well as transmitting unique information about the robot when it connects? (USB-MIDI synths and controllers do this; when you connect one, it sends its model name and manufacturer to the MIDI-handling subsystem of the operating system. If you have a Mac, go to Applications->Utilities->Audio MIDI Setup and plug a cheap USB-MIDI controller in; you’ll see what I mean.)

Imagine a Bluetooth robot that, when paired with a computing device, sends an addressed list of its sensors and actuators to the client software, maybe like this:

  • 0 :: rotary motor :: "leftTread"
  • 1 :: rotary motor :: "rightTread"
  • 2 :: servomotor :: "robotArmElbow"

I’m just making that up off the top of my head, but you see what I mean. Or you could provide the developer with a list of endpoints; this is similar to the way that MIDI hardware synths come with manuals that show which controllers handle what, like “CC42: Filter Frequency”. (This lets you, the musician, know that if you assign the knob on your MIDI controller to CC42, when you turn it, it will adjust the filter frequency of your hardware synth.)
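
So the “hello” packet a robot announces when it pairs might deserialize into something like this (an invented format, obviously, but you get the idea):

// Hypothetical descriptor: one entry per addressable part of the robot.
interface EndpointDescriptor {
  address: number;                                // like a MIDI CC number: just a stable identifier
  kind: "rotary motor" | "servomotor" | "sensor"; // what sort of part lives at that address
  name: string;                                   // a human-friendly label for developers
}

const robotDescriptor: EndpointDescriptor[] = [
  { address: 0, kind: "rotary motor", name: "leftTread" },
  { address: 1, kind: "rotary motor", name: "rightTread" },
  { address: 2, kind: "servomotor", name: "robotArmElbow" },
];

// Client software can now address parts by name without knowing anything else about the hardware.
const elbow = robotDescriptor.find((e) => e.name === "robotArmElbow");
console.log(elbow?.address); // 2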

This would allow the creation of simple network protocols for interacting with sensors and actuators, in which the business logic is offloaded to the cloud or a controller device. For example, imagine this bit of pseudocode:

while(robot1/pressureSensor < 20){
        robot1/leftTread.rotate(20);
}
It doesn’t matter what the actual value range sent by robot1/pressureSensor is, in this simple example, so long as the cloud “knows” the proper range; it could be 0 to 1 or 0 to 255 or 0 to the distance from the Earth to the moon in micrometers. The same with the tread motor, or the servo, or the solenoid. It doesn’t matter any more than it matters to the HTML renderer in your browser whether you type a two word declaration or a 500 word soliloquy into your Facebook status box; the client-side takes care of all the tricky bits of displaying your text and converting it into POST data and sending the data to be processed on the server-side.
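
Here’s a slightly less pseudo version of the same loop, as a sketch; readSensor and sendCommand are placeholders for whatever the real transport turns out to be, and the values are all normalized to 0..1:

// Cloud-side logic only ever sees normalized 0..1 values; the autonomic layer
// on the robot maps them onto whatever its hardware actually uses.
async function nudgeUntilContact(
  readSensor: (address: string) => Promise<number>,
  sendCommand: (address: string, value: number) => Promise<void>
): Promise<void> {
  while ((await readSensor("/robot1/pressureSensor")) < 0.2) {
    await sendCommand("/robot1/leftTread/rotate", 0.1); // rotate a little, then re-check the sensor
  }
}
// Wire readSensor and sendCommand up to whatever network transport you end up using.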

If every actuator/sensor became separately addressable, with all of the coordination between them being handled by higher-level computing devices, the whole notion of the “robot” ceases to exist. It’s just components, some of which are physically joined, some of which are not, connected by routers. A camera on a pole could provide data that tells forklift robots how to coordinate their movement; a light sensor could tell all of the automated blinds on the east side of your house to roll themselves down when the sun rises…while also telling your coffeemaker to power on and start brewing your coffee; if the Weather Channel’s API says it’s going to be cold, your car automatically turns on the window defroster before you get in and turn on the engine.

The whole world, in effect, becomes one giant robot, a billion different actuators driven by a billion different sensors, all linked up and connected by the cloud. Nor do the “actuators” or the sensors necessarily need to be physical; again, we’re moving away from the idea of the robot as a device that does physical work. A robot that bangs a drum whenever you send a Tweet to a specific account is still a robot, right?

In fact, a roboticist of 2033 might think of a “robot” as a “set of behaviors that drive physical devices”, rather than as the physical devices themselves. One can even imagine different robotic “social networks”, where you can hook your devices up to a specific cloud that suits your tastes and needs. The military would have their cloud, businesses would have intranet clouds to control their industrial robots; you might connect your “hardware cloud” of sensors and actuators up to a “software cloud” that learns behaviors from your friends and family.

It’s difficult to fully imagine this scenario, of course. And what I’m describing here isn’t easy. It requires a complete rethinking of how we design and envision robots — from monolithic to modular. But this transition is something that every aspect of technology seems to go through at some point, from industrialization to communications to computation, and even, as we’ve seen, music technology. I believe it’s a necessary paradigm shift.

What we’re doing is nothing less than making the world around us come to life, to act and react according to the information we create and share. In order to truly make that happen, we need to teach our devices to share as well.


On the death of netbooks.

This week, Acer and ASUS announced they’d be halting production of their netbook lines…which means, because they were the last major producers of netbooks, that the form factor is all but dead.

Ironically, I expensed a cheap netbook the day before yesterday — a Gateway, which actually means an Acer, because Acer owns Gateway now. I bought it because I need to be able to test NSFWCORP HTML on Internet Explorer 9…and even with my relatively new and fast MacBook Pro, running Windows 7 or 8 in VirtualBox slows everything down so badly I can’t even switch over to edit code easily.

Hence the netbook, which is small enough to throw in my bag along with my MBP so that I can test code on the go. But I also installed Ubuntu on it, because I wanted the ability to actually use it, and I refuse to use Windows if I can help it.

My primary go-to device these days is my iPad, with a Bluetooth keyboard attached. There are days when I don’t even ever open my MacBook, because I can do 90% of what I need to do with the iPad. I’ve got Textastic installed, which allows me to do basic code editing on the fly, and iSSH, which lets me login to my server and do basic stuff.

But the keyword here is basic. Almost every tablet out there — not to mention smartphone — is designed around the act of consumption, not production. It’s very easy to surf the Web, do social networking, watch movies, etc. with your iPad…but it’s annoyingly difficult to do anything involving text editing with it. It’s actually easier for me to record and write music with my iPad than it is for me to write rich-formatted text. Apple has crippled their devices in ways that make it hard to get shit done with them.

And it is crippling, make no mistake. It’s not technically complicated to allow Pages to use standard keyboard shortcuts to italicize text, for example…but Apple has chosen not to do this. Instead, you have to reach up, tap the screen, drag to select the word or phrase you want to italicize, then tap the pop-up context menu twice to get to the I button. This is profoundly irritating if you’re writing very large chunks of text.

Textastic does a fantastic job of accommodating coders with the limitations of iOS, but it’s still really irritating to try and do anything serious with it, because of Apple’s arbitrary blocks on file system access and modifying the default keyboard behavior.

Also: a mouse. Touch is great for on-the-go, but if you’ve ever tried to use a touchscreen simultaneously with a keyboard, you know what I mean. It’s dreadful and slow and clunky.

Which is where the netbook comes in. I really love the netbook form factor. It’s compact, but it doesn’t sacrifice physical usability for slickness or “ease of use” (and oh, the irony there). You’ve got your keyboard, your mouse, and a real, full-fledged operating system.

The only real problem with netbooks, for me, is their terrible lack of power. Using the mainstream flavor of Ubuntu 12.04 LTS on the Gateway is maddening. It takes thirty seconds for anything to open. I’m going to wipe it off and replace it with one of the lighter-weight Ubuntu variants (like Lubuntu or Xubuntu) to see if that improves things, because as it stands it’s nearly unusable.

The conventional wisdom is that the netbook was killed by two suspects: tablets and the MacBook Air. I suspect that’s true, because they split the netbook’s market into two factions: people who wanted a cheap portable computing device that was larger than a smartphone, and people who wanted a small, light, full-powered laptop.

But it also leaves a gap: people who want a cheap, small, full-powered laptop. The MacBook Air is still about a thousand dollars, which is outside most people’s range for buying an inexpensive portable device. The tablets are cheap, but can’t do what a laptop does. Netbooks were a nice compromise.

I also see their demise as a worrying step on the tabletization of desktops and laptops. I have a terrible suspicion — based on OS X Mountain Lion’s added features — that Apple is trying to merge MacOS and iOS into one unified operating system, which would be absolutely horrifying. I don’t need or want and won’t accept a goddamn cell phone OS on my computer. I want MacOS’s power and flexibility. Unlike Linux, it’s usable out of the box with the major apps I need. Unlike Windows, it’s secure and stable and has UNIX underpinnings. Unlike both, it’s gorgeous and easy to use.

I hoped for a long time that Apple would launch a lower-end OS X netbook (perhaps as a reboot of the old iBook line), or at least extend the capabilities of iOS on the iPad to include more advanced features (without requiring jailbreaking). But I suppose Acer and ASUS’s announcements mean I’ll never see that. It’s a shame.

 


KNPR appearance — New Year’s resolutions

I was on KNPR, the local public radio station, yesterday, talking about New Year’s resolutions, along with pastor Robert Fowler and UNLV anthropology professor Alyssa Crittenden. Check it out!


Of all the creatures on Earth…

…humans are the only ones with the capacity to lie; or rather, to invent things which are not true. This is a fundamental property of consciousness.


Wishful Beginnings.

I started Zenarchery a very, very long time ago: the first version of the site went up, I believe, in 1998. Back then I had no idea about “weblog” software, so I wrote my own, which allowed me to make very simple HTML posts.

Over the last few years, the site has stagnated, as I’ve turned to other outlets for my ranting (mainly Twitter). But it occurs to me that it might be interesting to someone out there for me to resume cataloging my daily ideas, interests and links.

So here’s my New Year’s resolution: to try and write something here at least once a day, even if it’s only a single thought or link; specifically, something I haven’t posted anywhere else.

Here goes: I’m currently deeply fascinated by the idea of micropower: systems for self-generating electrical energy (usually in small amounts). These systems could be solar panels, windmills, or kinetic generators, like hand-cranked dynamos.

My current interest began with this:

This light can be charged by pulling down on a weighted cord, which I assume turns a dynamo or something similar to power the LED within. One pull equals a half-hour of light. Other devices, like cellphones, can also be powered from this gravity light. (I’m interested in what they’re using to store the power — a normal deep-cycle battery will lose its ability to hold a charge if it’s partially charged and discharged very often. My guess is a shallow-cycle starting battery, like a car uses, or some sort of capacitor-based system. But that’s a guess.)
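Out of curiosity, I ran the numbers on that “one pull equals a half-hour of light” claim. Here’s a quick back-of-envelope sketch in Python; the weight, drop height, LED draw, and efficiency figures are all my own guesses, not anything from the actual product, but they show the claim is at least physically plausible.

# Back-of-envelope check: can one pull of a weighted cord run a small LED
# for roughly half an hour? All of the numbers below are my assumptions.

G = 9.81             # gravitational acceleration, m/s^2

mass_kg = 12.0       # assumed weight hanging on the cord
drop_m = 1.8         # assumed distance the weight descends per pull
efficiency = 0.8     # assumed dynamo/drivetrain efficiency
led_watts = 0.1      # assumed draw of a small, efficient LED (~30 mA at 3.3 V)

energy_j = mass_kg * G * drop_m * efficiency    # usable energy per pull, joules
runtime_min = energy_j / led_watts / 60.0       # minutes of light at that draw

print(f"Usable energy per pull: {energy_j:.0f} J")
print(f"Runtime at {led_watts * 1000:.0f} mW: {runtime_min:.0f} minutes")

With those guesses it works out to roughly 170 joules and about 28 minutes of light per pull, which is close enough to the half-hour figure to pass the sniff test.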

I’ve been looking at ways to modify car alternators to be human-powered as well; more to come on that soon.


Posted in Short Cuts | Leave a comment

How to pack

Tomorrow, I’m flying to Chicago to pick up a van that Rosalie’s aunt is very kindly giving us (we haven’t had a car in a while). I’ll be driving the van back, which is a 2,100-mile drive, at least on the route I’m taking. That route runs from freezing, possibly snowy Chicago through the Ozarks, across the north Texas prairie where I grew up, and over the mountains and deserts of New Mexico and Arizona. So I need to pack for versatility.

I’ve traveled quite a lot in my life — less so in the past few years, but a lot more than your average American. Consequently, I’ve developed specific algorithms for what to pack.

My first rule is: black t-shirts. I always pack at least 1.25x as many black t-shirts as the number of days I’ll be gone. Black t-shirts are incredibly versatile — you can wear them under a sportcoat in a pinch, they don’t advertise stains, and you can roll them up to save space. In this case, I don’t know exactly how long it’ll take me to drive home, so I’m taking six t-shirts, just in case.
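And since I just called these “algorithms,” here’s the t-shirt rule as an actual (toy) bit of Python. The 1.25x multiplier comes straight from the rule above; rounding up to a whole shirt is my own assumption about how to handle fractions.

import math

def black_tshirt_count(days_gone: int, multiplier: float = 1.25) -> int:
    """Pack at least 1.25x as many black t-shirts as days away, rounded up."""
    return math.ceil(days_gone * multiplier)

# A four-day drive home works out to five shirts; a five-day drive to seven.
print(black_tshirt_count(4))  # 5
print(black_tshirt_count(5))  # 7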

Always take at least one or two collared, button-up shirts, just in case. In case of what? Exactly.

Pants: nice jeans. One pair per day, generally, but in this case I’m driving solo, and frankly I’m only taking three or four pairs this trip. A pair of light shorts for hanging out in hotel rooms, in case you need to step out to get something from the vending machine. And they double as swim trunks, if you need ’em.

Shoes: I only travel wearing Doc Martens. Not steel toes, if you’re traveling in cold places — trust my bitter experience on that. However, I also take a pair of cheap flip-flops, which I wear to the airport to speed up security. Also good for hotel rooms.

Socks: always carry lots of socks. For this trip, I have three pairs of heavy socks and four pairs of light athletic ones.

Toiletries: I just shaved my head, so I don’t need shampoo and conditioner. I always carry my old-fashioned double-edged razor, shaving cream, toothbrush, floss and toothpaste. If you’re staying in hotels or motels, they’ll have soap.

Weapons: This may not apply to you, but I’m driving by myself 3/5ths of the way across the country, and I may sleep in rest stops. For me, it’s my giant Gerber pigsticker and my little Gerber pocket knife. Stowed in checked baggage, of course.

Electronics: I have a Keen shoulder bag that stores most everything I need: laptop, adapters, a tiny MIDI keyboard in case I get inspired to write music. I also keep a powered USB hub that can charge my phone, ClearSpot (for 4G wireless, where I can get it) and my iPad.

I’m still debating whether I need to take my actual laptop this trip. If I wasn’t driving, I would probably only take the iPad and my phone, but it’s not like the laptop takes up much more room.

Miscellaneous: When I get to Chicago, I’m stopping at an army/navy surplus store and getting the following:

  • A sleeping bag
  • Chemical hand warmers
  • A couple of MREs (Meals Ready to Eat)

I’m also bringing a paracord bracelet, a pair of touchscreen-friendly gloves, and a warm hat. Sound like overkill? Maybe…but if anything happens, I’m prepared.

Even if I get lost and don’t have a cell signal, my iPad has a compass and a GPS locator built in, so I can generally find my way. Despite my reputation as a firm urbanite, I spent my early years out in the country, and I know the basics of surviving and finding my way in the wilderness. Not that I plan to be in the wilderness, but….

For longer trips, I follow my old buddy Abe Burmeister’s travel tips. Abe spent a few years as a nomad in the last decade, and he told me how he managed to live out of his Boblbee backpack for months at a time.

Basically, he only traveled with a few items: a couple of pairs of nice pants, a couple of nice shirts, his laptop, and his cell phone. He didn’t stay continually on the move — he’d be in one town, more or less, for a few weeks at a time — so when he got there, he’d buy a couple of packs of cheap t-shirts and wear them while he was in town. When he left, he’d donate them to Goodwill. This allowed him to travel with a single carry-on bag, pretty much anywhere in the world, with little difficulty. It’s a useful trick.

I’m off to pack. See you when I get home.


Posted in Short Cuts | Leave a comment

A letter to my neighbor

Irene,

1) To my knowledge, I did not receive the letter you mention. Perhaps you shoved it under my door and the scampering neighborhood hooligans took it, or the 80-year-old Swiss woman you insist is spraying poison all over the yard and running a methamphetamine laboratory in her parlor. Or the spies you seem to believe are hiding in the bushes.

Also, for future reference, shoving a note under someone’s door is no guarantee that they will find it or see it, and certainly not that they will read it.

2) As for the $160 vet bill you presented us with: I called the vet and she told me in no uncertain terms that there was nothing wrong with Erwin and that he certainly wasn’t poisoned or sick or “intoxicated”, as you put it. (I suppose he might have been drunk, but I didn’t ask the vet if she’d given him a breathalyzer test. He has never shown a predilection for alcohol, however, as he is a cat.)

She also told me that you claimed to be my cat sitter and that I’d authorized you to take him to the vet. Neither of these things is true. So you took him to the vet, incurring this bill, without my permission and for no sane reason at all. Your claim that our other neighbor, Pia, was poisoning the animals is as utterly insane and groundless as your claim that she is some sort of meth cook, and it suggests to me that you are possibly a paranoid schizophrenic.

However, we’ve paid for half of your fraudulent bill, and will pay the other half at our convenience — not because we believe we owe it to you, but because we want this matter settled. But that’s as far as it goes. Consider yourself lucky that we aren’t litigating against you. I’m sure there’s some legal prohibition against stealing your neighbor’s cat and taking him to the vet because you’re a madwoman.

3) As I’ve told you twice previously, I don’t have Erwin’s vaccination records, nor do I remember the name of the veterinarian I took him to for them. There’s no legal requirement for me to keep these records. He is a cat, not a human child.

4) I absolutely, categorically refuse to pay for your $1000 doctor’s visit. If you hadn’t felt the need to fraudulently abduct my cat and take him to the vet without my permission, he wouldn’t have scratched you. Nor was he “intoxicated”, according to the vet who saw him. You are merely demented.

Also, my wife has warned you — in writing — that you play with our cat at your own risk.

If you feel the need to take this matter to court, feel free. I have an excellent attorney and will promptly countersue you for harassment. I suspect the court will find in my favor, as I am not a paranoid lunatic who takes pictures of my neighbors because I believe they are all conspiring against me.

5) For at least two years you have felt the need to take my cat into your home without asking my permission, feed him, and generally behave in a terribly creepy fashion. To avoid a confrontation, we’ve allowed this. That ends now. I demand that you leave my cat, Erwin, alone. Your Erwin privileges are, as of this moment, revoked. Don’t feed him, don’t allow him into your house, don’t touch him. If I see you put food out for him, I will throw it away. If I see you picking him up, I will take him from you physically.

And, God help you, don’t harm him, or I will have you prosecuted for animal cruelty. You will go to jail.

6) If you want to pursue this matter, do so through your attorney. My wife and I wish no further contact with you, and if you harass us — or continue to behave as though our cat is your pet — I will, again, pursue the matter with the authorities.

I understand that you’ve been evicted from your apartment for basically behaving like a madwoman. I advise you to simply drop this matter and leave. Perhaps your lunacy intimidates others, but you don’t frighten me with your maniacal legal threats.

Signed, Joshua Ellis


Posted in Short Cuts | Leave a comment