Killer Robot Idea

Starting to think more seriously about my evolvable Furby concept. For those who don't know, the idea is to Kickstart a robot that can evolve true, life-like AI. A few hundred small robots would connect to a server over wifi every night, each relaying the amount of sensory input and feedback it got from its environment. The robots receiving the least pain and the most pleasure from the environment are ranked highest. As a simple example, pain could come from strong shocks picked up by an accelerometer, and pleasure could come from touch sensors or from facial recognition that measures attention. Of course the actual sensors and criteria would be a bit more complex.

The highest-ranking robots are selected for a reproduction round that applies modified genetic operators: a combination of particle swarm optimization and conventional genetic programming. The genes, which live on the server, control the topology and other characteristics of the robot's neural net, such as the activation function and the equation dictating Hebbian learning. The new gene pool is then downloaded by the robots and the cycle repeats. Serialized memory portions holding the weights of neurons that haven't changed between cycles are retained; this way I hope the robots can keep a bit of their identity and their memory of their owner and surroundings. If a firmware update hoses the robot, or makes the owner unrecognizable or whatever, the user can press a button to either receive a different set of genes from the pool or revert the change. Of course the reversion de-ranks that set of genes in subsequent generations.

I'm betting that over time the robots would evolve into something quite life-like and enjoyable for people to interact with, which is the primary selling point to the consumer. The pitch to an investor would be marketing a fully working model of a strong AI with ever-increasing robot capabilities and sophistication. The initial neural net can be derived by evolving the robots in simulation to give them the ability to navigate, avoid falls, find their charger, etc. Most people I mention this to look at me like I'm crazy. But it happens too often that I decide not to pursue an idea because of that, only to see someone else pull it off. Just gotta find people who think it could work and devote some time to it.
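
To make the nightly cycle a bit more concrete, here is a rough sketch of what the server-side selection step could look like. Everything in it is a placeholder I made up for illustration (the genome as a flat dict of parameters, the pain/pleasure fields, the toy crossover and mutation operators); the real criteria and the PSO/GP hybrid would be much more involved.

    import random
    from dataclasses import dataclass

    # A "genome" here is just a dict of real-valued parameters that would shape
    # the robot's neural net (topology knobs, activation choice, Hebbian learning rate, ...).
    Genome = dict

    @dataclass
    class RobotReport:
        robot_id: str
        genome_id: int   # index into the server's gene pool
        pain: float      # e.g. accumulated accelerometer shock events
        pleasure: float  # e.g. touch-sensor activity and face-attention time

    def fitness(report: RobotReport) -> float:
        # Least pain and most pleasure ranks highest.
        return report.pleasure - report.pain

    def crossover(mom: Genome, dad: Genome) -> Genome:
        return {k: random.choice([mom[k], dad[k]]) for k in mom}

    def mutate(genome: Genome, rate: float = 0.1) -> Genome:
        return {k: v + random.gauss(0.0, rate) for k, v in genome.items()}

    def nightly_generation(reports, pool, elite_fraction=0.25):
        """One cycle: rank the overnight reports, keep the top genomes,
        and refill the rest of the pool with their offspring."""
        ranked = sorted(reports, key=fitness, reverse=True)
        n_elite = max(2, int(len(ranked) * elite_fraction))
        elite = [pool[r.genome_id] for r in ranked[:n_elite]]

        new_pool = list(elite)                    # elites survive unchanged
        while len(new_pool) < len(pool):
            mom, dad = random.sample(elite, 2)
            new_pool.append(mutate(crossover(mom, dad)))
        return new_pool                           # robots download this the next morning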

Correlational/QR Codes for Autonomous Robots

Remember when I said I wanted to print correlational codes onto objects so that they can easily be identified and oriented by robots? Remember when you looked at me as if I’m crazy? If you look at this video of the ATLAS robot from Boston Dynamics, you can see the objects and doors have the same type of codes printed on them.

Totally vindicating.
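
For anyone curious what those codes actually buy a robot, here is a rough sketch using OpenCV's ArUco markers, which are one flavor of this kind of fiducial code. The marker size, the camera intrinsics, and the OpenCV 4.7+ ArucoDetector API are all assumptions on my part; the point is just that a single printed code hands the robot both the object's identity and its pose.

    import cv2
    import numpy as np

    MARKER_SIZE = 0.05  # printed marker edge length in meters (assumed)

    # Camera intrinsics from a prior calibration (placeholder values).
    camera_matrix = np.array([[600.0, 0.0, 320.0],
                              [0.0, 600.0, 240.0],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.zeros(5)

    # 3D corners of a square marker, centered on its own origin,
    # ordered the same way ArUco reports them (clockwise from top-left).
    object_points = np.array([[-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
                              [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
                              [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
                              [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0]], dtype=np.float32)

    def find_tagged_objects(frame):
        """Return {marker_id: (rvec, tvec)} for every printed code in view."""
        dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
        detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
        corners, ids, _ = detector.detectMarkers(frame)

        poses = {}
        if ids is not None:
            for marker_corners, marker_id in zip(corners, ids.flatten()):
                # solvePnP recovers the marker's rotation/translation relative to the
                # camera: the id says what the object is, the pose says how it's oriented.
                ok, rvec, tvec = cv2.solvePnP(object_points,
                                              marker_corners.reshape(4, 2).astype(np.float32),
                                              camera_matrix, dist_coeffs)
                if ok:
                    poses[int(marker_id)] = (rvec, tvec)
        return poses

Boston Dynamics is presumably doing something fancier, but the principle looks the same.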

Machine Controller Conundrum

Trying to solve a bit of a conundrum with the machine controller. Right now I have the g-code interpreter separate from the actual motion controller. The way I've implemented it, both have to keep track of the machine position in task space and translate between the real machine position and the offsets set in g-code. Makes me wonder if the interpreter really needs to know the machine position, or if it could simply convey changes in coordinates and offsets.

Trying to figure out what role each module should play so the code can encapsulate as much as possible. We have the g-code parser, which parses input lines of g-code; the interpreter, which decides what is being asked for; and the machine controller, which ultimately carries it out.
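
As a sketch of that split (all the names here are just my working placeholders), the parser's only job would be to turn a line of text into words and values, without knowing anything about machine state:

    import re

    WORD_PATTERN = re.compile(r"([A-Za-z])\s*([-+]?\d*\.?\d+)")

    def parse_line(line: str) -> dict:
        """Turn one line of g-code into a {letter: value} mapping.
        The parser just tokenizes; interpretation happens downstream."""
        line = line.split(";")[0]  # strip a trailing comment
        return {letter.upper(): float(value)
                for letter, value in WORD_PATTERN.findall(line)}

    # parse_line("G1 X10.5 Y-2 F600 ; move over")
    # -> {'G': 1.0, 'X': 10.5, 'Y': -2.0, 'F': 600.0}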

The biggest issue is that g-code allows you to omit axis words if they haven't changed since the last time they were specified. Right now the machine controller expects a start coordinate and an end coordinate to do a coordinated, linearly interpolated move like G1. The way it's coded, I can't simply specify which coordinates have changed.
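
One way I could square that: let the interpreter keep the last commanded target per axis and fill in any omitted axis words before handing the controller a complete start/end pair. A rough sketch, with made-up names, a hard-coded XYZ axis set, and absolute positioning (G90) assumed for brevity:

    class Interpreter:
        AXES = ("X", "Y", "Z")

        def __init__(self, controller):
            self.controller = controller
            self.position = {axis: 0.0 for axis in self.AXES}  # last commanded target

        def handle_g1(self, words: dict):
            """Merge whatever axis words were present into a full target,
            so the controller still gets a complete start and end coordinate."""
            start = dict(self.position)
            target = {axis: words.get(axis, start[axis]) for axis in self.AXES}
            self.controller.linear_move(start, target, feed=words.get("F"))
            self.position = target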

As I understand it, the interpreter should send canonical commands to the machine controller to execute, so my intuition says the interpreter needs to keep track of the position somehow. That would mean the machine controller has to make callbacks to update the interpreter's idea of the current position after homing, offset changes, length-unit changes, etc.
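
Something like this is what I have in mind for the callback side; again, every name here is a placeholder for the sake of the sketch:

    class MachineController:
        def __init__(self):
            self.position = {"X": 0.0, "Y": 0.0, "Z": 0.0}  # the real machine position
            self.position_listeners = []  # e.g. the interpreter registers itself here

        def on_position_change(self, callback):
            self.position_listeners.append(callback)

        def _notify(self):
            for callback in self.position_listeners:
                callback(dict(self.position))

        def linear_move(self, start, target, feed=None):
            # ... queue the coordinated move for the motion planner ...
            self.position = dict(target)
            self._notify()

        def home(self):
            # ... run the homing cycle; afterwards the position is redefined ...
            self.position = {axis: 0.0 for axis in self.position}
            self._notify()  # homing, offset changes, and unit changes all push updates

The interpreter from the earlier sketch could register a small wrapper around its own position dict here, so its modal state stays in sync without it ever owning the real machine position.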

A Parent’s Perspective on Sentient Machines

Being a parent gives you a whole new perspective on AI and robots. The guy uses the term "motor babbling": the machine eventually learns to walk on its own much the same way babies learn to speak and walk. It's kinda cool being part of the generation witnessing the closing gap between humans and machines. I think this concept is unpalatable to people (and even a few friends) because of the human ego. We used to fancy ourselves the center of the universe and put up a lot of resistance to the idea that we orbit the sun, which is just one of many stars out there. The next big shock will be the realization that machines can be truly sentient and not just philosophical zombies.

Mad Max Powerwheels Build

Lexi wants a new Powerwheels car “available” only at Think Geek.

I contacted them trying to convince them to help me out with a few details of the build. Unfortunately, they didn't retain any info to share. In case you were wondering, Lexi runs Barter Town, so I have begun work and will be starting a build log of my own.

DLT-600 Progress

I almost have this DLT-600 purring like a kitten. Granted, I had to completely replace the controller, extruder, hot end, and end-effector. Fast enough for you guys though?