Correlational/QR Codes for Autonomous Robots

Remember when I said I wanted to print correlational codes onto objects so that they can easily be identified and oriented by robots? Remember when you looked at me as if I were crazy? If you look at this video of the ATLAS robot from Boston Dynamics, you can see that the objects and doors have the same type of codes printed on them.

Totally vindicating.

Machine Controller Conundrum

Trying to solve a bit of a conundrum with the machine controller. Right now I have the gcode interpreter separate from the actual motion controller. As a result of the way I’ve implemented it, both have to keep track of the machine position in task space and translate between the real machine position and the offsets set in gcode. It makes me wonder whether the interpreter really needs to know the machine position at all, or whether I can have the interpreter simply convey changes in coordinates and offsets.
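
Roughly what I mean by “convey changes”: a minimal sketch, with entirely made-up names, in which the controller owns the work offset and the work-to-machine translation, so the interpreter never needs to know the machine position.

    #include <array>

    using Vec3 = std::array<double, 3>;

    class MachineController {
    public:
        // The interpreter reports that a new work offset is active (e.g. G54 changed);
        // the controller is the only one that stores it.
        void set_work_offset(const Vec3 &offset) { work_offset_ = offset; }

        // The interpreter sends a G1 target in work coordinates; the controller
        // translates it into machine coordinates itself.
        void straight_feed(const Vec3 &work_target, double feed_rate) {
            Vec3 machine_target;
            for (int i = 0; i < 3; ++i)
                machine_target[i] = work_target[i] + work_offset_[i];
            queue_move(machine_target, feed_rate);
        }

    private:
        void queue_move(const Vec3 &, double) { /* hand off to the motion planner */ }
        Vec3 work_offset_{{0.0, 0.0, 0.0}};
    };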

Trying to figure out the roles each module should play so that each one encapsulates as much as possible. We have the gcode parser, which parses input lines of g-code; the interpreter, which decides what is being asked for; and the machine controller, which ultimately carries it out.
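
A sketch of the division of labor I have in mind, interface only and with invented names, just to pin down what flows between the layers:

    #include <string>
    #include <vector>

    // Parser: a line of g-code becomes a list of letter/value words.
    struct Word { char letter; double value; };          // e.g. {'G', 1}, {'X', 12.5}, {'F', 300}
    struct ParsedBlock { std::vector<Word> words; };

    // Interpreter: words plus modal state (motion mode, units, active offsets)
    // become canonical commands the controller can execute directly.
    enum class CanonType { StraightFeed, Rapid, Dwell, SetWorkOffset };
    struct CanonicalCommand { CanonType type; double args[6]; };

    ParsedBlock parse_line(const std::string &line);                    // parser
    std::vector<CanonicalCommand> interpret(const ParsedBlock &block);  // interpreter
    void execute(const CanonicalCommand &cmd);                          // machine controller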

The biggest issue is that gcode allows you to omit axis words if they haven’t changed since the last time they were specified. Right now the machine controller expects a start coordinate and an end coordinate to do a coordinated, linearly interpolated move like G1. The way it’s coded, I can’t simply specify which coordinates have changed.
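
One way out would be for whichever module tracks the last commanded position to fill in the omitted axes before the move reaches the motion controller. A sketch, assuming optional “axis word present” fields (again, invented names):

    #include <array>
    #include <optional>

    using Vec3 = std::array<double, 3>;

    // Axis words as they appeared in the block; an absent word means "unchanged".
    struct AxisWords {
        std::optional<double> x, y, z;
    };

    // Fill in omitted axes from the last commanded target so the motion
    // controller always receives a complete end coordinate.
    Vec3 resolve_target(const AxisWords &words, const Vec3 &last_target) {
        return {
            words.x.value_or(last_target[0]),
            words.y.value_or(last_target[1]),
            words.z.value_or(last_target[2]),
        };
    }

The open question is still whether that resolve step belongs in the interpreter or in the machine controller.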

As I understand it, the interpreter should send out canonical commands for the machine controller to execute, so my intuition tells me the interpreter needs to keep track of the position somehow. That would mean the machine controller has to make callbacks to update the interpreter’s notion of the current position after homing, offset changes, length unit changes, and so on.
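
That could look like a notification hook the interpreter registers with the controller, so its idea of the position stays in sync after homing, offset changes, unit changes, and the like. A sketch with illustrative names:

    #include <array>
    #include <functional>

    using Vec3 = std::array<double, 3>;

    class MachineController {
    public:
        // The interpreter registers a callback; the controller invokes it whenever
        // the machine position changes outside of a normal programmed move.
        void on_position_change(std::function<void(const Vec3 &)> cb) {
            position_changed_ = std::move(cb);
        }

        void home_all_axes() {
            // ... run the homing cycle; pretend it resets machine zero ...
            machine_position_ = {{0.0, 0.0, 0.0}};
            if (position_changed_) position_changed_(machine_position_);
        }

    private:
        Vec3 machine_position_{{0.0, 0.0, 0.0}};
        std::function<void(const Vec3 &)> position_changed_;
    };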