Machine Controller Conundrum

Trying to solve a bit of a conundrum with the machine controller. Right now the g-code interpreter is separate from the actual motion controller, and as a result of how I've implemented them, both have to keep track of the machine position in task space and translate between the real machine position and the offsets set in g-code. Makes me wonder whether the interpreter really needs to know the machine position, or whether I can have it simply convey changes in coordinates and offsets.

Trying to figure out what role each module should play so that the code can encapsulate as much as possible. We have the g-code parser, which parses input lines of g-code; the interpreter, which decides what it is being asked to do; and the machine controller, which should ultimately carry it out.
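The three-module split above can be sketched roughly like this. All the class and method names here are illustrative placeholders, not from any actual codebase, and the interpreter only handles the bare G1 case:

```python
# Hypothetical sketch of the parser -> interpreter -> controller pipeline.

class GcodeParser:
    """Splits a line of g-code into (letter, number) words."""
    def parse(self, line):
        words = []
        for token in line.upper().split():
            letter, value = token[0], float(token[1:])
            words.append((letter, value))
        return words

class Interpreter:
    """Decides what the parsed words are asking the machine to do."""
    def interpret(self, words):
        motion = [w for w in words if w[0] == 'G']
        axes = {letter: value for letter, value in words if letter in 'XYZ'}
        if ('G', 1.0) in motion:
            return ('LINEAR_MOVE', axes)   # a canonical command
        raise NotImplementedError(words)

class MachineController:
    """Carries the canonical command out (stubbed here)."""
    def execute(self, command):
        kind, axes = command
        print(f"{kind}: {axes}")

parser, interp, ctrl = GcodeParser(), Interpreter(), MachineController()
ctrl.execute(interp.interpret(parser.parse("G1 X10 Y20")))
```

The point of the split is that each stage's output is a plain data structure, so each module can be tested without the others.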

The biggest issue is that g-code lets you omit axis words that haven't changed since they were last specified. Right now the machine controller expects a start coordinate and an end coordinate to do a coordinated, linearly interpolated move like G1. The way it's coded, I can't simply specify which coordinates have changed.
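Whichever module ends up owning the position, the merge itself is small: the axis words present on a line get layered over the previously commanded target. A minimal sketch (function name and the axis dictionaries are my own):

```python
# Filling in omitted modal axis words from the last commanded target,
# so the controller can always be handed a complete end coordinate.

def resolve_target(last_target, axis_words):
    """Merge this line's axis words over the previous target."""
    target = dict(last_target)
    target.update(axis_words)
    return target

pos = {'X': 0.0, 'Y': 0.0, 'Z': 5.0}
pos = resolve_target(pos, {'X': 10.0})            # "G1 X10": Y and Z carry over
pos = resolve_target(pos, {'Y': 20.0, 'Z': 2.0})  # "G1 Y20 Z2": X carries over
```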

As I understand it, the interpreter should send canonical commands to the machine controller to execute, so my intuition tells me the interpreter needs to keep track of the position somehow. That would certainly mean the machine controller has to fire callbacks to update the current position after homing, offset changes, length-unit changes, and so on.
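One way that callback arrangement could look, as a sketch only: the controller owns the real machine position and notifies subscribers whenever it changes out-of-band (homing shown here; offset and unit changes would notify the same way). Everything named below is hypothetical:

```python
# The controller is the single source of truth for machine position;
# the interpreter just mirrors it via a subscription callback.

class Interpreter:
    def __init__(self):
        self.position = None   # unknown until the controller reports it

    def on_position_changed(self, new_position):
        self.position = dict(new_position)

class MachineController:
    def __init__(self):
        self.position = {'X': 0.0, 'Y': 0.0, 'Z': 0.0}
        self.listeners = []

    def subscribe(self, callback):
        self.listeners.append(callback)

    def _notify(self):
        for callback in self.listeners:
            callback(self.position)

    def home(self):
        # After a real homing cycle the machine position is re-established;
        # every listener gets told about it.
        self.position = {'X': 0.0, 'Y': 0.0, 'Z': 0.0}
        self._notify()

interp = Interpreter()
ctrl = MachineController()
ctrl.subscribe(interp.on_position_changed)
ctrl.home()   # interpreter's position now mirrors the controller's
```

This keeps the dependency one-directional: the interpreter never reaches into the controller, it only reacts to what the controller reports.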

DLT-600 Progress

I almost have this DLT-600 purring like a kitten. Granted, I had to completely replace the controller, extruder, hot end, and end effector. Fast enough for you guys though?

My Machine Controller Philosophy

My philosophy in writing and setting up a machine controller: keep everything as physically based as possible, so that each part of the system is straightforward to design and validate on its own. The input to a stepper motor driver should be a number of shaft rotations, for instance, and not a number of steps, which varies with configuration (full step/half step/microstepping, 50-pole steppers, 12-pole steppers). That way you can compare the internal variable for shaft rotations to the physical number of shaft rotations in the real world. The associated virtual encoder for the stepper motor should also read back in shaft rotations. Then have your inverse kinematics translate from Cartesian world coordinates to shaft rotations for the individual steppers. Changing a stepper then wouldn't require a change in the kinematics, and vice versa. Many open-source machine controllers take a lot of shortcuts (usually a single unit conversion), which makes the software harder to maintain, harder to adapt to a different machine configuration, and even harder to troubleshoot.
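The layering can be sketched concretely. The numbers below are example values I'm assuming (an 8 mm leadscrew pitch, a 1.8-degree stepper, 16x microstepping), not any particular machine's:

```python
# "Keep it physical" layering for one Cartesian axis:
#   Cartesian mm  ->  shaft rotations (kinematics)  ->  steps (driver detail)

LEADSCREW_PITCH_MM = 8.0   # mm of travel per shaft rotation (assumed)
FULL_STEPS_PER_REV = 200   # 1.8-degree stepper (assumed)
MICROSTEPS = 16            # driver configuration detail (assumed)

def mm_to_rotations(mm):
    """Kinematics layer: physical distance to shaft rotations."""
    return mm / LEADSCREW_PITCH_MM

def rotations_to_steps(rotations):
    """Driver layer: only here do step mode and pole count appear."""
    return round(rotations * FULL_STEPS_PER_REV * MICROSTEPS)

steps = rotations_to_steps(mm_to_rotations(40.0))
```

Swapping the stepper or the microstepping mode changes only `rotations_to_steps`; the kinematics above it is untouched, and the intermediate value (shaft rotations) is directly comparable to what you'd measure on the real shaft.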