I used the Darknet framework to produce the following images from Edvard Munch’s “The Scream”, similar to what’s done with Google Deep Dream. It’s a small step, but an important one. The rendering took about 30 minutes, so the next step is to get GPU acceleration working.
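For anyone curious, the core trick behind these renderings is small: hold the trained network fixed and run gradient *ascent* on the input image so that some layer’s response grows (in Darknet itself this is the `nightmare` tool, if I recall correctly). Here’s a toy pure-Python sketch of just that idea, with a single made-up linear “filter” standing in for the trained conv layers — illustrative only, not the actual implementation:

```python
import random

random.seed(1)
# Stand-ins: `w` plays the role of a trained filter in some layer,
# `x` plays the role of the (flattened) input image.
w = [random.gauss(0, 1) for _ in range(64)]
x = [random.gauss(0, 0.1) for _ in range(64)]

def layer_response(img):
    # The quantity Deep-Dream-style rendering tries to amplify:
    # how strongly the "layer" responds to the image.
    return sum(wi * xi for wi, xi in zip(w, img))

before = layer_response(x)
for _ in range(100):
    # Gradient of (w . x) with respect to x is just w,
    # so each ascent step nudges the image along w.
    x = [xi + 0.1 * wi for wi, xi in zip(w, x)]
after = layer_response(x)

print(after > before)  # prints True — the image now "excites" the filter more
```

With a real conv net the gradient comes from backpropagation instead of a closed form, and the amplified patterns are what produce the dream-like imagery.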
I have had some big life changes lately, so I should have a lot more time on my hands. I started an art project. I haven’t written a formal artist’s statement yet, but the gist is a computer AI that makes art, similar to Google Deep Dream or Deep Style. I am going for something that can’t be dismissed by anyone as a philosophical zombie or merely an artist’s tool. I have started on the technical implementation, which will use online, unsupervised learning. Its design is deliberately aimed at removing any doubt that the machine is participating in art and expressing itself. My approach reduces the programmer’s role as much as possible, and the machine takes on an independent life and existence of its own, growing just as we do. A lifelong friend said the following about me and my project, which I thought was nice:
“If basically anyone else I know had posted that, I would wonder about their medication balance [due to unrealistic grandiosity]. You posting it, however, and I wonder how long it will take.”
My primary goal is to wind up in a cringe compilation. I think a lot of great YouTubers got their start in cringe. Take Sky Williams, for instance. I was watching this cringe compilation long after Katie fell asleep waiting for me to fulfill her, and I found cuts from this original video:
Before that moment, I had no idea who this gem of a man was. His thoughts pretty much echo the thoughts and feelings of geeks and nerds everywhere. I immediately fell in love. You have to give it to the guy: within four months of posting that video, he took care of that problem.
As of this writing, the dude is pretty successful, with a subscriber base of nearly 900K. The cringe video I found Sky in is here:
There are several other YouTube gems in that video as well. I think the fact that I like and relate to a lot of the people featured in that video is a good sign.
Anyone else listen to/watch this genre?
When I mention the idea of starting an entertainment YouTube channel to some people, I can feel the silent judgement that I will be contributing to the mindless consumption plaguing our society. Entertainment is kind of a funny thing. Take things like rap music or crude, lowbrow humor: people are quick to judge them as base entertainment and dismiss the cleverness and genius behind making them. I think what makes art and entertainment “base” at times is that they’re so easy to consume. It’s not the artists, entertainers, or the content that are base. It’s how quickly and easily entertainment is consumed; it’s like people swallowing their food whole without really tasting it. Of course, I think the best way to understand something in depth is to go through the motions of making it yourself, at least in your head. Still, I get the feeling that intellectualizing every little thing in life actually makes one a bit boring and sterile. Art, entertainment, and other such forms are an expression of our humanity, and to those who regard them with disdain I say: you’re missing out on the bigger picture. At a deeper level, science and art/entertainment are intimately connected. If you want to be a truly “deep” person, you should embrace both.
Starting to think more seriously about my evolvable Furby concept. For those who don’t know, the idea is to Kickstart a robot that can evolve true, life-like AI. There would be a few hundred small robots that connect every night to a server over wifi, relaying the amount of sensory input and feedback each got from its environment. The robots receiving the least pain and the most pleasure from the environment are ranked the highest. A simple example: pain can come from strong shocks picked up by an accelerometer, and pleasure can come through touch sensors or facial recognition that measures attention. Of course, the actual sensors and criteria would be a bit more complex. The highest-ranking robots are selected for a reproduction round that applies modified genetic operators: a combination of particle swarm optimization and conventional genetic programming. The genes, which are located on the server, control the topology and other characteristics of the robot’s neural net, such as the activation function and the equation dictating Hebbian learning. The new gene pool is then downloaded by the robots and the cycle repeats. Serialized memory portions are retained between cycles, containing the weights of the original neurons that haven’t changed between cycles; this way I hope the robots can retain a bit of their identity and memory of their owner and surroundings. If a firmware update hoses a robot, or makes the owner unrecognizable or whatever, the user can press a button to either receive a different set of genes from the pool or revert the change. Of course, the reversion de-ranks that set of genes in subsequent generations. I’m betting that over time the robots would evolve into something quite life-like and enjoyable for people to interact with, which is the primary selling point to the consumer.
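To make the nightly cycle concrete, here is a minimal sketch of what the server side might look like. It uses a plain genetic algorithm as a stand-in for the PSO/genetic-programming hybrid, and every name, genome size, and number is illustrative rather than a real design:

```python
import random

random.seed(0)
GENOME_LEN = 8  # e.g. net topology, activation choice, Hebbian-rule constants

def fitness(report):
    # Rank robots by pleasure received minus pain received, as relayed
    # in each robot's nightly sensor report.
    return report["pleasure"] - report["pain"]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.2):
    return [g + random.gauss(0, 0.5) if random.random() < rate else g
            for g in genome]

def nightly_cycle(genomes, reports):
    # One server-side generation: rank, keep the top half unchanged
    # (so their serialized memories stay compatible with their genes),
    # and breed replacements for the bottom half.
    ranked = [g for g, _ in sorted(zip(genomes, reports),
                                   key=lambda pair: fitness(pair[1]),
                                   reverse=True)]
    elite = ranked[: len(ranked) // 2]
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(len(ranked) - len(elite))]
    return elite + children  # the pool the robots download in the morning

# Toy demo: 6 robots with random genomes and random sensor reports.
genomes = [[random.gauss(0, 1) for _ in range(GENOME_LEN)] for _ in range(6)]
reports = [{"pleasure": random.random(), "pain": random.random()}
           for _ in genomes]
next_gen = nightly_cycle(genomes, reports)
print(len(next_gen))  # prints 6 — population size is preserved
```

The “revert” button from the post would slot in here as a fitness penalty applied to a de-ranked genome before the next sort.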
The business idea behind all of this, the one that sells it to an investor, would be to market a fully working model of a strong AI with ever-increasing robot capabilities and sophistication. The initial neural net can be derived by evolving the robots in a simulation to give them the ability to navigate, avoid falls, find their charger, etc. Most people I mention this to look at me like I am crazy. But it happens too often that I decide not to pursue an idea because of that, only to see someone else pull it off. I just have to find people who think it could work and devote some time to it.
Dude... Finally, someone who feels the same way as I do about “first movers” and the myth of the first-mover advantage. You don’t have to be first, just different and better.
Remember when I said I wanted to print correlational codes onto objects so that they can easily be identified and oriented by robots? Remember when you looked at me as if I were crazy? If you watch this video of the ATLAS robot from Boston Dynamics, you can see that the objects and doors have the same type of codes printed on them.
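The reason printed codes work so well for this (AprilTag and ArUco are the common open implementations of the idea) is that once a detector finds the four corners of a square marker, the object’s position and in-plane orientation fall out of simple geometry. A toy sketch, using hypothetical hand-picked corner coordinates in place of a real detector:

```python
import math

# Hypothetical detected corners of one square fiducial marker, in image
# pixels, ordered top-left, top-right, bottom-right, bottom-left.
corners = [(120.0, 80.0), (200.0, 100.0), (180.0, 180.0), (100.0, 160.0)]

def marker_pose(corners):
    # Center of the marker: mean of the four corners.
    cx = sum(x for x, _ in corners) / 4.0
    cy = sum(y for _, y in corners) / 4.0
    # In-plane rotation: direction of the top edge (top-left -> top-right).
    (x0, y0), (x1, y1) = corners[0], corners[1]
    angle = math.degrees(math.atan2(y1 - y0, x1 - x0))
    return (cx, cy), angle

center, angle = marker_pose(corners)
print(center)  # prints (150.0, 130.0)
```

A real system would also decode the marker’s ID from its bit pattern and recover full 3D pose from the camera intrinsics, but identification-plus-orientation is the whole pitch in miniature.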
Katie sent me this story, saying “Sound familiar?”