Helping with Python Code

Rob Freeman, can you make it clearer what Python help you need?

@Steven Reubenstone. Didn’t see this question earlier.

Originally I wanted to modify the GitHub project in the description. Instead I managed to teach myself enough Python this last week to get a prototype working, which is distinct from that.

I can tell you as much as I have published online elsewhere.

Basically the idea is to work with adjacency information in the real world. I read adjacency information from the real world into a matrix. That much is roughly in common with the GitHub project (which is based on Jeff Hawkins’ HTM).
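To make the matrix-building step concrete, here is a toy sketch. The input here is a made-up symbol stream and "adjacency" is taken to mean immediate succession; the real input format in my project is different, but the shape of the step is the same.

```python
import numpy as np

# Hypothetical example: treat a stream of symbols as the "real world"
# and record which symbol follows which. The actual input in the
# project is different; only the matrix-building step is shown.
stream = ["a", "b", "a", "c", "b", "a"]
symbols = sorted(set(stream))
index = {s: i for i, s in enumerate(symbols)}

# Adjacency matrix: adj[i, j] counts how often symbol j follows symbol i.
adj = np.zeros((len(symbols), len(symbols)))
for prev, nxt in zip(stream, stream[1:]):
    adj[index[prev], index[nxt]] += 1

print(adj)
```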

The rest is different.

I seek to find sub-networks in that matrix which are activated by stimuli. (Jeff Hawkins disagrees with this. He thinks there is something important happening with the layered structure of the neocortex.)

There are branches of deep learning which are similar in some ways, mostly LSTMs and other recurrent neural networks. They are similar mainly in dealing with sequential information, and in generalizing mixtures of observed states across those sequences.

What I am attempting differs from deep learning mostly in that I hypothesize the sub-networks interact chaotically. So the emphasis is on generating structure, not learning it.

The hypothesis is that perception comes from patterns in the generated structure. It cannot be learned. (Ref. your other question: this is a little like quantum structure. Abstractions of structure must always be partial.)

So the meat of the algorithm is generating structure across the connections in the matrices.

As far as the Python goes, I read adjacency information into a graph, and I am converting between networkx graphs and pygraphviz graphs for input-output, and numpy matrices for the calculations.

As I say, I’ve managed to get a prototype working. My implementation is necessarily crude. Help with the graphical display of results, in particular, might be good. I believe it is possible to output a graph description “dot” file in a vector graphics format and then script interactivity for the graph in the browser. That kind of eye candy might make the output clearer. At different times I may be outputting large graphs, and I want to emphasize both the hierarchical and the segmentation structure in those graphs.
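For instance, a dot file can be written with nothing but string formatting, and graphviz then renders it to SVG (e.g. `dot -Tsvg graph.dot -o graph.svg`), which can be embedded in a page and scripted. The edge list here is invented for illustration:

```python
# Write a minimal graphviz "dot" description by hand. The edges are
# made up; the real graphs would come out of the matrix calculations.
edges = [("root", "left"), ("root", "right"), ("left", "leaf")]

lines = ["digraph G {"]
for a, b in edges:
    lines.append(f'  "{a}" -> "{b}";')
lines.append("}")
dot_text = "\n".join(lines)

# Render with: dot -Tsvg graph.dot -o graph.svg
with open("graph.dot", "w") as f:
    f.write(dot_text)
print(dot_text)
```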

Experience with testing time-series predictions may be useful at some point. That may be the initial metric of success.
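The kind of harness I have in mind is simple walk-forward testing: at each step the model sees only the history, predicts the next item, and accuracy over the held-out tail is the score. The "model" below is a trivial most-frequent-successor predictor on an invented series, just to show the harness shape:

```python
from collections import Counter, defaultdict

# Made-up series; the real data would be the structured output stream.
series = ["a", "b", "a", "b", "a", "b", "a", "b"]
split = 4  # train on the first 4 items, test walk-forward on the rest

def fit(history):
    # Count, for each symbol, which symbols have followed it.
    succ = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        succ[prev][nxt] += 1
    return succ

hits = 0
for t in range(split, len(series)):
    succ = fit(series[:t])          # refit on everything seen so far
    prev = series[t - 1]
    pred = succ[prev].most_common(1)[0][0] if succ[prev] else None
    hits += pred == series[t]

accuracy = hits / (len(series) - split)
print(accuracy)
```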

For the core algorithm I’m not sure how much help I can get. That may stay proprietary until I can get it sorted out and see what the potential is.

I have already put a lot about it on the Web. If anyone can see good ways to isolate hierarchical sub-networks from stimulus adjacency information summarized in a matrix, and do that in response to input stimuli so those input stimuli are structured, then the algorithm is easy and we can work on it together. That is all it is.
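To be clear about what "isolate sub-networks in response to stimuli" means, here is the generic flat baseline, which is emphatically not my algorithm: restrict the graph to the stimulated nodes and read off the connected pieces. The hierarchical, generative version is the part in question.

```python
import numpy as np
import networkx as nx

# NOT the algorithm itself -- just the flat baseline statement of the
# problem. Adjacency matrix and stimulus set are made up.
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]])
G = nx.from_numpy_array(A)

stimulus = {0, 1, 2}  # nodes activated by some input (hypothetical)
sub = G.subgraph(stimulus)
components = sorted(sorted(c) for c in nx.connected_components(sub))
print(components)
```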

If no-one can see how to do it, I may want to take that as a commercial advantage and productize it in some way. We’ll see how my current ideas work out.

But for input-output something more polished might help.

Also, I will want to parallelize the matrix operations eventually. Anyone who has experience putting those on GPUs may be a help.
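Since the heavy work is plain dense linear algebra, the usual route onto a GPU is a drop-in array library. With CuPy, for example, swapping `import numpy as np` for `import cupy as np` runs code like the following on the GPU largely unchanged, since CuPy mirrors most of the numpy API. Shown here with numpy itself:

```python
import numpy as np  # with CuPy installed: "import cupy as np" instead

# Representative of the kind of operation worth offloading: dense
# matrix-vector products over the adjacency data. Sizes are arbitrary.
rng = np.random.default_rng(0)
A = rng.random((64, 64))
x = rng.random(64)

y = A @ x                    # the op that would run on the GPU
norm = float(np.linalg.norm(y))
print(y.shape, norm > 0)
```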

I have better clarity now, thank you. It’s fascinating stuff, and to see the actual mathematics representing those structures is wild. This week I am going to try to find you some more Python help, I think I know where to find it.

Great Steven. As you can see from my latest status update, currently I’m spending 99% of my time messing with implementation, RAM, speed, and code details, and not enough exploring the actual new method ideas.

Which of the maths links I provided did you find most interesting?

The perceptron… Just seeing the ability to start mathematically representing neural firing is wild. What exactly is the definition of a perceptron?

You mean the whole neural network thing? Distributed representation? There’s a lot to read about that. To me the core value is related to the complexity arguments, the number of ways you can put things together, and surprising aspects of that. We are not finished being surprised by it.
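But to answer the definition question directly: the classic Rosenblatt perceptron is just a thresholded weighted sum — output 1 if the weighted inputs plus a bias clear zero, else 0. The weights below are hand-picked to implement logical AND, purely as an illustration:

```python
# Classic perceptron: fire (1) if the weighted sum of inputs plus a
# bias is positive, otherwise stay silent (0).
def perceptron(inputs, weights, bias):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# With weights [1, 1] and bias -1.5 it computes logical AND:
# it fires only when both inputs are on.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, perceptron([a, b], weights=[1, 1], bias=-1.5))
```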