Rick Stone
Community Helper, Finance Guy, Ex-Lawyer · New York, U.S.
QUESTION

Rob,

How does your idea differ from deep learning? I just read an article in MIT Technology Review which discusses deep learning, machine learning, and AI. What is it about your idea that distinguishes it from the technology discussed in this article:

Is Artificial Intelligence Finally Coming into Its Own? technologyreview.com/s/513696/deep-…

Rob Freeman
Label resister (Project Leader) · Los Angeles, U.S.

Hi Rick, I wrote this in reply to Steven Reubenstone's question earlier:

"What I am attempting differs from deep learning mostly in that I hypothesize the sub-networks interact chaotically. So the emphasis is on generating structure, not learning it.

The hypothesis is that perception comes from patterns in the generated structure. It cannot be learned. (Ref. your other question. A little like quantum structure this. Abstractions of structure must always be partial.)"
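
To give a feel for what I mean by sub-networks interacting chaotically, here is a toy sketch in Python. It is only an illustration, not my actual system: a ring of logistic-map units with weak coupling, where the final pattern is generated by the dynamics rather than learned from data. All the numbers (N, r, eps) are arbitrary choices.

    import numpy as np

    # Toy sketch only: a ring of chaotic "sub-network" units with weak
    # diffusive coupling. The point is that the resulting pattern is
    # generated by the interaction, not fitted to any training data.
    N, r, eps, steps = 16, 3.9, 0.1, 200
    x = np.random.rand(N)                       # random initial state
    for _ in range(steps):
        f = r * x * (1 - x)                     # local chaotic (logistic) update
        x = (1 - eps) * f + 0.5 * eps * (np.roll(f, 1) + np.roll(f, -1))

    print(np.round(x, 3))                       # a structured state nobody trained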

As I said also in another reply to Steven, it relates in some ways to his own perception of a “quantum” quality to cognition: wave equations, uncertainty.

There’s some maths about this too, e.g. Gregory Chaitin’s work on indeterminacy in mathematics.

I came at it from the angle of language, where after many years of failing to find grammars it seemed to me that perhaps grammar could not be found, not completely. Then I started finding other people saying the same thing, and results in mathematics, computer science, and physics along the same lines.

Chaos theory is also somewhat along the same lines: an inability to summarize, and indeterminacy if you try.
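
A minimal illustration of that inability to summarize (a standard logistic-map demonstration, nothing specific to my idea): two trajectories starting 1e-10 apart soon disagree completely, so no finite-precision summary of the initial state predicts the orbit.

    # Two logistic-map trajectories that begin 1e-10 apart. The gap roughly
    # doubles each step, so any finite summary of the initial condition
    # loses the orbit within a few dozen iterations.
    r = 4.0
    a, b = 0.2, 0.2 + 1e-10
    for t in range(1, 61):
        a = r * a * (1 - a)
        b = r * b * (1 - b)
        if t % 10 == 0:
            print(f"step {t:2d}: |a - b| = {abs(a - b):.3e}")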

It is still quite a fresh perspective. Some people have become quite excited about it. See for instance Stephen Wolfram, who proclaims “A New Kind of Science”.

I like Gregory Chaitin, who has written quite a bit from a mathematical perspective. It struck a strong chord with me recently when he announced a basis for mathematical “creativity” in the kind of indeterminacy we are talking about.

So there are many directions you can take this. I just happen to have come at it from language, which is very close to thought and yet very concrete. That is why I think I am closer to concrete applications to cognitive structure.

Deep learning doesn’t take this into account. It uses distributed representations, which naturally capture some aspects of it, but it assumes a very traditional model of structure: learnable, reductionist. So it misses most of the power of those distributed representations.
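
To be concrete about what “distributed representation” means here, a small sketch (the vectors below are invented for illustration; a real system would learn them from data): each word is a dense vector, and relatedness is smeared across all the dimensions rather than held in one symbolic slot.

    import numpy as np

    # Invented vectors standing in for learned embeddings. "dog" is built
    # to overlap with "cat", so their similarity is spread across every
    # dimension instead of stored in a single symbolic feature.
    rng = np.random.default_rng(0)
    cat = rng.normal(size=8)
    car = rng.normal(size=8)
    dog = 0.8 * cat + 0.2 * rng.normal(size=8)   # mimic learned similarity

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    print(f"cat~dog: {cosine(cat, dog):.2f}")    # high: similarity is distributed
    print(f"cat~car: {cosine(cat, car):.2f}")    # low on average: unrelated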

Simply by turning this around, by not assuming structure is learnable but allowing that it might grow and be more powerful than we imagined, we might access more structure and improve our models.

That could also explain some of the more mysterious aspects of cognition, like creativity (structure spontaneously becomes more complex), free will (even the creator of such a system does not know what it will do until it does it), and even the “larger than itself” mystery of consciousness.

Easy to do. But slow to filter into the mainstream, because it will require researchers to drop some old assumptions, and most of their existing tools.
