A framework for thinking about intelligence

For the serious discussion of weighty matters and worldly issues. No off-topic posts allowed.

Moderators: Azrael, Moderators General, Prelates

User avatar
TrlstanC
Flexo
Posts: 373
Joined: Thu Oct 08, 2009 5:16 pm UTC

A framework for thinking about intelligence

Postby TrlstanC » Thu Sep 08, 2011 10:27 pm UTC

Edit: I went through and cleaned up a lot of little typos and clarified a few points. Hopefully it'll be easier to see how the "Find, Store, Recognize, Predict" steps can be useful for thinking about the processes that create intelligence.

------------------------------------------

I'm taking the Introduction to Artificial Intelligence class being offered online to the public by Stanford (it looks really interesting, as a class and an experiment, and you can still sign up for free). I know very little about AI, except what I've picked up from documentaries and sci-fi, so it should be an interesting and challenging class.

In anticipation I've been thinking about how to characterize different kinds of intelligence, and wrote up some thoughts on a basic framework (I know there have been other threads on defining intelligence, but I'm hoping this will be more "nuts and bolts"). It'd be great to think through some examples, see if I've made any obvious errors, and hopefully use it to think about AI.

I think the key to understanding intelligence may be to think of it as a tool for understanding the world. And the most basic way of understanding the world is to find patterns in it, because if a pattern exists it's either 1. caused by something in or about the world, or 2. a random fluke. If we can pick out the true patterns from the flukes we can make predictions about the world, the things in it, and the fundamentals that govern it. If we want a basic framework that describes intelligence as the ability to process patterns like this, the capabilities required for something to be intelligent could be thought of as:

Find patterns in observations
Store the patterns for later use
Recognize when part of the stored patterns are observed again
Predict the rest of the pattern based on the observed part

Can we build up a full model of intelligence from this framework? First let's clarify some details for the different steps.

First step - finding patterns in observations would seem to involve matching up two or more portions of our observations, either from different areas or different times, to see if they match. If parts of our observations repeat or match, then there may be a pattern there.

Second step - obviously, once a potential pattern is found it has to be stored for use somehow, and the ability to store patterns is also needed for the first step (as part of comparing potential pattern segments across time or space).

Third step - once a pattern is stored we can then try to match new observations against it, to see if part of the current observations match part of one of our stored patterns. This won't always be a simple process - it will also have to include the ability to recognize two (or more) patterns in a single observation. For example, one stored pattern could be how a ball thrown up in the air reacts to a strong wind, and another could be how a ball thrown to us follows a parabola; if we then see a ball thrown to us in a strong wind we can recognize both patterns. We should also include the ability to recognize nested patterns. For example, mice like cheese and will try to eat it, and if anything touches the cheese on a set mousetrap it will snap shut. Our ability to understand the world is based on the ability to constantly recognize many concurrent patterns of varying complexity.

Fourth step - once we have recognized that part of our observations match part of a stored pattern, we can make predictions about what things are like outside of our current observations (either not observable, or before we started observing, or at a point in the future). In the previous examples we can predict where to stand to catch the ball, or predict who threw the ball, or predict that the mouse will get caught in the trap, etc. The more patterns we can find, store and recognize, the more predictions we can make about the world: what underlying forces motivate things to act like they do, what set the events we're observing in motion, or what the result of our current observations will be.
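To make these four steps a little more concrete, here's a very rough sketch of what an agent built around them could look like. Everything in it - the way patterns are represented, the way matching works - is just a placeholder assumption for illustration, not a real AI algorithm:

[code]
# A rough sketch of the "find, store, recognize, predict" steps.
# The pattern representation (tuples of observations) and the matching
# rules are placeholder assumptions, just to make the steps concrete.

class PatternAgent:
    def __init__(self):
        self.patterns = []  # step 2: stored patterns

    def find(self, observations):
        """Step 1: look for segments that repeat later in the observations."""
        found = []
        for size in range(2, len(observations) // 2 + 1):
            for i in range(len(observations) - 2 * size + 1):
                segment = observations[i:i + size]
                later = [observations[j:j + size]
                         for j in range(i + size, len(observations) - size + 1)]
                if segment in later:
                    found.append(tuple(segment))
        return found

    def store(self, patterns):
        """Step 2: keep any new patterns for later use."""
        for p in patterns:
            if p not in self.patterns:
                self.patterns.append(p)

    def recognize(self, observation):
        """Step 3: which stored patterns start with what we're seeing now?"""
        matches = []
        for p in self.patterns:
            prefix = list(p[:len(observation)])
            if prefix == list(observation[-len(prefix):]):
                matches.append(p)
        return matches

    def predict(self, observation):
        """Step 4: fill in the rest of the longest matching pattern."""
        best = []
        for p in self.recognize(observation):
            rest = list(p[len(observation):])
            if len(rest) > len(best):
                best = rest
        return best


agent = PatternAgent()
agent.store(agent.find([1, 2, 3, 1, 2, 3]))  # experience a repeating sequence
print(agent.predict([1, 2]))                 # -> [3]
[/code]

Obviously a real version of the "Find" step would be enormously harder than this; the point is just how the four steps hand off to one another.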

Thinking about intelligence using this framework leads to some interesting conclusions:

1. If something is missing one of these capabilities it could appear to be intelligent, but ultimately we would realize it isn't. For example, a single-celled organism can have some complicated reactions to its environment, but if all of that is pre-programmed, the result of evolution, then it can't react to new environments or new changes in its environment, and it's not displaying intelligence. The same is true of programmed robots: they rely on our intelligence, because we're doing some (or all) of the steps for them.
2. If something has a limit on one of these capabilities, we may consider it intelligent, but only in a limited way. Consider the hypothetical example of a robot that learns to retrieve balls: its ability to "Find" patterns is limited to only balls, but if it can correctly learn to retrieve new and different kinds of balls – beach balls, tennis balls, etc. – then we may consider it to have a limited kind of intelligence, because it's carrying out all of the steps, even if the "Find" step is extremely limited.
3. How intelligent something is depends not only on its ability to carry out each of these steps, but also on how much time it's had to observe the world, and what kinds of observations it has access to. A super-intelligent machine that's only observed strings of random numbers wouldn't be able to find any true patterns, so it wouldn't be able to exercise its intelligent ability. And a gifted child may not be able to make as many or as accurate predictions as someone who learns (finds and stores patterns) more slowly but has the benefit of much more experience.
4. The speed with which something can perform each step is important: more patterns can be found, and more predictions made, in the same amount of time. This is especially true for the predict step. If you can't predict where the ball is going to land in time to move there, the prediction isn't very useful.
5. This definition of intelligence doesn't include the idea of consciousness, even though our experience is mostly with ourselves: intelligent animals that are conscious. If we define intelligence this way, consciousness is not a prerequisite for it; we could imagine a machine that carries out these steps that we wouldn't consider conscious, but would consider intelligent.
6. However, we can also propose interesting ideas about the purpose consciousness serves for an intelligent being. For example, we can think of consciousness as the process of constantly attempting to match many (or all) of our stored patterns against our current observations, in parallel. The patterns that match, or have the best potential for being true in our observations, would create our conscious experience of the world. For example, our visual experience of the world could be created from our stored patterns of visual information - how different kinds of 3D objects change as viewed from different positions and times, how different surfaces react to light, etc. The patterns that we can pick out in our observations can be used to create a prediction of a 3D world. (I've got lots more thoughts about how consciousness would work using this model of intelligence, but I'm going to try to stay on topic.)
7. From my (limited) understanding of AI, the goal is usually to program a machine with sets of rules that will let it solve the problems we want it to solve. The better we can make the rules, the better it performs. But, if we look at it using this framework, we could instead think about creating a machine that doesn't have any built-in rules. If it has the ability to complete these steps (find, store, recognize and predict) in the simplest and most general terms, then the only thing it would need to be intelligent (at least in some limited sense) would be experience. Although we should expect it to take a lot of experience: a human brain has many times more processing power than any machine we've built, and it still takes somewhere between 2 and 20 years for it to be able to learn intelligent tasks. If we assume we're working with machines that aren't as powerful, and have hardware that's probably optimized more for carrying out calculations than pattern recognition, then it could take years of observations for a machine to reach even very basic levels of intelligence.

User avatar
TrlstanC
Flexo
Posts: 373
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: A framework for thinking about intelligence

Postby TrlstanC » Mon Sep 12, 2011 1:34 pm UTC

After having given this framework some more thought, here are a few ideas about possible ways to apply it – I wouldn't even call these "hypotheses", more like "wild speculation." But they should be useful as examples of the type of process I'm thinking about.

  • What kind of brain structure would allow for this kind of intelligence? Patterns of electrical pulses entering the brain and traveling through a network of neurons reinforce the connections that they travel through, so that when the same pattern enters that part of the brain again, some of the neurons that are part of the pattern fire in anticipation. This would represent the store, recognize and predict steps. The assumption would then be that the initial state of the neural network was set up to find patterns. This also assumes that multiple patterns can be "overlaid" on top of one another: the same neuron can be part of storing, recognizing and predicting many patterns.
  • Conscious vs. unconscious behaviors – There could be two ways that patterns are stored in the brain, an immediate way and a long-term way, such that the immediate way is good at reacting quickly to new patterns and the long-term way is more efficient. The immediate way would represent learning that we're consciously aware of, while the long-term way would be patterns that we no longer have to think about consciously - for example, learning to walk, type, do arithmetic, play music, etc. We could think of this as synapses changing their relative connectivity for the immediate way, and neurons growing new dendrites for the long-term way (just wild guesses), or we could think of the patterns happening in two different parts of the brain simultaneously, where one just takes a lot longer to start and complete but is much more efficient when it's done. The interesting thing is that one of these kinds of patterns (the immediate one) we're consciously aware of, and the other happens unconsciously - either because they happen in different parts of the brain, or, if they happen in the same part of the brain, because the more "efficient" or "long-term" connections aren't part of our conscious process. But we can still activate the immediate patterns in some conditions - like if you're thinking about your golf swing too much, these patterns will take over, and they're also not as efficient: your swing isn't going to be as smooth if you're thinking about it consciously.
  • Extrovert vs. Introvert – This is undoubtedly a scale, everyone falls on it somewhere, with most people in the middle, but here's some (wild) speculation about how the ends of the scale could be differentiated. An extrovert creates models (collections of patterns) of other people that are all independent; they don't rely on any (or many) shared elements, but because of that each model has to be relatively simple. When they interact with other people they can, and have to, update each model independently, so there's a benefit in interacting with a lot of people. This also creates a more complicated model of social structures. An introvert, on the other hand, creates a single representative pattern that stands for a generic person, but is very complex, and then modifies it for each individual. So, an introvert is thinking "this is what a personality is like" and then "Tom's personality is like that, but more humorous." This means that it's a lot of work to update the model when interacting with a lot of people, and the model of the social structure is very complicated as well. But it also means that the introvert can update the model by interacting with one person and then applying those changes to everyone.
  • Music – maybe we enjoy music and find it interesting because the patterns in music progress at about the same rate that patterns in our brain can be processed. Music that's a lot slower than what we can process is boring, because we're always waiting for it to catch up to our predictions. Music that's too fast means we can't make predictions fast enough to enjoy it. But if it happens at about the same speed that we naturally process patterns, then the pattern in the music is hitting us just as we're predicting it, which would be a nice constant positive reinforcement. The same would go for the complexity of music: if it's too complex we can't figure it out, and if it's not complex enough it barely activates any patterns or reactions. And this would be different for different people - some people might be better at processing very complex patterns slowly, while others might prefer very simple patterns that progress quickly, or patterns which change a lot, or in predictable ways.

All of these point to the fact that this "find, store, recognize, predict" framework of intelligence is all theoretical, and would really need a neurophysiological backbone to rest on. That would also let us make predictions like the ones above, but more concrete and testable, i.e. interesting.

User avatar
TrlstanC
Flexo
Posts: 373
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: A framework for thinking about intelligence

Postby TrlstanC » Thu Sep 22, 2011 12:15 am UTC

Just saw this [url=http://www.wired.com/wiredscience/2011/09/dyslexic-advantage/]Q&A about dyslexia[/url] with an interesting summary of some neurobiology research by Manuel F. Casanova.

the most interesting data comes from Dr. Manuel Casanova, from the University of Louisville, Kentucky. He has analyzed the brains of thousands of individuals. He’s found that, in the general population, there is a bell-shaped distribution regarding the spacing of the functional processing units in the brain called minicolumns. These bundles of neurons function together as a unit. Some people have tightly packed minicolumns, for others they are spaced widely apart.

This is significant because when the minicolumns are tightly packed, there is very little space between them to send projecting axons to make connections to form larger scale circuits. Instead the connections link many nearby minicolumns which have very similar functions. As a result, you get circuits that process very rapidly and perform very specialized fine-detail functions, like discriminating slight differences between similar cues. But people with this kind of brain tend not to make connections between distant areas of the brain that tend to support higher functions like context, analogy, and significance.

Among individuals with the most tightly packed minicolumns, Dr. Casanova found many who were diagnosed with autism. In contrast, people with broadly spaced minicolumns, at the other end of the scale, tend to create more connections between functionally more diverse parts of the brain, which can help to support very life-like memories of past events, and more complex mental simulations and comparisons. It’s at this end of the spectrum that Casanova tended to find people with dyslexia.


It's amazing that such significant changes in the structures of the brain are not only still viable, but have a fairly straightforward impact on intelligence - closely spaced minicolumns give specialized circuits and fine details, while widely spaced minicolumns give complex mental simulations and comparisons. I would've thought that the impact on intelligence would be a lot more complicated than that.

User avatar
Jplus
Posts: 1721
Joined: Wed Apr 21, 2010 12:29 pm UTC
Location: Netherlands

Re: A framework for thinking about intelligence

Postby Jplus » Fri Sep 23, 2011 4:07 pm UTC

Before I say anything related to the subject, I should point out that I scanned rather quickly through all of your text above. So I might be missing some of your points.

First, let me point out that I totally agree with you on the 'nuts and bolts perspective'. I think it's totally worth pursuing. Also, you are trying to build your own framework even though you appear to know very little about the existing literature on the subject -- that's fine! It proves that you are interested enough and creative enough to come up with your own solutions. You'll sometimes run into scientists who invented your ideas twenty years ago, and you might discover that somebody made some smart point which is clearly not compatible with your own, but that doesn't matter since you can always adjust your model.

Now, for your 'patterns' framework. You mention several steps that are involved in doing computations (or 'reasoning', if you want) on patterns from the outside world. What I'm missing in your model is why the intelligent being, or the agent as we call it in AI, would bother to compute these patterns in the first place. In other words, I'm missing some form of motivation. Suppose I'm riding my bicycle and I see a red traffic light; I can then compute that I should use my brakes, or I can decide to perform some computations on color theory. Both are in a sense intelligent (and based on pattern recognition), but the former is arguably a lot more rational.

As soon as we come to the motivation, we notice that the agent is not just spending its time reasoning about the world; it's interacting with it. This brings us to another question that your framework doesn't address yet: what does the agent need to do in order to be successful, given its motivation? You'll see that it depends a lot on the task and the environment; while in chess I can take my time to make a good decision, in fencing I'll have to react and move fast, and this affects the way of reasoning.

Your steps in processing patterns (find, store, recognize, predict) assume, obviously, that the agent's 'mind' makes use of patterns. In the literature such an agent is called a model-based agent (because it maintains a 'model of the world'). You might be surprised to hear that an agent doesn't necessarily need to do such a thing. An agent that just works by some fixed set of reflexes might be very successful in some cases.
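To make that contrast concrete, here's a tiny sketch; the traffic-light scenario and the rules in it are invented for illustration, and real agents are of course far more involved:

[code]
# A toy contrast between a simple reflex agent and a model-based agent.
# The "traffic light" world and its rules are invented for illustration.

def reflex_agent(percept):
    """Acts only on the current percept: a fixed condition-action rule."""
    return "brake" if percept == "red" else "pedal"

class ModelBasedAgent:
    """Keeps an internal model (here just the last seen light) and falls
    back on it when the current percept is ambiguous."""
    def __init__(self):
        self.last_light = None  # the agent's 'model of the world'

    def act(self, percept):
        if percept in ("red", "green"):
            self.last_light = percept
        if percept == "hidden":  # light blocked by a truck, say
            return "brake" if self.last_light == "red" else "pedal"
        return "brake" if percept == "red" else "pedal"

agent = ModelBasedAgent()
for p in ["green", "red", "hidden"]:
    # A pure reflex agent would just pedal on "hidden", since its rule only
    # looks at the current percept; the model-based agent remembers the red.
    print(p, "->", agent.act(p), "(reflex agent:", reflex_agent(p) + ")")
[/code]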

Now for the slightly more technical side of the nuts and the bolts, let's ignore the motivation and the environment for a moment and focus on your four pattern-processing steps. The first step (finding patterns) might actually be the hardest; there are techniques that are known to do the job in some cases (such as neural networks), but there is no general theory on what are the necessary and sufficient ingredients yet.
Storing a pattern isn't completely trivial either, but at least there are fairly general fields of study which relate to the problem, such as information theory and semiotics. In practice the storage of a pattern doesn't pose a big challenge in most applications.
Once you've covered finding and storing patterns, recognizing them becomes fairly trivial, because once you store a pattern you automatically have a template for recognizing it (think about that for a moment). You see this clearly in neural networks: there is actually no difference between a pattern being stored in a network and the network being able to recognize the pattern.
Finally, you could maintain that computability theory has solved the question of how to predict what will happen after you recognize a pattern. Either your pattern is a total recursive function, in which case any Turing-complete machine will be able to do the computation, or it is not, but that would be unlikely, because then the question arises of how you were able to find the pattern in the first place.
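A Hopfield-style associative memory is probably the clearest illustration of the point that storing a pattern is the same thing as being able to recognize it. Here's a minimal sketch (the patterns are arbitrary, and all the usual caveats about capacity are ignored):

[code]
import numpy as np

# Minimal Hopfield-style associative memory: storing a pattern by Hebbian
# learning *is* the template for recognizing (and completing) it later.

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])   # arbitrary +/-1 patterns

n = patterns.shape[1]
W = np.zeros((n, n))
for p in patterns:                 # Hebbian storage: strengthen co-active pairs
    W += np.outer(p, p)
np.fill_diagonal(W, 0)

def recall(probe, steps=5):
    """Recognition/prediction: a noisy or partial probe settles onto the
    nearest stored pattern."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1              # break ties arbitrarily
    return s.astype(int)

noisy = np.array([1, -1, 1, -1, 1, 1])   # first pattern with one bit flipped
print(recall(noisy))                     # -> [ 1 -1  1 -1  1 -1]
[/code]

The same weight matrix is doing the storing, the recognizing and the completing; there's no separate 'lookup' step.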

I hope my post seemed relevant enough to you. Some of my remarks might have been made during your course as well, in the meantime. You may also want to look up some of the terms that I used in this post on Wikipedia.
"There are only two hard problems in computer science: cache coherence, naming things, and off-by-one errors." (Phil Karlton and Leon Bambrick)

coding and xkcd combined

(Julian/Julian's)

User avatar
TrlstanC
Flexo
Posts: 373
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: A framework for thinking about intelligence

Postby TrlstanC » Sat Sep 24, 2011 2:53 pm UTC

Jplus wrote:What I'm missing in your model, is why the intelligent being, or the agent as we call it in AI, would bother to compute these patterns anyway. In other words, I'm missing some form of motivation.

Agreed, this model doesn't include any idea of what to do with the patterns. For example, if someone is tossing me an orange, I want to move to get in the way so I can catch it. But if someone is throwing an orange rock at me, I want to move out of the way so it doesn't hit me. I could imagine a machine that's intelligent (by this definition) that just spits out predictions, but I think something like that would have to be built; it couldn't evolve on its own. But if we take intelligence as one tool, or one process, I can imagine two ways it could be used.

1. Hardwired. For example, a simple organism that just hunts for sugars. It's got some kind of chemical receptors, some way to move around in the world, and maybe a couple of other senses. It can start by moving around in the world randomly. It finds patterns as it moves around, and if those patterns have a positive correlation with also finding sugars, then those patterns get used. Or, conversely, patterns which are negatively correlated with finding sugars get avoided. I think this would be an example of something which is intelligent, but not conscious; it doesn't create a model of itself or the world. (There's a rough sketch of what I mean below, after the second point.)

2. Guidelines. Here is where I think consciousness could be another tool which is used with intelligence to make decisions. Think of pain and pleasure as "guidelines" - they contain information about what sorts of things are good, and what aren't. The difference between the "hardwired" rule in the previous example and a guideline here is that pain and pleasure are part of a model the agent creates of the world. There are also other qualia the agent could use (color, sounds, etc.), but pain/pleasure are different because they contain additional information, i.e. "these kinds of things are good, these kinds of things are bad." The major advantage here would seem to be adaptability. In the first example, the hardwired agent can't do anything with a completely new pattern; if it doesn't have any experience correlating the presence of sugar with some sensory information, then the information isn't useful. However, a conscious agent can apply the pleasure it gets from sugar to new and unexpected patterns.
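Here's the rough sketch of the hardwired case from point 1. The one-dimensional world, the smell, and the tallies are all invented just to show the idea of a pattern being "used" because it correlates with finding sugar:

[code]
import random

# The "hardwired" case: an agent wanders a 1-D world and notices two candidate
# patterns after each move, "the smell got stronger" or "the smell got weaker".
# It tallies how often each pattern coincides with actually finding sugar; a
# pattern that correlates with sugar gets used (keep going), the rest get
# ignored (wander randomly). Everything here is invented for illustration.

sugar_at, world_len = 15, 31

def smell(pos):
    return 1.0 / (1 + abs(pos - sugar_at))        # falls off with distance

tallies = {"stronger": [0, 0], "weaker": [0, 0]}  # [times seen, times sugar found]

def rate(cue):
    seen, hits = tallies[cue]
    return hits / seen if seen else 0.0

def run(use_patterns, steps=3000):
    pos, move, prev, found = 0, 1, smell(0), 0
    for _ in range(steps):
        pos = max(0, min(world_len - 1, pos + move))
        cue = "stronger" if smell(pos) > prev else "weaker"
        tallies[cue][0] += 1
        tallies[cue][1] += 1 if pos == sugar_at else 0
        found += 1 if pos == sugar_at else 0
        prev = smell(pos)
        other = "weaker" if cue == "stronger" else "stronger"
        if use_patterns and rate(cue) > rate(other):
            pass                                  # this pattern pays off: keep going
        else:
            move = random.choice([-1, 1])         # otherwise just wander
    return found

print("random wandering found sugar", run(False), "times")   # builds the tallies
print("using the patterns found sugar", run(True), "times")  # far more often
[/code]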

When we start talking about consciousness the ideas we're working with get a lot more complicated. But just to take humans as an example, I think we can say that everything we do is, at its root, designed to increase pleasure and/or avoid pain. We have just evolved very complicated ways of using these feelings. For example, humans are one of the few animals that inherently enjoy working together; this is a great survival tool, but it's also expressed as a feeling of enjoyment/pleasure. The question I'm really stuck on is: would it be possible to have a sense of pain or pleasure without being conscious? My initial reaction is no, but I certainly haven't convinced myself.



Jplus wrote:Now for the slightly more technical side of the nuts and the bolts, let's ignore the motivation and the environment for a moment and focus on your four pattern-processing steps. The first step (finding patterns) might actually be the hardest; there are techniques that are known to do the job in some cases (such as neural networks), but there is no general theory on what are the necessary and sufficient ingredients yet.

I thought that this would be the hardest step too; just looking at the history of AI, being able to find even (what we would think are) very simple patterns turns out to be a huge computational problem. My first thought is that our computers might be good at crunching numbers but bad at being intelligent, and that there are just other physical structures that are better at reacting to patterns. One idea would be a "guess and check" setup. Feed data with patterns in it into one side of a neural network, and have a copy of the data fed to the output of the neural network as well. Any part of the neural network that correctly predicts the pattern (it matches up with the data fed to the output) is reinforced, any part that doesn't match up is disrupted. There would have to be some kind of time lag inherent in the comparison, since the network would only be useful if it creates a prediction before the pattern gets to that point anyway.
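As a sketch of that "guess and check" idea (using a plain error-driven weight update as a stand-in for "reinforced/disrupted", and a made-up signal as the data):

[code]
import numpy as np

# "Guess and check": a tiny predictor guesses the next sample of a signal from
# the last few samples; the real next sample (the copy of the data arriving at
# the output one time-step later) is the check, and the connections are nudged
# toward whatever would have made the guess correct.

rng = np.random.default_rng(0)
signal = np.sin(np.arange(2000) * 0.3)   # a patterned input stream (made up)

window = 5                                # how many past samples the net sees
w = rng.normal(0, 0.1, window)            # the network's connections
lr = 0.05

errors = []
for t in range(window, len(signal)):
    recent = signal[t - window:t]         # what the network observes now
    guess = w @ recent                    # its prediction, made in advance
    actual = signal[t]                    # the data arriving a moment later
    error = actual - guess
    w += lr * error * recent              # reinforce/disrupt toward the check
    errors.append(abs(error))

print("mean error over the first 100 steps:", round(np.mean(errors[:100]), 4))
print("mean error over the last 100 steps: ", round(np.mean(errors[-100:]), 4))
[/code]

The prediction error shrinks as the connections settle onto the pattern, which is the "reinforce what matches" part; what it doesn't have is any reason to care about the pattern, which is where the motivation/utility question comes back in.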

We could imagine how this type of intelligent network would work with the "hardwired" motivation above. Not only is the output compared with sense data, it's also compared with the sugar-sensing chemical receptor, and only the parts of the network that match both criteria (predictive, and resulting in sugar) are reinforced. Imagining how the conscious model would work would be a lot more complicated; I'd think that it would involve layers upon layers of intelligent networks to create a complex model, which is all guided ultimately by the senses of pain/pleasure.

User avatar
Jplus
Posts: 1721
Joined: Wed Apr 21, 2010 12:29 pm UTC
Location: Netherlands

Re: A framework for thinking about intelligence

Postby Jplus » Mon Sep 26, 2011 11:13 am UTC

TrlstanC wrote:1. Hardwired. For example, a simple organism that just hunts for sugars. It's got some kind of chemical receptors, some way to move around in the world, and maybe a couple other senses. It can start by moving around in the world randomly. It finds patterns as it moves around in the world, and if those patterns have a positive correlation with also finding sugars, then those patterns get used. Or, conversely patterns which are negatively correlated with finding sugars get avoided. I think this would be an example of something which is intelligent, but not conscious, it doesn't create a model of itself or the world.

2. Guidelines. Here is where I think consciousness could be another tool which is used with intelligence to make decisions. If we think of pain and pleasure as "guidelines" - they contain information about what sorts of things are good, and what aren't. The differences between the "hardwired" rule in the previous example, and a guideline here is that pain and pleasure are part of a model the agent creates of the world. There are also other qualia the agent could use (color, sounds, ect.) but pain/pleasure are different because they contain additional information i.e., "these kinds of things are good, these kinds of things are bad." The major advantage here would seem to be adaptability. In the first example, the hardwired agent can't do anything with a completely new pattern. If it doesn't have any experience correlating the presence of sugar with some sensory information then the information isn't useful. However, if we have a conscious agent it can apply the pleasure it gets from sugar to new and unexpected patterns.

One word: utility function. I'm quite amazed that apparently you reached this insight by yourself, so soon.
I certainly think it's problematic to speak in terms of consciousness, especially if you don't define the notion. Besides, isn't it quite likely that pain and pleasure are hard-wired?
"There are only two hard problems in computer science: cache coherence, naming things, and off-by-one errors." (Phil Karlton and Leon Bambrick)

coding and xkcd combined

(Julian/Julian's)

Fedechiar
Posts: 44
Joined: Sat Jan 29, 2011 10:35 pm UTC

Re: A framework for thinking about intelligence

Postby Fedechiar » Mon Sep 26, 2011 1:53 pm UTC

Jplus wrote:
TrlstanC wrote:1. Hardwired. For example, a simple organism that just hunts for sugars. It's got some kind of chemical receptors, some way to move around in the world, and maybe a couple other senses. It can start by moving around in the world randomly. It finds patterns as it moves around in the world, and if those patterns have a positive correlation with also finding sugars, then those patterns get used. Or, conversely patterns which are negatively correlated with finding sugars get avoided. I think this would be an example of something which is intelligent, but not conscious, it doesn't create a model of itself or the world.

2. Guidelines. Here is where I think consciousness could be another tool which is used with intelligence to make decisions. If we think of pain and pleasure as "guidelines" - they contain information about what sorts of things are good, and what aren't. The differences between the "hardwired" rule in the previous example, and a guideline here is that pain and pleasure are part of a model the agent creates of the world. There are also other qualia the agent could use (color, sounds, ect.) but pain/pleasure are different because they contain additional information i.e., "these kinds of things are good, these kinds of things are bad." The major advantage here would seem to be adaptability. In the first example, the hardwired agent can't do anything with a completely new pattern. If it doesn't have any experience correlating the presence of sugar with some sensory information then the information isn't useful. However, if we have a conscious agent it can apply the pleasure it gets from sugar to new and unexpected patterns.

One word: utility function. I'm quite amazed that apparently you reached this insight by yourself, so soon.
I certainly think it's problematic to speak in terms of consciousness, especially if you don't define the notion. Besides, isn't it quite likely that pain and pleasure are hard-wired?


Pain and pleasure are probably hard-wired, as they result from adjustments in the hormone balance (drugs probably prove the point, as you can get pleasure without doing the thing it's a reward for). We also tend to think of morals in a dual way without shades of gray (obviously, conscious efforts not to do so alter this, but the conscious effort is probably the result of a dualistic good/bad decision), and morals are probably a framework to use for decision-making. The mechanism, though, must be more complex, as decision-making can be quite hard in some situations. The correlation between intelligent decision-making and the evolution of morals might be interesting - will AIs evolve a moral framework as they grow smarter? Will it be similar to ours?

User avatar
TrlstanC
Flexo
Posts: 373
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: A framework for thinking about intelligence

Postby TrlstanC » Mon Sep 26, 2011 3:41 pm UTC

Jplus wrote:One word: utility function. I'm quite amazed that apparently you reached this insight by yourself, so soon.
I certainly think it's problematic to speak in terms of consciousness, especially if you don't define the notion. Besides, isn't it quite likely that pain and pleasure are hard-wired?


I definitely agree that it would be best to hold off on talking about consciousness and intelligence at the same time since both are difficult concepts by themselves. However, I end up thinking about intelligence after consciousness, so it's probably worth stating a couple assumptions I'm making:
  • Consciousness is created by a predictive process, and qualia are a defining quality of consciousness (for more info on exactly what I'm talking about see conscious thoughts)
  • Pain and pleasure are different from other kinds of qualia (color, sound, etc.) because they carry information, i.e. this is good, that is bad.
  • Consciousness is not a prerequisite for intelligence; I do come to the conclusion that intelligence is probably a prerequisite for consciousness, though.

So, I think we can talk about intelligence without having to talk about consciousness, even though our only experience is with ourselves using both, and furthermore we usually experience our intelligent processes through the "lens" of consciousness. Hopefully though we can consider intelligence by itself. One of the first steps to doing this is probably to consider utility functions as a specific class of senses. For example, we have lots of nerves that transmit signals that are eventually experienced consciously as pain or pleasure, but we could imagine an agent whose utility functions were just defined as "get these sensory nerves to fire as often as possible while keeping those sensory nerves from firing too often." The difference here is that there's no "experience." How well this kind of utility function works would be dependent on the balance between the different types of "utility senses."

Then we would have an intelligent agent with these capabilities (a rough sketch of how they might fit together follows the list):
1. "Information" senses - these would feed in to the intelligence processing area (find, store, recognize, predict patterns)
2. "Utility" senses - there would be a check to see if these correlate to the output of the primary intelligence area, which would make decisions about what to do with the output. In fact there would probably need to be several layers of intelligence/utility to deal with complex patterns.
3. Output/motor control - this is where the output would go to change the way the agent interacts with the world.
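Here's that rough sketch. The world, the senses, the utility check and the actions are all placeholders; the only point is the flow from information senses, through the pattern/decision area, past the utility check, to the motor output:

[code]
import random

# Placeholder world: the only thing in it is a temperature, and the agent's
# "utility senses" fire more strongly the closer it is to 20 degrees.

ACTIONS = ["heat", "cool", "wait"]

def information_senses(world):
    """1. Information senses: a coarse observation of the world."""
    t = world["temp"]
    return "too cold" if t < 19 else "too hot" if t > 21 else "ok"

def utility_senses(world):
    """2. Utility senses: fire more when things are going well."""
    return -abs(world["temp"] - 20)

def motor_output(action, world):
    """3. Output/motor control: acting changes the world."""
    world["temp"] += {"heat": 1, "cool": -1, "wait": 0}[action]

# The intelligence area, reduced to a stub: remember which action, in which
# observed situation, tended to be followed by the utility senses firing more.
memory = {(obs, a): 0.0 for obs in ("too cold", "ok", "too hot") for a in ACTIONS}

world = {"temp": 5}
for step in range(200):
    obs = information_senses(world)
    if step < 20:
        action = random.choice(ACTIONS)   # early flailing, i.e. gaining experience
    else:
        action = max(ACTIONS, key=lambda a: memory[(obs, a)])
    before = utility_senses(world)
    motor_output(action, world)
    # the utility check: did what the decision area just did pay off?
    memory[(obs, action)] += 0.1 * (utility_senses(world) - before)

print(world["temp"])   # should end up within a degree or two of 20
[/code]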

Given these assumptions, is it possible to have an intelligent system that works as proposed (find, store, recognize, and predict patterns)? I think the most difficult part would be finding the patterns, but we could also assume that that ability is so useful that dedicating a large percentage of the agent's resources to that one goal would be worthwhile. And then, is this the kind of intelligence that we actually observe, in ourselves or other animals? Is it something that we think we could build a machine to do? And would it be easier to build a computer which can carry out the calculations necessary to create this behavior, or is there a more efficient structure than logic gates, e.g. something that isn't binary? Another possibility this framework raises is that we would need to give any intelligent agent we're trying to create lots of time to gain experience.

User avatar
Daywraith
Posts: 36
Joined: Fri Sep 26, 2008 3:52 am UTC

Re: A framework for thinking about intelligence

Postby Daywraith » Sat Oct 08, 2011 6:16 am UTC

Interesting thread. I just had a quick read through and have a couple of thoughts/questions.

I think your second rule, about storing the (new?) patterns for later use, isn't needed for intelligence; it probably is required for consciousness and/or learning.

For example, it's fairly common practice in AI to train a neural network on a data set and then freeze the synaptic weights, thus preventing it from learning or storing new information. I think I would be hard pressed to say that an agent was no longer intelligent simply because it couldn't learn. At the very least its behaviour, which hasn't changed, must be considered intelligent.
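As a toy version of what I mean (the task and the numbers are made up):

[code]
import random

# Train, then freeze: a perceptron learns to tell whether a point lies above
# or below the line y = x, its weights are then frozen, and it keeps
# classifying new points (almost all of them) correctly without learning
# anything further.

random.seed(1)

def label(x, y):                      # ground truth: is the point above y = x?
    return 1 if y > x else -1

w = [0.0, 0.0, 0.0]                   # weights for (x, y, bias)

def output(x, y):
    s = w[0] * x + w[1] * y + w[2]
    return 1 if s > 0 else -1

# Training phase: the perceptron rule, weights are allowed to change.
for _ in range(2000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if output(x, y) != label(x, y):
        t = label(x, y)
        w[0] += t * x
        w[1] += t * y
        w[2] += t

# "Freeze the synaptic weights": from here on, w is never touched again.
correct = 0
for _ in range(1000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    correct += 1 if output(x, y) == label(x, y) else 0

print(correct, "/ 1000 new points classified correctly with frozen weights")
[/code]

The behaviour in the second loop is exactly the behaviour that was learned in the first one; nothing new is being stored, but I'd still be reluctant to call it unintelligent.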

A real-world example might be someone suffering from anterograde amnesia (the loss of the ability to create new memories) - does such a person no longer qualify as intelligent under your system? Ignoring consciousness for a second, you could argue that patterns previously learned would make them qualify as intelligent, but then shouldn't instincts count as well?

The second thought is about your use of the terms pain and pleasure. I think it would be better to use happiness/sadness when talking about consciousness and pain/pleasure when talking about senses; I think there is a clear differentiation point between the two. I would also argue that human individuals seek to maximise happiness rather than pleasure, and to be honest I'm not sure that's even true.

I really like your definition of intelligence. Most definitions of intelligence end up with a black box determining optimal behaviour, which normally needs an environment and a definition of optimal.

User avatar
TrlstanC
Flexo
Posts: 373
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: A framework for thinking about intelligence

Postby TrlstanC » Tue Oct 11, 2011 3:17 pm UTC

Daywraith wrote:For example it's fairly common practice in AI to train a neural network on a data set then freeze the synaptic weights thus preventing it from learning or storing new information. I think I would be hard pressed to say that an agent was no longer intelligent simply because it couldn't learn. At the very least its behaviour which hasn't changed must be considered intelligent.


Agreed, I don't think that all 4 steps have to be happening all the time for something to be intelligent, but they should all have happened at some point. For example, a neural network that is trained and then frozen wouldn't learn any new patterns, but it would continue to use the patterns it had recognized and stored previously. This is different from a situation where someone "programs" a neural network and then uses it to match patterns. In the second case the network didn't do any learning on its own; it never found or stored any patterns.


Daywraith wrote:The second thought is your use of the terms Pain and Pleasure. I think it would be better to use Happiness/Sadness when talking about consciousness and Pain/Pleasure when talking about senses. I think there is a clear differentiation point between these two. I would also argue that human individuals seek to maximise happiness rather that pleasure and to be honest I’m not sure that even true.
Well, I was assuming that consciousness implies the experience of senses, i.e. qualia, and that consciousness is necessary to have that kind of experience. I was also assuming that when we're happy or sad we're actually experiencing pleasure or pain in some way, even if it's not a direct "physical" sensation.

I'd actually wonder if the opposite is true: is it possible to be happy or sad without being conscious?

User avatar
TrlstanC
Flexo
Posts: 373
Joined: Thu Oct 08, 2009 5:16 pm UTC

Re: A framework for thinking about intelligence

Postby TrlstanC » Wed Oct 12, 2011 4:08 pm UTC

Ignoring the interactions with consciousness for a moment, here's an interesting conclusion about our progress on AI based on these ideas:

If the intelligence of an agent improves with both a) better pattern finding/storage and b) more experience, then we would expect to see two strategies for developing AI. One focuses on creating an agent with whatever hardware (and algorithms) are available currently and then trying to get the agent as much experience as possible. The other is to develop better hardware, on the theory that time spent improving the speed will be better used than time spent gaining experience with outdated hardware. I've seen some attempts at the former, but the vast majority of current AI research seems to be based on the latter.

There is of course the possibility that we don't need to make a choice - that it's not necessary to start over gaining new experience from scratch every time we develop better hardware or algorithms - but this doesn't seem to happen in AI. This is in contrast to the development of other technologies, e.g. genome mapping. When the first complete human genome was mapped, the majority of the sequences were mapped in a relatively small amount of time at the end of the project, but they didn't need to start over; they were able to keep the progress that had been made on older and outdated technology. But we don't see this with AI: we don't see a simple AI program build up some experience and store some patterns that are then further built upon by a newer AI agent. Instead every new agent starts from scratch.

One conclusion we could draw from this is that the only way we've tried handling the storage step, as a separate digital process, isn't an efficient way to do it. It's certainly not the way that intelligent animals store their experiences, which is neither digital nor separate from the rest of the processing. Of course the characteristics of an agent that was designed and one that evolved will be different, and one goal of AI could be to overcome the limitation of intelligent animals that when they die their experience is lost. So I can see the perceived benefit to either digital or separate storage. And this doesn't mean that both characteristics are fatal flaws, or even that both of them are inherently inefficient; it's just that we've only really tried storage in one particular way, as both digital and separate. It may be that analog storage is better, or it may be that a combined storage & processing architecture is better, or it could be that both would be an improvement. Either way, I would expect that if we do find a better way to store the recognized patterns, i.e. experiences, of an intelligent agent, then one characteristic it will have is that we won't have to start over from scratch every time we develop better hardware.

