Recently a problem at work cropped up for which I felt an "AI"* solution might be worth exploring. Despite my dabbling in the area, I didn't know where to start. While I didn't have much hope for an answer, a quick email to the Perl AI mailing list yielded many interesting suggestions. Currently, I am playing with AI::NeuralNet::Mesh. My first attempts at teaching my computer binary resulted in a rather slow random number generator. Another email to the list showed me the error of my ways. Later, I'll be off to Powell's Books to find some good resources on neural nets and pattern recognition.
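For the curious, here's roughly what such a first attempt looks like. This is only a sketch based on the module's documented new/learn_set/run interface, and the two-bit-to-decimal task is just an illustrative stand-in for what I was actually feeding it:

    #!/usr/bin/perl
    use strict;
    use AI::NeuralNet::Mesh;

    # Two layers of two nodes each, with a single output node.
    my $net = AI::NeuralNet::Mesh->new(2, 2, 1);

    # Teach it two-bit binary: each input pair maps to its
    # decimal value.
    $net->learn_set([
        [0, 0] => [0],
        [0, 1] => [1],
        [1, 0] => [2],
        [1, 1] => [3],
    ]);

    # Ask the trained net what "1 0" is.
    my $result = $net->run([1, 0]);
    print "binary 10 is $result->[0]\n";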
Curiously, my link to the AI module points to the CPAN Testers page because I can't actually find the module on the CPAN. I was able to install it with CPAN.pm, though, which just tells me that I have plenty more to learn about how the CPAN works.
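If you want to try it yourself, CPAN.pm found it without trouble:

    # from the CPAN shell
    perl -MCPAN -e shell
    cpan> install AI::NeuralNet::Mesh

    # or as a one-liner
    perl -MCPAN -e 'install AI::NeuralNet::Mesh'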
*Why did I quote "AI"? There's a weird contradiction in AI research. Teaching computers to behave like human brains has, for the most part, been a failure. They simply don't do a good job of it. As a result, many traditional AI problems are called AI while we can't solve them, but are no longer AI once solved. Chess, for example, is generally brute-forced. That's not AI. The same goes for speech recognition and similar problems. Some people mistakenly think that the type of work I was doing in Prolog is AI, but that's actually logic programming and not the same thing at all -- though some argued that it was when it was first in vogue. I guess the actual distinction we make in AI becomes "once the human mind understands what's going on, it's no longer AI". That strikes me as a tad hubristic. If we ever develop a computer that passes the Turing test and claims to be self-aware, will we deny that it's intelligent if we think we know how it works?
In this particular case, look here.
I guess the actual distinction we make in AI becomes "once the human mind understands what's going on, it's no longer AI". That strikes me as a tad hubristic.
That does seem to be exactly how it works.
I don't think that is a totally fair way of putting it. What seems to happen is that a problem is proposed that exemplifies (or seems to exemplify) something only human thinking can do. AI people work on such a problem, thinking that a solution to it might help lead to a solution for general human thought. In the end, though, the best solutions to these problems always seem to end up being problem-specific heuristic approaches that do not generalize to thinking processes in general.
Chess programs, for example, are considered not to be AI because (1) they clearly do not mimic human thought patterns, since they have to analyse many orders of magnitude more positions just to break even with a human player -- they are certainly not thinking about chess as well as humans do -- and (2) these programs do not help in writing programs to discuss Shakespeare, or debug Perl scripts, or vacuum a house, or other "simple" human tasks. Now, if someone were to find a way to write a chess program that had an intuitive feel for which positions need to be analysed, and analysed only those, that would truly be AI, even if it didn't beat the brute-force programs; especially so if this method of developing "intuition" were not hard-coded chess knowledge, but knowledge of how to work with both hard-coded and learned chess knowledge and reason from the two together.
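To make "brute force" concrete, here is a toy sketch: exhaustive minimax over an invented take-away game (take one to three items from a pile; taking the last item wins). A real chess engine is doing essentially this, plus clever pruning, over an astronomically larger tree -- and that exhaustive search is exactly the part that involves no intuition at all.

    #!/usr/bin/perl
    use strict;

    sub minimax {
        my ($pile, $is_max) = @_;
        # Pile empty: the previous player took the last item and won.
        return $is_max ? -1 : 1 if $pile == 0;
        my $best = $is_max ? -2 : 2;
        for my $take (1 .. 3) {
            next if $take > $pile;
            # Recurse into every legal move -- no judgment, no pruning.
            my $score = minimax($pile - $take, !$is_max);
            $best = $score if $is_max ? $score > $best : $score < $best;
        }
        return $best;
    }

    for my $pile (1 .. 10) {
        printf "Pile of %2d: %s for the player to move\n",
            $pile, minimax($pile, 1) > 0 ? "win" : "loss";
    }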
I guess the actual distinction we make in AI becomes "once the human mind understands what's going on, it's no longer AI". That strikes me as a tad hubristic.

Perhaps you meant heuristic? "If we don't understand it, it's AI."