Uche Ogbuji has a few choice words for those who doubt the need and power of RDF:
RDF is for people who understand directed graphs. If you take any random audience, this is, of course, a small proportion. Same story for forensic histology, but I doubt Sean would moot for closing down all the crime labs.

His words are in response to Sean McGrath's article in ITWorld. Sean asserts that every layer of abstraction reduces your audience by half (Bosworth's principle).
Sobering stuff. Abstraction extracts such a terrible price in return for the benefits of complexity management it bestows on the chosen few. Abstraction creates a high priest environment in which only a few can ever hope to really understand the "vision" buried in all the abstraction. In the hands of the chosen few, the abstractions are a precision tool wielded to powerful effect. In the hands of the other 94%, the tool is more like a monkey wrench. A tool that can be used for every job but is the *wrong* tool for every job.

Uche is missing the point though.
In modern society, there is a very small proportion of the general population that builds and maintains the networks we use daily (roadways, TCP/IP or otherwise). You can call them a "high priesthood" if you want, but that's just confusing the issue. Relying on a small group to provide infrastructure isn't necessarily a bad thing. If everything works right, 94% of the world shouldn't care about infrastructure. Sean's argument is a straw man.
What Uche is missing is that there's a price to pay for specialization and hyperspecialization. Perhaps maintaining a (transportation|TCP/IP) network is a specialized skill, and it really is necessary to pay specialists to worry about the details to keep everything running. At the core, though, the basic concepts are rather simple. A small business owner can walk into Circuit City with $250 and get all the equipment he needs to get 80% of the work done. Similarly, thousands of homeowners engage in "amateur roadbuilding" every winter -- it's called shovelling the sidewalk. Again, not a perfect solution, but 80% of the work done with less than 20% of the effort, and it's not so abstract that no one understands it.
Look at HTML. Very few people who cobble together web pages are graphic designers or are skilled in the art of information architecture or page design. But HTML is simple enough to get the job done -- both by the professionals and by the amateurs looking for an 80% solution. The problem with RDF is that it totally ignores the masses. As Sean puts it, RDF focuses on the top 6%, neglecting the 94% majority. RDF is so complex that it's really just meant for hyperspecialized professionals, as Uche would have you believe. If we had to rely on marine biologists, geologists and particle physicists to perform all of the work to build our (canal|road|TCP/IP) networks, we'd still be stuck in the dark ages.
With tools like RDF geared exclusively for hyperspecialists, it's not surprising that the Semantic Web is the biggest letdown in computing since the big AI push of the 1980s.
Re:the semantic web will not be televised
ziggy on 2003-03-25T17:47:42
Not really. The whole point of the semantic web is to facilitate automatic discovery of what's what. XML is relevant because the whole web should be XML (self describing data), but web services are less a part of the semantic web. Sure they facilitate machine-to-machine communication, but the key stumbling block is finding meaning in the web.

Perhaps I'm mistaken, but I see XML and web services as an outgrowth of the semantic web effort so far: and both seem to have made a substantial impact so far.

That's not what I said. RDF needs to be ubiquitous, but it is not engineered to become ubiquitous -- it is engineered to express formal models made by formalists. HTML needed to be ubiquitous as well, and it was created in a manner that supported its ubiquity. Ease-of-learning/ease-of-use is an important secondary effect, but it is still secondary.

You are right though, RDF is not as easy to pick up as HTML.
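To put that in concrete terms, here's a rough sketch of the same simple fact -- "this page was written by Jane" -- first as the kind of HTML anyone types, then as RDF/XML. The page URL and the name are placeholders, and the Dublin Core creator property is just one plausible vocabulary choice:

    <!-- plain HTML: a link and some text, typed in Notepad -->
    <p>Written by <a href="mailto:jane@example.org">Jane</a></p>

    <!-- the same fact as RDF/XML: namespaces, a model of resources
         and properties, and a vocabulary you have to pick or invent -->
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:dc="http://purl.org/dc/elements/1.1/">
      <rdf:Description rdf:about="http://example.org/page.html">
        <dc:creator>Jane</dc:creator>
      </rdf:Description>
    </rdf:RDF>

The second form isn't hard for a formalist, but it is several conceptual steps removed from "make this bold".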
Re:the semantic web will not be televised
Elian on 2003-03-25T18:00:20
Not really. The whole point of the semantic web is to facilitate automatic discovery of what's what. XML is relevant because the whole web should be XML (self describing data), but web services are less a part of the semantic web. Sure they facilitate machine-to-machine communication, but the key stumbling block is finding meaning in the web.
If my time at a search engine is any indication, the true meaning of the web involves a rather inordinate amount of flesh-toned GIFs and JPEGs...
The biggest problem I've always seen in the whole "automatic discovery" and "meaning extraction" parts of the semantic web stuff is the fact that so much of the web actively lies about itself when asked, and that's one thing I don't see any way around.

Re:the semantic web will not be televised
darobin on 2003-03-27T14:07:41
It only works in a "web of trust", using digital signatures and trust propagation. That's why one of the first steps of the SemWeb was to create security specs for XML content. That's how FOAF works: your assertions are only indexed if you've signed your FOAF file properly, and then clients can choose to trust only stuff that comes from people you yourself trust or are trusted by those at n degrees of separation.
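For anyone who hasn't seen one, a minimal FOAF file looks roughly like this (the names and mailbox are made up, and real files are typically generated by a tool and then signed before aggregators will index them):

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:foaf="http://xmlns.com/foaf/0.1/">
      <foaf:Person>
        <foaf:name>Jane Example</foaf:name>
        <foaf:mbox rdf:resource="mailto:jane@example.org"/>
        <foaf:knows>
          <foaf:Person>
            <foaf:name>John Example</foaf:name>
          </foaf:Person>
        </foaf:knows>
      </foaf:Person>
    </rdf:RDF>

The trust propagation happens on top of files like this, not inside them.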
Re:the semantic web will not be televised
inkdroid on 2003-03-25T18:05:00
OK, I think I missed what you were getting at:

RDF needs to be ubiquitous, but it is not engineered to become ubiquitous -- it is engineered to express formal models made by formalists.

HTML was formally defined (SGML DTD), but browser makers fortunately made them forgiving...so HTML spread. Perhaps there isn't room for such looseness in finding meaning with RDF...or maybe there is?
Re:the semantic web will not be televised
ziggy on 2003-03-25T19:25:17
I think you're getting lost in the technical details. HTML's success isn't nearly as complex as you seem to think it is.

HTML was formally defined (SGML DTD), but browser makers fortunately made them forgiving...so HTML spread.

HTML is nothing more than tagged text. Period. Here's a paragraph, this section is bold, that's a bulleted list of items, and over there are some images. It's a very simple technology -- the only tool you really, truly need is vi, emacs or notepad. HTML succeeded where other attempts failed because HTML didn't need a specific authoring environment (like HyperCard, AuthorWare, Director, Shockwave, Flash), the tagging was actually easy to identify and use (unlike *roff, PostScript, RTF, *TeX, etc.), and it dealt with issues that people can easily understand (bold, italic, paragraph).
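To show just how little there is to it, here's a sketch of that whole inventory in raw HTML (the file name and the text are invented, obviously):

    <p>Here's a paragraph, and <b>this section is bold</b>.</p>
    <ul>
      <li>that's a bulleted list of items</li>
      <li>another item</li>
    </ul>
    <img src="photo.jpg" alt="and over there are some images">

Anyone who can type can pick that up in an afternoon; there's no formal model lurking underneath it.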
Another benefit of HTML is that it took a fabulously and notoriously complex format (SGML) and simplified it to the point where ordinary people can understand it. As a format that aimed for mass adoption, this was a requirement.
Perhaps there isn't room for such looseness in finding meaning with RDF...or maybe there is?

Actually, looseness of interpretation is the bane of HTML processing. That is one of the goals behind using XML to create new formats -- to reduce ambiguity. The core problem with RDF isn't the strictness of interpretation, but the huge impedance mismatch between reality and RDF's view of resources, subjects, properties and values.

Re:the semantic web will not be televised
inkdroid on 2003-03-25T19:34:27
I think you're getting lost in the technical details. HTML's success isn't nearly as complex as you seem to think it is.

Nah, I'm not lost. I remember what happened :) People saw that they could write invalid HTML, and browsers would accept it. In fact, nobody ever knew that their HTML was invalid, it just worked. Point taken, it is the bane of HTML processing. I guess I just don't find the concept of RDF to be so complicated in the first place...and don't feel like much of a high priest either.

Re:the semantic web will not be televised
ziggy on 2003-03-25T19:59:51
No one is really complaining about watered down RDF as it exists with RSS 1.x or FOAF. And that's a very simplified view of RDF engineered to be widely adopted and not require high priests standing on the high altar of graph theory.

How do you phrase these statements in RDF unambiguously?

Jarkko Hietaniemi is currently the release manager for the current stable version of Perl. He was the release manager for the 5.7.x development releases, starting mid-2000, and is now responsible for maintenance updates to the 5.8.x release tree. Gurusamy Sarathy preceded Jarkko as the release manager for the 5.5.x development releases and the 5.6.x stable releases. (Releases with an odd middle digit are development versions that precede a stable version, which use the next even digit.)
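Just to make the difficulty concrete, here's one possible stab at encoding only the first of those sentences, with the namespace and every property name invented on the spot (which is itself part of the problem):

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:p5p="http://example.org/perl-porters#">
      <rdf:Description rdf:about="http://example.org/people/jarkko">
        <p5p:releaseManagerOf rdf:resource="http://example.org/perl/5.8"/>
      </rdf:Description>
    </rdf:RDF>

Even that much glosses over "currently"; it says nothing about the 5.7.x releases he used to manage, nothing about when Sarathy handed the job over, and nothing about the odd/even numbering rule. Every one of those requires more vocabulary decisions, and two people making them independently won't make the same ones.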
Re:the semantic web will not be televised
inkdroid on 2003-03-25T20:03:14
Watered down RDF I understand. Perhaps *you* are getting lost in the technical details and want me to be as well.

Re:the semantic web will not be televised
inkdroid on 2003-03-25T20:12:01
Actually, I take that back. Perhaps I don't understand enough to be confused :)
Re:They're both missing the point
darobin on 2003-03-27T14:15:41
But what makes RDF any different than, say, a multimeter, surgeon's scalpel, saxophone, or garbage collection algorithm?
One of the stated goals of RDF is to get metadata out there on the Web. For that to happen, people need to be putting it there. If it is hard enough that even people that understand directed graphs, the RDF model, RDF Schema, and XML really well would rather not do it, somehow it has failed...
--Nat
Re:The high priests of RDF
darobin on 2003-03-27T14:16:18
I can second that, the parts of Shelley's book that I read were really good.
Re:The high priests of RDF
pudge on 2003-04-09T02:46:27
Shelley is writing it? I hope it is free of bull shit, but I am not optimistic. :-)
Google's Sergey Brin doesn't believe in it either:
Interview with Sergey Brin, Google Co-Founder

He basically said he doesn't believe in the semantic web as a set of linked RDF data-structures. His basic argument is that the structure of natural language and what it presents is much much richer than meta-data tagging schemes.
Ancient wisdom (1983) is conveniently forgotten:
Any counter argument?
SUO: Enlightened Semantic Web
Google's Sergey Brin doesn't believe in it either:
Interview with Sergey Brin, Google Co-Founder

He basically said he doesn't believe in the semantic web as a set of linked RDF data-structures. His basic argument is that the structure of natural language and what it presents is much much richer than meta-data tagging schemes.
Ancient wisdom is conveniently forgotten:
RJ vs AI: Science vs Engineering?

"If AI has made little obvious progress it may be because we are too busy trying to produce useful systems before we know how they should work." -- Marcel Schoppers (1983!)
Any counter argument?
(Sorry for the duplicate, tiny fixed-size edit window sucks, can't see the whole lines)
Re:No need to worry.... (fix)
ziggy on 2003-03-30T18:24:33
Nope. Sergey's points are right on. And they also illustrate why the semantic web won't bootstrap itself. Regardless of the pros and cons as they exist today, the semantic web is doomed tomorrow if it cannot bootstrap itself. (I've been waiting over 3 years to see RDF bootstrap into something useful and non-trivial.)

Any counter argument?

The semantic web of RDF metadata is permanently doomed because it is too difficult for people to create that metadata -- either using Notepad or using non-existent RDF tools to make annotations. Perhaps if it were easier to write tools to create/manipulate RDF, that would change; as it stands, the RDF tools I've seen are either very watered down, or academic curiosities.
It's nice that Google has the manpower and brainpower to make something out of natural language metadata. As long as it takes a significant R&D investment to create and process metadata, the semantic web cannot bootstrap itself. So perhaps Google will do something interesting, but it won't be creating the semantic web...
Re:No need to worry.... (fix)
kevembuangga on 2003-03-31T18:00:19
I am not too sure that Google has really any interest or real capability (in spite of the manpower and brainpower) to do much about bringing semantics to the Web.
See my arguments with Danny Ayers about what I think to be the REAL problem with computerized semantics.
There is trouble well before the "difficult[ies] for people to create that metadata".
I like very much your "bootstrapping" idea. It is obvious that, given the HUGE amount of knowledge we will have to feed in before anything worthwhile becomes usable, the process must be automated as soon as feasible.
Otherwise this whole Semantic Web idea will go the way of Cyc (that is, down the toilet...)