I think I've got the concept from what I've read so far of Paul Graham's "On Lisp".
Basically, Lisp lets you modify the language to suit your program, and then build the program around that new framework. So, for example, if you need constructs specifically for numeric processing, you can build them into the core language's syntax. This is like writing functions, but on steroids.
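To make that concrete, here's a minimal sketch in Common Lisp (the sigma macro is a made-up illustration, not something from the book): a summation construct that, once defined, reads like part of the language and expands into an ordinary loop at compile time.

    ;; A made-up numeric construct: summation notation as a macro.
    (defmacro sigma ((var from to) &body body)
      "Sum BODY over VAR ranging from FROM to TO, inclusive."
      `(loop for ,var from ,from to ,to
             sum (progn ,@body)))

    ;; Once defined, it reads like built-in syntax:
    ;; (sigma (i 1 10) (* i i))  => 385, the sum of the first ten squares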
This sounds ideal for hacker types and for proficient Lispers. The problem is the same one I'd face coming to work on someone else's code written with lots of invented Perl syntax (say, constructed with source filters): Perl 5 code full of completely new constructs would be an awfully steep learning curve for me, even if it was really quick for the original developers to write.
Of course it's very different from Perl and source filters, since in Lisp everything looks the same (parentheses with arguments). So it's not as if you can invent a completely new surface syntax (or I haven't gotten that far into the book yet).
Still, when all's said and done, I think the syntax has been one of Lisp's biggest downfalls. As kids we found BASIC easy to grasp (OK, we found Logo easy to grasp, but we couldn't really build anything serious with it ;-). Learning on BASIC and progressing to anything that looks completely different, like Lisp (or Haskell or ML or whatever), is really hard. Progressing to something similar but more powerful (heck, even assembler is vaguely similar) is a lot easier for our minds to grasp.
Anyway, just some random thoughts on my recent readings. Hope you enjoyed them ;-)
I've been playing with LISP a bit myself, and I didn't find the syntax much of a problem once I was used to it, although it can sometimes be a right pain to get the nesting right when you add something into the middle of an expression. I think with more time using it I'd stop doing that. Besides, the barrier to thought isn't the syntax: the language itself is different and does require a new way of thinking.
It occurred to me while I was toying with LISP that it was the first language I've learnt in a while that caused me to think differently about problems in other languages. I first learnt BASIC and Visual Basic, then I learnt Perl and found that I couldn't just write BASIC in Perl. After that I learnt C and Java, but found I could think about Java programs in Perl, as it were; I've never found myself thinking in a Java way for Perl, though. With LISP I'm starting to think LISPy thoughts while I write Perl. All in all, very enlightening.
As for domain-specific enhancements... I think in the end these are a win, as they aren't any different from a private set of libraries. I expect that Perl 6 will make it a lot easier to create these for Perl, and I also expect that people will make use of them, even going as far as writing their own little languages as CPAN modules.
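Something like this hypothetical Common Lisp construct is the kind of thing I mean (with-retries and flaky-network-call are made-up names, not an existing module): a domain-specific control structure that ships like any other piece of library code.

    ;; A hypothetical domain-specific construct, distributed like any
    ;; other library: run a body, retrying on error up to N times.
    (defmacro with-retries ((n) &body body)
      (let ((attempt (gensym "ATTEMPT"))
            (limit   (gensym "LIMIT")))
        `(let ((,limit ,n))
           (loop for ,attempt from 1 to ,limit
                 do (handler-case (return (progn ,@body))
                      (error (e)
                        (when (= ,attempt ,limit) (error e))))))))

    ;; Callers see what looks like new syntax, but it's just a library:
    ;; (with-retries (3) (flaky-network-call))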
It's all about bottom-up programming, with the wrinkle that your notation for solving the problem is part of the API. (Imagine what GTK would look like if it weren't full of varargs hacks...) I'm not sure how I feel about that, though; it sounds like the origin of the Tcl problem: dozens of dialects, all slightly different. It may work well in the small, but I don't know how it can possibly scale in the large. Perhaps Common Lisp is better suited for a CCLAN that contains lots of (reusable, interoperable) macros... I don't know yet. I guess we'll find out when Perl 6 hits the streets, or when there are many more source filters for Perl 5.
As to changing Lisp's lack of syntax, I do believe that's possible, depending on the macro system. There was some theory about partial evaluation with Scheme: the general idea was to do some rudimentary translation of arbitrary syntax (Perl, Python, Java) and "compile it down" to Scheme, so all you would really need is a Scheme interpreter. Of course, the theory worked better in theory than in practice.
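The translation half of the idea is easy enough to sketch (in Common Lisp here rather than Scheme, and as a toy of my own, not the actual research described): walk some foreign surface syntax and rewrite it into forms a plain Lisp evaluator can run.

    ;; A toy translator: rewrite binary infix trees like ((1 + 2) * 3)
    ;; into prefix Lisp forms that eval can run directly.
    (defun infix->prefix (expr)
      (if (and (listp expr) (= (length expr) 3))
          (destructuring-bind (lhs op rhs) expr
            (list op (infix->prefix lhs) (infix->prefix rhs)))
          expr))

    ;; (infix->prefix '((1 + 2) * 3))        => (* (+ 1 2) 3)
    ;; (eval (infix->prefix '((1 + 2) * 3))) => 9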