
Zen and the Art of Software: The Innovation Interview with Grady Booch (Part 2)

In the pantheon of world-famous computer scientists, Grady Booch is the star who co-authored the Unified Modeling Language (UML) and helped pioneer object-oriented analysis and design. He is a Fellow of IBM, the ACM and the IEEE, the author of six books and hundreds of articles and papers, and the creator of the Booch Method of software engineering. Grady serves on several boards, including that of the Computer History Museum, and is the author, narrator and co-creator of what could be seen as a historical magnum opus of the technological world, COMPUTING: The Human Experience.

To view the full introduction to this multi-part interview with Grady, and Part 1 of the series: Click here

Grady Booch: Capital I Interview Series – Number 14

[This was a joint conversation between Grady, Michael and myself. I’ve italicised Michael’s questions to Grady so you are able to differentiate between us – though, I think it will become obvious as you read – lol!]

Grady, let’s begin with the very basics. As this is the Innovation Interview Series, the natural starting point is: how do you define innovation?

Ecclesiastes 1:9 has this great phrase:

“What has been will be again. What has been done before will be done again. There is nothing new under the Sun.”

The way I take it is that innovation – really deep innovation – is about warming the Earth with the Sun of your own making. And to that end, that’s how I distinguish the ‘small i’ from the ‘Big I’.

The ‘small i’ therefore means: I may have a brilliant idea and it warms me, but the ‘Big I’ Innovation is where I can start warming others.  There are new suns possible; there are new ways of warming the Earth… And I think innovation is about doing so.

One of my heroes is the physicist Richard Feynman. If you read any of his stuff or watch his physics lectures – which are just absolutely incredible [Ed. Note: As is his series: The Pleasure of Finding Things Out] – there are some conclusions you can draw (and there is a delightful article someone wrote about the nine things they learned from Feynman). The way I frame it is to say that I admire him and his innovation because he was intensely curious, but at the same time he was bold; he was not fearful of going down a path that interested him. He was also very child-like and very, very playful. In the end, what really inspires me in Feynman’s work is that he was never afraid to fail; much like Joseph Campbell observed, he followed his bliss.

Richard Feynman

I think that many innovators are often isolated because we’re the ones who are following our bliss; we really don’t care if others have that same bliss.  We are so consumed by that, that we follow it where it leads us, and we do so in a very innocent, playful way… We are not afraid to fail.

I’ve noticed that there is often a level of audacity and a lack of fear within innovators, but sometimes I wonder if that audacity and lack of fear could frighten general society.

Well, I think there’s a fine line between audacity and madness.

And that depends on what side of the fence you’re on.

Exactly. It also depends upon the cultural times. Because, what Galileo said in his time [that the earth and planets revolve around the sun] was not just audacious, it was threatening.

To the church, absolutely.

In a different time and place [the response to] Galileo would have been: “Well, yeah, that’s right. Let’s move on now”. [Instead, he was tried by the Inquisition, found suspect of heresy, forced to recant, and spent the rest of his life under house arrest.] The sad thing is that you may have the most brilliant idea in the world but, in the wrong time and place, never go anywhere with it.

Take a look historically at Charles Babbage. I think he was a brilliant man who had some wonderful ideas; he was very audacious, and yet he’s a tragic figure because he never really understood how to turn his ideas into reality. [A mathematician, philosopher, inventor and engineer, Babbage originated the idea of a programmable computer.] That’s what the ‘Capital I’ means to me. I think that’s why Steve Jobs was so brilliant; it’s not just that he had cool ideas, but that he knew how to turn them into an industry.

We have a golden rule that it really doesn’t matter how cool your tech is if nobody’s using it. And it’s a shame because there are some incredible innovations out there, but so many innovators haven’t learned Jobs’ magic of marketing.

KimmiC rule: It doesn’t matter how ‘bright the light’ if no one is using it to read.  

I think that’s especially true of our domain of computing systems, because we are the ones who are most comfortable – as a gross generalisation – with controlling our machines. Being able to connect with humans is a very different skill set. To find people who have the ability to do both is very, very challenging indeed.

Zuckerberg is a brilliant programmer, and he had the sense to surround himself with the right people so that he could make those things [Facebook] manifest.  There are probably dozens upon dozens of Zuckerbergs out there, who had similar ideas at the same time, but they didn’t know how to turn them into reality.

The same thing could be said of Tim Berners-Lee: a brilliant man, a nice man… He was in the right place at the right time and he knew how to push the technology he was developing. The things he was developing sat in a vast primordial soup of ideas.

Tim Berners-Lee

HyperCard was out; so why didn’t HyperCard succeed while Tim’s work did? Part of it is technical, part of it was just the will of Apple, and part was his [Tim’s] being in the right place at the right time.

And HyperCard influenced Tim. Even Bill Atkinson, creator of HyperCard, said that if only he had come up with the notion of being able to link across [Hyper]card decks, he would have invented the prototypical web. But he didn’t do it; he didn’t think about it.

Do you feel that you are ‘in the right time, at the right place’?

There are times that I think I was born in the wrong century, but I know that if I had been born in the Middle Ages, at my age, I would be long dead.

So, yes, I can say from a very philosophical basis: I am quite content with the time in which I am now living, because I cannot conceive of any other time in which I could have been successful.

I read a quote on Wikipedia… a story you apparently told:

“… I pounded the doors at the local IBM sales office until a salesman took pity on me. After we chatted for a while, he handed me a Fortran [manual]. I’m sure he gave it to me thinking, ‘I’ll never hear from this kid again.’ I returned the following week saying, ‘This is really cool. I’ve read the whole thing and have written a small program. Where can I find a computer?’ The fellow, to my delight, found me programming time on an IBM 1130 on weekends and late-evening hours. That was my first programming experience, and I must thank that anonymous IBM salesman for launching my career.”

It sounds like you were quite fortunate to have bumped into someone who was willing to take a chance with you very early on.

I think that’s fair to say. Though, if it hadn’t been that person, I imagine the universe would have conspired to find me another, because I was so driven. Looking back across fifty-some years, that was the right time and place. It may just have happened that he was the right guy at the right time; but there would have been others.

Grady Presenting

[But] I haven’t told you about the missteps I had and the people who rejected me; we just talk about the successes. Historians are the ones who write history, and because it’s the history of the winners, we don’t tend to write about the failures. But even Edison pointed out… I forget the exact quote, but the reason he succeeded so much is that he did so much, and he failed; he failed more than others on an absolute basis, but he tried more.

“I have not failed. I’ve just found 10,000 ways that won’t work.” ― Thomas A. Edison

What, in your view, gets in the way of the success of innovation?

I think the main thing is the fear of failure. I run across people like Babbage, for example… or this gentleman I was mentoring earlier today, who are so fearful of not doing something absolutely perfect that they are afraid to turn it into a reality. I think some innovators are so enamoured with perfection they are afraid to fail, and therefore never do anything.

Within this milieu you seem to have had your fingers in many interesting pies.  One that I think must be especially fascinating is your work with the Computer History Museum.  How did you get involved in that?

In a way they came to me.  My interest has been in software, it always has been.  I forget the circumstances but, some years ago, I connected with John Toole, who was the original CEO of the Computer History Museum when it was in the Bay Area. He showed me around the warehouse that they had set aside at Moffett Airfield.

Not long before that they had shipped a lot of the materials from the old computer museum in Boston out to the Bay Area.  Gordon Moore [co-founder and Chairman Emeritus of Intel] and others had said they wanted to make a museum, and they funded that effort.  So, I was around the edges of it in the early days. I thought it was fascinating.

I think the reason it attracted me in the first place is that I have an interest in the appreciation of history – not just the history of technology, but the history of humanity.

As I went to the exhibits I remember making the observation to John that I thought their plans were great, but, projecting out to one or two generations, there wasn’t going to be too much that was interesting to display in the museum because, by then, all of the hardware would have shrunk to a tiny size and we’d need microscopes in the exhibits.

“And so, therefore, John”, I said, “what are you doing about preserving the history of software,” which is a very ephemeral thing.

Think about getting the original source code to the [IBM Operating System] 360, or the original source code to Facebook.  Because these are such ephemeral things, people throw them away.  In fact we, IBM, no longer have the original source code to the first version of OS/360; it’s gone.  There are later versions but not the original one.

Facebook Source Code

When Microsoft decided to stop production on the Microsoft Flight Simulator – I mean, this was a ground-breaking program – I wrote off to Ray Ozzie [Microsoft CTO and Chief Software Architect from 2005 – 2010] and said: “What are you guys going to do with the software? Can we have it?” He munched around on it for a while, but I think it’s lost for all time.

We’re in an interesting period of time and my passion, which has led me to the museum, is to say: now is the time to preserve software! We don’t know how to present it, we don’t know what to do with it once we have it, but let’s worry about that in future generations and just capture it now.

It’s very similar to what Hollywood has found with a lot of their film stock. A lot of it was just being lost or destroyed, but there is so much cultural history in those records.

Yes, exactly.  So, prior to being on the board, I set up a workshop at the museum looking at the preservation of classic software.  I wrote to 500 of my ‘closest friends’… people ranging from Marvin Minsky [cognitive scientist in the field of AI] to some other developers I knew, and everybody in between, and asked: “What software would you preserve for future generations?”

We came up with a long list.  I think that very idea inspired Len Shustek, who’s the president of the museum, to invite me on to be on the board of trustees.

What is your favourite exhibit in the museum?

I like the [IBM] 1401 restoration. They have a couple of 1401 machines and they’ve gotten them running again. It’s fun to be in a place where there is something dynamic and alive and running, and you can be in the midst of it. Just walking into the room, you smell old computers; and that’s a pretty cool kind of smell. So is the fact that it’s running and clacking away.

The 1401

Fred Brooks [IBM software engineer] and I had an interesting discussion once, in which I lamented the fact that our computers make no noise, because – and I know I sound like an old guy, but – I remember you could hear some of the earlier computers I worked on. They were clattering in one way or another, be it their hard drives or their tapes, and you could get a feel for where the program was just by listening.

You can’t do that now with our machines; they are all very, very quiet. So, the 1401 exhibit has this wonderful visceral immersive display, in which you hear it and smell it as it processes.

I’ve actually seen people get a little misty-eyed just thinking about a dial-up tone, and you certainly seem to have some ‘misty memories’ too. But let’s look forward now: what new things do you think may be exhibited in ten years’ time?

I think that’s the next interesting challenge. We know how to display physical things, but there aren’t that many more things, like old machines, to collect, because they are disappearing.

If you go to the exhibits, you’ll see things get smaller and smaller, and there is more of an interest in software. I think the interesting problem for the museum to attempt is: how do we present software to the general public so that we open the curtain on it and show some of the magic and the mystery therein? I think software can be very beautiful, but how do I explain that to someone who can’t see software? That’s an interesting challenge.

You’ve got to look at it like an art form. Source code, especially some of the well-written stuff, looks physically beautiful; forget about what it actually does. There are many different dimensions you can look at to try to get people’s interest.

[Editor’s Challenge to artists: here is a piece of code I’ve ‘mucked about with’ – why not see what code inspires you to create, and send us a picture, which we’ll share with our readers, Grady Booch and the Computer History Museum!]

I think it’s very much like modern art because you can look at a bit of an impressionistic painting and you may not get it. Often the reactions are: “My kid could do that kind of thing.”

Well, not exactly; because the more you learn about it, the more you learn how much that painting – or whatever the art form is –  speaks to you and tells you stories.  It requires a little bit of education.

There is a visceral reaction at first to some art, but the more you know about it, the more you can appreciate its subtlety. I think the same is true of software. We (the museum) have collected the original source code to MacPaint, which turns out to be a really beautiful piece of software.

I’m using a phrase here that has meaning to me – beautiful – but requires explanation to the general public to say: why is this a beautiful piece of code, why does it look so well-formed?  I think that’s a responsibility we have as insiders to explain and teach that kind of beauty.

What are your thoughts about the emerging trends in Innovation and technology?

Well, the web has been an amazing multiplier, and yet at the same time it has also increased the noise. Therefore, the ability to find the real gems in the midst of all this madness is increasingly challenging. For example, with the computing project [COMPUTING: The Human Experience], we crowdsourced some initial seed funding for our work.

We could not have done this in the past without something like the web. We put this appeal out to the world and it gave us access to people we otherwise could not have reached. I think the web has produced an amazing primordial soup of ideas into which we can tap; and that is so game-changing in so many ways. That’s probably the biggest thing. [You can contribute to and volunteer for the project here.]

The web has changed everything; and those who don’t keep up are doomed to be buggy-whip producers.

Yes, exactly.  Or companies like Kodak.

I had the opportunity to speak to Kodak’s developers about 15 years ago. It was a small group of people on the computer side of Kodak, and I remember saying to them: “Look guys, the future of Kodak is in your hands… so, what are you going to do about it?”

I Tweeted about it not too long ago with a sort of “I told you so.”  And yet, I don’t know whether or not it was inevitable.  It could be the case that some businesses simply die because they just don’t make sense any more.

And they should die sometimes.  But I think early IBM was a good example of a company that understood what business it was in.  I don’t think Kodak really understood what business it was in, towards the end, and that’s what killed it.

I agree, very much so.

Some web business models are founded on the idea that a company has a right to use and profit from an individual’s data and personal information… What are your thoughts on that? Do you think that’s a sustainable business model? I believe that the general public is wising up to this very quickly and will soon expect some recompense for the use of their data.

I think there is a local issue and there is a global issue that is even harder to tackle. In the case of the Facebooks and the Twitters of the world, the reality is that when I subscribe to those services, I do have a choice – I can choose whether or not to use them. And the very fact that I’m using those services means I am giving up something in the process.

So, why should I be outraged if those companies are using my data when I’m getting those services for free? It seems like a reasonable exchange, and I, as an adult, have the responsibility of choice. Where it becomes nasty is when I no longer have choice; when that choice is taken away from me. That’s when it becomes outrageous: when my data is being used beyond my control, [in a way] that I did not expect.

I think that will sort itself out over time; capitalism has a wonderful way of sorting things. It’s also the case that we have a generation behind the three of us who are growing up, if not born, digital. They have a very different sense of privacy, so I’m not so concerned about it. We have lots of ‘heat and smoke’, but it will resolve itself.

What I find curious is that the ‘heat and smoke’ and discussions are hardly any different from what was initially said about telephones or, for that matter, the printing of the book.  Look at some histories of how phones were brought into the marketplace and you’ll find almost identical arguments to those that are going on today.

I trust the human spirit and the way capitalism works to find a way.  What’s more challenging is the larger issue, and that is the reality that there are connections that can be made in the presence of this data that are simply beyond anybody’s control.

I may choose to share some information on a social media source, or I may use a credit card or whatever, but the very act of participating in the modern society leaves behind a trail of digital detritus.  And I can’t stop that unless I choose to stop participating in the modern world.

I think this is a case where we’ll have politicians do some profoundly stupid things, and we’ll see lots of interesting cases around it.  But, we’ll get used to it.  I mean, people didn’t like the idea of putting their money in a bank for God’s sake, and we got used to it; I think the same thing will happen.

You brought up the Millennials – the digitised generation. What insights would you give them about being game-changers?

Does any young adult ever want the advice of their elders?

I didn’t ask if they wanted it… 🙂

You know… I think, we laugh about it, but the reality is – and I think Jobs said it well: “Death is a wonderful invention because it allows us to get out of the way and let the next generation find their own way.”  I’m comforted by that; I find great peace in that notion.  They need to have the opportunity to fail and find their own way.  If I were born a Millennial, I’d be growing up in an environment that’s vastly different than mine.

Though, in the end, we are all born, we all die, and we all live a human experience in various ways, there are common threads there… the stories are the same for all of us.  I think those are the kinds of things that are passed on from generation to generation, but everything else is details.

I would not be surprised if the structure of their brains is different from ours. I’ve been talking to guys who are 10 – 15 years younger than me, and they seem to find it extremely difficult to hold a train of thought over weeks or months – which you need when you’re doing some serious development or research. So, I wonder if we’ll see any really big innovations coming through from those generations.

You could claim that it’s not just the web that’s done that, but it’s back to Sesame Street and the notion of bright, shiny objects that are in and out of our view in a very short time frame.  Certainly I think a case can be made that our brains are changing; we are co-evolving with computing – we truly are.

But, at the same time, throw me in the woods and I couldn’t find my way out of it easily; I can’t track myself well, I can’t tell you what things are good to eat and what things aren’t. Those are survival skills that someone would have needed a century or two ago. So, my brain has changed in that regard, just as the Millennials’ brains are changing. Is it a good thing? Is it a bad thing? I’m not in a position to judge it, but it is a thing.

End of Part Two.  Part Three will be published next week – sign up for the blog and it will be delivered directly to your inbox!

You can learn more about Grady via the COMPUTING: The Human Experience website, Grady’s blog and his Twitter feed. Be sure to keep your eye on the COMPUTING: The Human Experience YouTube channel where Grady’s lecture series will be posted.

[Kim, Michael and Grady Skyped from their homes in Sydney and Hawaii.]

Antics with Semantics: The Innovation Interview with Semantics Pioneer, Ora Lassila

Wanting to speak to someone both interesting and inspiring about the Semantic Web and Innovation, Ora Lassila – an Advisory Board Member of the World Wide Web Consortium (W3C) as well as Senior Architect and Technology Strategist for Nokia‘s Location and Commerce Unit – was the obvious ‘go to guy’.

A large part of Ora’s career has been focussed on the Semantic Web as it applies to mobile and ubiquitous computing at the Nokia Research Center (NRC), where, among many other things, he authored ‘Wilbur’, the NRC’s Semantic Web toolkit. As impressive as that is, the more research I did and the more I found out about Ora, the more fascinating he and his career became to me.

Ora is one of the originators of the Semantic Web, having worked within the domain since 1996. He is the co-author (with Tim Berners-Lee and James Hendler) of what is, to date, the most cited paper in the field, ‘The Semantic Web’. Ora even worked on the knowledge representation system ‘SCAM’ which, in 1999, flew on the NASA Deep Space 1 probe.

Leading up to our attendance and presentation at the Berlin Semantic Tech and Business Conference, Michael – the true ‘tech head’ of KimmiC – and I were extremely pleased that Ora, ‘the Mac Daddy’ of the Semantic Web, gave us so much of his time. I hope you find our conversation with him as interesting as we did!

[I’ve italicised Michael’s questions to Ora so you are able to differentiate between us – though, I think it will become obvious as you read – lol!]

Ora Lassila (photo credit: Grace Lassila)

Ora Lassila: Capital I Interview Series – Number 13

Let’s start out by talking about Innovation in general, and we’ll move on to the Semantic Web as we go along. As this is the Innovation Interview Series, the ‘baseline’ question is always: how do you define Innovation?

Good question.  I think many people do not make a clear distinction between ‘innovation’ and ‘invention’.

To me, ‘innovation’ is something that not only includes some new idea or ideas, but also encompasses the deployment and adoption of such.  You can invent clever new things, but if you don’t figure out how to get people to use those new things, you have fallen short of the mark.

How essential has innovation been in your career to date; and how important do you envisage it being, going forward?

It has been important.  A big part of my professional career was spent in a corporate research lab, where inventing new things was less of a challenge than getting these inventions ‘transferred’ to those parts of the corporation that had more capability in promoting their adoption and deployment.

That said, I have learned that ‘technology transfer’ is not always about taking concrete pieces of technology, software for example, and handing them over to someone else for productization.  Sometimes the transfer is more ‘insidious’ and involves influencing how people in your organisation – or outside your organisation – think and see the world.

I would claim that some of my early work on the Semantic Web absolutely fits this definition.  So writing, publishing and talking all constitute viable means.  Also, we should not forget that people need to be inspired.  You cannot just tell them what to do, instead, they have to want to do it.

What do you think are the main barriers to the success of innovation?

I am not kidding when I say that the absolute biggest obstacle is communication.  That is, we should learn to communicate our ideas better to be able to convince people and to inspire them.  I have much to learn in this area.

Who and what inspires you? Where do you look for inspiration?

I have no good or definite answer for that.  When I was younger I was really inspired by the Spanish aviation pioneer Juan de la Cierva whose simple yet radical idea about aircraft – the ‘autogiro’ – paved the way for the adoption of helicopters.  And yet, one might argue that, in many ways helicopters are a far more complicated and complex technology than de la Cierva’s original invention.

Juan de la Cierva y Codorníu, 1st Count of De La Cierva

I am inspired by simplicity… I strive to create and design things that are simple, or at least not any more complicated than necessary.

What are, in your view, the current emerging critical trends in Innovation and technology?

I like openness, things like open-source software as well as Open Access and sharing of data as part of the scientific process.  I am hoping we see a fundamental change in how research is done.  In many ways we have progressed to a point where many problems are so complex that they are beyond a single researcher’s or research group’s capacity and capability to tackle.

Also, on the topic of openness, I like some of the recent developments in open government, e-Government, and such.

And what are some of the coolest mobile technologies you’re seeing launched? 

I am much enamoured with the idea that mobile technologies – particularly via the use of GPS, etc. – ‘ground’ many services to the physical world.  There are many uses for location information, uses that help me in my everyday life.

Furthermore, by making the mobile device better understand the current ‘context’, not only geographically but also by making use of other observations about the physical world (movement, sound, etc.), we can make applications and services better for users.

Do you think we will have a ‘meshed up’ world that effectively bypasses the stranglehold telcos have on infrastructure?

I don’t necessarily agree that the telcos have a ‘stranglehold’.   They provide an important service and a critical investment in an infrastructure I don’t really see us living without.

But we need things like ‘net neutrality’ to make sure that this infrastructure really serves people in an open and non-discriminatory way. In this regard I am also concerned about more recent legislative attempts [SOPA, PIPA, ACTA] that (perhaps unintentionally) will hurt the overall technical function of the Internet.

It seems that current Web-based business models are founded on the idea that businesses have the right to record everything about users/consumers and profit from this information. Do you think this is a sustainable business model, or do you think users/consumers will start to think that they, and their data, are worth something and begin to demand recompense of some sort?

There are very few fundamentally different, viable, business models on the Web, so I can see that businesses would want to cash in on user data.  It is only a matter of time before the consumers ‘wise up’ and understand the value of their own data.  Personally I think we should aim at ‘business arrangements’ where all parties benefit.  This includes concrete benefits to the user, perhaps in a way where the user is a bona fide business partner rather than just someone we collect data about.

It is important to understand that what’s at stake here is not only how some user data could be monetized, it is also about users’ privacy.  Luckily I work for an organisation [Nokia] that takes consumer privacy very seriously.

You’ve got a fascinating history, and seem to have gotten into the Semantic Web at the very beginning.

The very, very beginning, yes.  I think I can argue that I’ve been doing this longer than the term has actually existed.

In ’96 I went to work at MIT…  I’d just been hired by Nokia, and they wanted to send somebody to MIT as a kind of visiting faculty member.   So, I worked in Tim Berners-Lee’s team, and one day he asked me what I thought was wrong with the web.

Tim Berners-Lee

Just a small question.

Yeah, not intimidating at all.

I said: “My hope has been to be able to build,” – what then would have been called agents, autonomous agents – and I said: “I can’t really do that because the web was built for humans and human consumption.  I would really, really like to see a web that was more amenable for consumption by automated systems.”

And he [Berners-Lee] said: “Yeah, that’s it! Now, how do we fix that?”

And I went: “Well, how about we try knowledge representation and apply that to web technologies?” Knowledge representation is a branch of artificial intelligence with a long history of taking information and representing it in such a way that you can reason about it and then draw conclusions from it… things like that. We agreed that I would look into that, and that’s really how I got into all this.

Of course, I had worked on various projects before that which involved ontologies and knowledge representation; it just wasn’t done on the web, the big reason being that the web had not really been invented yet.
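[Ed. Note: For readers who would like to see knowledge representation in miniature, here is a small, hedged sketch using Python’s rdflib library. The vocabulary and URIs are invented for illustration, not drawn from Ora’s projects: we state one fact and a tiny class hierarchy as RDF triples, then let a SPARQL 1.1 property path draw a conclusion that was never stated directly.]

```python
# A knowledge-representation miniature using rdflib (pip install rdflib).
# The ex: vocabulary is invented purely for illustration.
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")  # hypothetical namespace
g = Graph()

# Represent knowledge as triples: one fact, one small class hierarchy.
g.add((EX.Socrates, RDF.type, EX.Human))
g.add((EX.Human, RDFS.subClassOf, EX.Mortal))

# Draw a conclusion never stated directly: the property path
# rdf:type/rdfs:subClassOf* walks the class hierarchy at query time.
q = """
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?who WHERE { ?who rdf:type/rdfs:subClassOf* <http://example.org/Mortal> }
"""
for row in g.query(q):
    print(row.who)  # -> http://example.org/Socrates
```

[The point is the one Ora makes above: once the knowledge is represented declaratively, the system, not the programmer, gets to draw the conclusions.]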

There was Cyc and some other AI [Artificial Intelligence] things before that… 

Cyc is a very good example of an attempt to build a very large ontology that would encompass common sense knowledge.  But there are many examples of systems that used ontologies in one way or another for narrower domains.  Cyc was an overly ambitious project, in the sense that they really wanted to cover a lot of ground in terms of human knowledge.

I had worked on several projects in the past that applied ontologies to things like planning industrial production, or planning logistics. So, the question really was: could you build a model of the world that was rich enough and precise enough that a system could use that knowledge to create plans for various things? In my case those were plans for either how to run industrial production, or how to manage large fleets of logistics resources.

You were a long, long way in front of everybody else… at least ten years.  It’s incredible!

One might argue too far ahead.

I think at that time most people were just trying to come to grips with basic HTTP and web servers.  If you look at the vested interests, especially of software providers at that time… I guess it wasn’t really the right timing. But I think that time is coming now.

Yeah, I think we’re in a better position now and we’ve certainly seen a lot of adoption of Semantic Web technologies in the last few years.

I think elements of the Semantic Web are brilliant. RDF, for example, is one of the smartest ways I’ve ever seen of describing something. You can’t break the way RDF talks about something, whereas you can easily break the interpretation of XML.

I start to lose traction with it when it gets towards ontologies. Do you think that ‘splitting the message’ would help with adoption? For instance, you can use ontologies, but there is also a part of the Semantic Web which is brilliant for just doing ‘business as usual’?

I think there is a fairly broad spectrum of possible ways of making use of this technology. I’m sure you’ve seen diagrams of the so-called layer cake, with the different technologies layered on top of one another.

A Semantic Web Stack (layer cake) [image created by Tim Berners-Lee]

I think that it’s up to you to decide how far up that layered structure you want to go.  There are a lot of applications where very simple use of just some of the most basic technologies will give you a lot of benefit.  And then there are other problems where you may actually want to separate a lot of the understanding of your domain from your actual executing code…  for those kinds of things, encapsulating that knowledge in the form of a potentially very complex ontology may be a good way to go.
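[Ed. Note: To make the bottom of that layer cake concrete, here is a hedged sketch – names and URIs invented for illustration – of the ‘most basic technologies’ in use: a couple of RDF triples written in Turtle and parsed with Python’s rdflib. It also shows what Michael is praising above: every statement is an irreducible subject–predicate–object triple, so its meaning does not depend on document structure the way a nested XML interpretation can.]

```python
# The bottom of the layer cake: bare RDF triples, no ontology required.
# The Turtle snippet and example.org URIs are invented for illustration.
from rdflib import Graph

turtle = """
@prefix ex: <http://example.org/> .
ex:SemWebPaper ex:author ex:OraLassila ;
               ex:year   "2001" .
"""

g = Graph()
g.parse(data=turtle, format="turtle")

# Each statement is a self-contained subject-predicate-object triple;
# there is no surrounding document structure whose reading could 'break'.
for s, p, o in g:
    print(s, p, o)
```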

My issue with ontologies is exactly the same issue I have with the current enterprise software providers… If you talk about mass adoption, as opposed to just specific domain adoption, then for every extra entity – be it a class or a data table – you decrease your adoption exponentially. And, once you go up to higher levels, you shouldn’t assume you’re the only person who has a valid way of looking at the world, though you may be using the same data. I think we’re saying the same thing…

Absolutely. The interesting thing about the current enterprise software providers, I think, is that they have one model of the way to look at the world. There are cases where companies have had to change the way they do business in order to adopt the enterprise software [currently available].

You have two choices: you either do it their way or else you spend a few million bucks and you do it their way anyhow.

I think that there is a possibility, with these Semantic Web technologies, of getting into more flexible uses of information and I kind of like that idea.

Over the last few years I’ve become increasingly interested in our ability to share information.  When you start talking about sharing it becomes really dangerous to have very complex, strictly defined semantics.  Because, like you said, other people might have a different interpretation of things.

But you want to nail some things down.  Understanding something about [the] information would give you a baseline for interoperating.  And then, you could do ‘better’ interoperation if you had a better definition of the meaning of the information.

I agree with you about understanding information. But I think where most things fall to pieces – and this is also true of business modelling languages and such – is that as soon as you get anywhere near processes with that information, it goes to hell pretty quickly.

Exactly.  I spent a few years, at the beginning of the previous decade, working on a large Semantic Web research program funded by DARPA [Defense Advanced Research Projects Agency].  I was part of an effort to see if we could use ontological technologies to model web services.

Is that DAML and stuff like that?

Exactly; DAML, and DAML-S for services.  We very quickly got into process modeling; and those kinds of things get very difficult…

Very quickly.

Absolutely.  I think that’s the thing that still needs work.

The traditional approach to anything process-oriented just doesn’t work unless you have very tight coupling and a very controlled domain.  But I think there are a lot of different ways of trying to solve the same problem without having to get to that level.

I think that one of the things that is missing from the whole Semantic Web collection of specifications is this notion of action… a notion of behaviour.  It’s hard to model, but I think that we ought to work on that some more.

We [KimmiC/FlatWorld] have taken a more hybrid approach, so we use things like REST architecture, and a lot of stuff from the business world, in terms of authentication and authorisation. 

Sure. I’m not in any way advocating the use of the WS-* collection of technologies; I’m not a big fan of those.

I’ve looked at all the SOAP stuff and there are a lot of problems… like business process deployment.  It is a nightmare to deploy these technologies.  It’s even more of a nightmare to load balance them.

Right.

Essentially, if you’re looking for dynamic relationships – be it in business or whatever – they’re just useless for that sort of thing.  They’re always designed around having control of a large domain space; this is especially true when it comes to deployment of applications.  I just think they’ve missed the point. 

I think the web is the best example of a redundant, massively-distributed application; and we need to look at it more as, “That’s the model,” and we have to work with it.

Absolutely.  I think that for 20 years there have been discussions about these sorts of ad hoc enterprises, or collections of smaller companies, being able to very quickly orchestrate themselves around a particular mission [purpose].  But I think that these technologies, just like you said, are probably not the right answer.

When you wrote your 2009 position paper you noted that, rather than languages, the biggest issues facing the uptake of the Semantic Web were: 1. selling the idea; and 2. a decent user interface.

Why did you feel that was the case then; and has your opinion changed regarding these issues in the two-plus years since you wrote your paper?

Semantic Web technologies are well suited to situations where you cannot necessarily anticipate everything – say, about the conditions and context in which an application is used, or which kind of data an application might have available to it.  It is like saying that this is a technology for problems we are yet to articulate.  Sounds like a joke, but it isn’t, and the problem in ‘selling’ Semantic Web technologies is often about the fact that once a problem has been clearly articulated, there are many possible technologies that can be used to solve it.

The issue I have with user interfaces and the user experience is the following: Semantic Web technologies – or more generally, ‘ontological’ technologies – give us a way to represent information in a very expressive manner… that is, we can have rich models and representations of the world.  I feel that user interface technology has a hard time matching this expressiveness.  This issue is related to what I said earlier about not being able to anticipate all future situations; writing software that can handle unanticipated situations is hard.

All that said, I don’t like the term ‘Semantic Web applications’.  Users shouldn’t have to care, or need to know, that Semantic Web technologies were used.  These are just useful things in our toolbox when developing applications and services.

What are the key challenges that have to be solved to bring those two problems together?

I am really looking for new programming models and ways to add flexibility.  This is not only a technical problem, we also need to change how people think about software and application development.  I have no silver bullets here.

How do you see applications developing in the next few years, compared to the current environment? As you have mentioned, we have to shift our thinking from an application that ‘owns and controls’ its own data to one that simply interacts with data.

I think, again, this is about changing how people think about application development.  And, more specifically, I would like to see a shift towards data that carries with it some definition of its semantics.

This was one of the key ideas of the Semantic Web, that you could take some data, and if you did not understand it, there would be ‘clues’ in the data itself as to where to go to find what that data means.

As I see it, the semantics of some piece of data either come from the relationship this data has with other data – including some declarative, ‘machine-interpretable’ definition of this data, for example, an ontology – or are ‘hard-wired’ in the software that processes the data.  In my mind, the less we have the latter, and the more we have the former, the better.
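[Ed. Note: The ‘clues in the data’ idea Ora describes is often called ‘follow your nose’. Here is a hedged sketch, again with Python’s rdflib: when a graph contains a predicate the code doesn’t recognise, the predicate’s URI can itself be dereferenced to fetch a machine-readable definition. We use the public FOAF vocabulary purely as a familiar example; network access and content negotiation are assumed, and error handling is omitted.]

```python
# 'Follow your nose': dereference an unknown predicate's URI to learn
# what it means. FOAF serves only as a familiar, public example.
from rdflib import Graph, RDFS

data = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/me> foaf:knows <http://example.org/you> .
"""
g = Graph().parse(data=data, format="turtle")

# Pull the (pretend-unfamiliar) predicate out of the data...
predicate = next(g.predicates())  # http://xmlns.com/foaf/0.1/knows

# ...then fetch its published definition from the web; rdflib will
# content-negotiate for an RDF representation of the vocabulary.
vocab = Graph().parse(str(predicate))
print(vocab.value(predicate, RDFS.comment))
```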

In previous interviews you’ve noted that you feel users should have a say “in how they view information.” Do you think that users should become involved in making the Semantic Web more ‘usable’? And if so, how?

I think users should demand more.  There needs to be a clear ‘market need’ for more flexible ways of interacting with information.  User experience is a challenge.

On this topic, I also want to point out how unhappy I am with the modern notion of an ‘app’.  Many apps I have seen tend to merely encapsulate information that would be much better offered through the Web, allowing inter-linking of different content, etc. It kind of goes with what I said earlier about openness…

There are a lot of guys saying they can plug two systems together easily, but it almost always means at the data level. It doesn’t really work once you start applying context on top of it.

I’d like to see a middle ground where we have partial interoperability between systems, because that’s how humans interact.

That’s something we’re looking at as well.  I view it like this: when I go through Europe, I can speak a little bit of German, a little bit of French. I’m not very good, but I have to have a minimal level of semantic understanding to get what I want: to get a beer.  I don’t have to understand the language completely, just enough, in context, to act on it.

Speaking of acting on things… Ora, where are you going with semantics in the future?

That’s a good question. Right now I’m working on some problems of big data analytics.

With semantics?

Nokia is investing in large-scale analytics, so I’m in the middle of that right now.

I’m currently looking at how to tackle the problem of how to bootstrap behaviour.  Behaviour and notions of action are not well-tackled in the space of the Semantic Web, and I’d really like to get into bringing two information systems in contact with one another, and have them figure out how to interoperate.

That’s very ambitious.

Right.  And I’m not entirely sure if people understand that that’s an important question to tackle.

Oh, it’s an important question to tackle; it’s just more a question of… Again, you’re very far ahead of the game.

Well, I think that today, if you want to make systems A and B interoperate, it’s usually a large engineering undertaking.  So, it’s directly related to the question of separating information from applications…  you could pick the applications you like and take the information that you’re interested in and make something happen.  In terms of interoperating systems, right now we have a situation where we either have full interoperability, or we have nothing… we have no middle ground.

You can learn more about Ora via his website, blog and  Twitter feed.

[Kim, Michael and Ora Skyped from their homes in Boston and Sydney.]

[This interview has been translated into the Serbo-Croatian language by Jovana Milutinovich of Webhostinggeeks.com]