
Zen and the Art of Software: The Innovation Interview with Grady Booch (Part 3)

In parts one and two of our chat with  software star Grady Booch, we discussed his magnum opus project  COMPUTING: The Human Experience, Innovation, the Computer History Museum and the possible changing brain structure of Millennials, among many other things.

In this, the final segment of our discussion with him, we look at software – and software architecture – in general, Grady’s relationship with it in particular, the troubles facing Google and Facebook, the web, and his views on the SOPA, PIPA and ACTA bills.

To view the full introduction to this multi-part interview with Grady: Click here

Grady Booch: Capital I Interview Series – Number 14

[This was a joint conversation between Grady, Michael and myself. I’ve italicised Michael’s questions to Grady so you are able to differentiate between us – though, I think it will become obvious as you read – lol!]

Grady, you are credited with the building, writing and architecting of so much technology; of all of those things, what is it that you are most proud of?

There are three things.  The first one is very personal.  My godson – he would have been eight or nine at the time –  was given a task at his school to write about a hero, and he wrote about me.  That was pretty cool!  Everything else is details, but I’m really proud of that one.

On a technical basis, I’m pleased with the creation of the UML, not just because of the thing we created, but because of the whole idea and industry that grew up around it: that being able to visualise and reason about software-intensive systems in this way is a good thing. So, I think we lifted the tide for a lot of folks.

UML Diagrams

I contributed to the notion of architecture and looking at it from multiple views, and how to represent it.  I feel good about the whole thing around modelling and architecture and abstraction.  I think I helped people and I feel good about that.

UML was certainly a game changer.  I remember when it came in, before you got bought up by IBM.  It was like a wave going across the globe.  It made a profound difference.

And it’s different now because it’s part of the oxygen.  Not everybody is using it, that’s okay – not everybody is using C++ or Java and that’s fine – but I think we changed the way people think.

Our estimates are that UML has a penetration of somewhere around 15 to 20 percent of the marketplace.  That’s a nice number.  We’ve changed the way people build things.

Absolutely, especially at the big end of the market.

Yeah.  I wrote an article in my architecture column that tells the story of when I was dealing with my aneurysm.  I was lying in a CT scan machine at the Mayo Clinic, looking up and saying: “My gosh, I know the people who wrote the software for this, and they’ve used my methods.”  That’s a very humbling concept.

It’s pretty much the acid test, isn’t it?

Yes, it is.

And your work is continuing in architecture…

Correct. I continue on with the handbook of software architecture, and a lot of what I do, in both the research side and with customers, is to help them in the transformation of their architectures.

For IBM, over the last nine months or so, I’ve been working with the Watson team – the Jeopardy-playing system – and helping the teams that are commercialising their technology.

How do you take this two-million-line code base, built by 25 men and women, drop it in another domain and give it to a set of people who know nothing about that system?  This is exactly the kind of stuff that I do for customers, and I’ve been helping IBM in that regard.

That would be very challenging.  You’d need somebody with your brain power to actually manage that, I imagine.

Well, it’s not an issue of brain power; it’s an issue of how one looks at systems like this and reasons about them in a meaningful way.  And this is where the UML comes in – because it allows us to visualise the system – and the whole notion of architecture viewed from multiple dimensions; all these things come together.  That makes a two-million-line code base understandable to the point where I can know where the load-bearing walls are and I can manipulate them.

That is pretty impressive!  You’ve found a way of managing the slicing and dicing of the codebase.

That’s a problem that every organisation faces.  I have an article that talks about the challenges Facebook is going to have.  Because they…. every software-intensive system is a legacy system.  The moment I write a line of code, it becomes part of my legacy…

Especially if you’re successful upfront and get massive growth, like they did.

Yes, and having large piles of money in your revenue stream often masks the ills of your development process.

Absolutely.

Google’s faced that, Facebook is facing that.  They have about a million lines of [the programming language] PHP that drives the core Facebook.com system – which is really not a lot of code – still built on top of MySQL, and it’s grown and grown over time.

I think, as they split to develop across the coast – because they’re opening up a big office in New York City – that very activity changes the game.  No longer will all of the developers fit within one building, so the social dynamics change.

Inside Facebook's Madison Avenue Offices

Ultimately, what fascinates me about the whole architecture side of it is that it is a problem that lies on the cusp of technology and society.  It’s a technical problem on the one hand – so there are days I’ll show up as an uber geek – and on the other hand, it’s a problem that’s intensely social in nature, so I’ll show up as a ‘Doctor Phil’.

To follow up on one of Kim’s questions: if you look at the backlog of IT, I think every company of moderate size is still struggling to deliver on business demands. Do you think that architecture helps, or does it actually contribute to the problem?

Architecture can help in two ways.

I’ll give you one good example.  There is a company called OOCL (Orient Overseas Container Line) that I worked with some years ago to help them devise an architecture for their system that tracks containers and all these kind of things.  Their CEO had this brilliant notion: what would happen if we were to take this system and extract all of the domain-specific bits of it and then sell that platform?

By having a focused-upon architecture, they were able to devise a platform – this is a decade before Salesforce.com and these kind of things – and they could then go into a completely new business and dominate that side of the marketplace.   Here is an example where a focused-upon architecture has a real, material, strategic business implication.

The other thing a focused-upon architecture offers is that it allows you to capture the institutional memory of a piece of software.  The code is the truth, but the code is not the whole truth.  So, in so far as we can retain the tribal memory of why things are the way they are, it helps you preserve the investment you made in building that software in the first place.

What sort of size company are you talking about?  It sounds like the telco space… large Tier 1 and  Tier 2 companies. 

It could be anybody that wants to dominate a particular business.  Salesforce.com built a platform in that space.  Look at AUTOSAR as another example.  AUTOSAR was an attempt by BMW, and others, to define a common architectural platform, hardware and software, for in-car electronics.  By virtue of having that focused-upon architecture, all of a sudden you have unified the marketplace and made the marketplace bigger, because now it’s a platform against which others can plug and play.

There is a similar effort with MARSSA, which is an attempt to develop a common architectural platform for electronics for boats and yachts.  Again, it eliminates competition in that part of the marketplace by having a set of standards against which people can play well together.  In the end, you’ve made the marketplace bigger because it’s now more stable.

I agree. Also, an architectural approach separates the data from an application-specific way of looking at things.

It used to be the case that we’d have fierce discussions about operating systems.  Operating systems are part of the plumbing; I don’t care about them that much anymore.  But, what I do care about is the level of plumbing above that.

My observation of what’s happening is that you see domain-specific architectures popping up that provide islands against which people can build things.  Amazon is a good example of such a platform.  Facebook could become that, if they figure out how to do it right – but they haven’t gotten there yet.  I think that’s one of the weaknesses and blind spots Facebook has.

I also think that they are, to a certain extent, a first generation.  I think the web, in terms of connectivity, is not being utilised to its fullest potential.  I don’t see any reason why, for example, any form of smart device shouldn’t be viewed as being a data source that should be able to plug in to these architectures.

Exactly!

Would that be an example of a collaborative development environment?

Well, that’s a different beast altogether.

With regard to collaborative development environments, what led me to an interest in that space was emphasising the social side of architecture.  Alan Brown [IBM engineer and Rational expert] and I wrote a paper on collaborative environments almost ten years ago, so it was kind of ahead of its time.

Alan Brown

The reason my thinking was in that space was that I was extrapolating the problem of large-scale software development – as we’re becoming more and more distributed – to the question of how one attends to the social problems therein.  If I can’t fit everybody in the same room, which is ideal, then what are the kinds of things I can do to help me build systems?

I’ve observed two things that are fundamental to crack to make this successful.  The first is the notion of trust: in so far as I can work well with someone, it’s because I trust them.  You, Kim, trust your husband Michael, and therefore there is this unspoken language between the two of you that allows you to do things that no other two people can do together.

Now, move that up to a development team, where you work and labour together in a room, where you understand one another well.  The problem comes – like with Facebook, and what we’ve done in outsourcing – when you break apart your teams (for financial reasons) across the country or across the world.  Then, all of a sudden, you’ve changed the normal mechanisms we have for building trust.  Then the question on the table is: what can one do to provide mechanisms for building trust?  That’s what drives a lot of ideas in collaborative development environments.

The other thing is the importance of serendipity – the opportunity to connect with people in ways that are unanticipated, this option of ‘just trying things out’.  You need to have that ability too.  The way we split teams across the world doesn’t encourage either trust or serendipity.  So, a lot of ideas regarding collaborative environments were simply: “What can we do to inject those two very human elements into this scheme?”

As we have been talking about trust, I’m curious as to your opinion on the SOPA, PIPA and ACTA bills.

I’ve Tweeted about it, and I’m pretty clear that I think those bills are so ill-structured as to be dangerous.

I get the concept, I understand the issues of privacy and the like, and I think something needs to be done here.  But I’m disturbed by both the process that got us there and the results.  Disturbed by the process in the sense that the people who created the bills seemed to actively ignore advice from the technical community, and were more interested in hearing the voices of those whose financial interest would be protected by such a bill.

The analogy I make would be as if, all of a sudden, you made roads illegal because people do illegal things in their cars.  The way the process that led up to this bill was set up was, I think, stupid, because it was very, very political.  From a technical perspective, while I respect what needs to be done here, the actual details of it are so wrong – they lead you to do things to the web that are very, very destructive indeed.  That’s why I’m strongly, strongly opposed to it.  And I have to say that this is my personal opinion, not that of IBM, etc.

This is the final segment of our multi-part interview with Grady Booch. Part One can be read here, and Part Two can be read here

You can learn more about Grady via the COMPUTING: The Human Experience website, Grady’s blog and his Twitter feed. Be sure to keep your eye on the COMPUTING: The Human Experience YouTube channel where Grady’s lecture series will be posted.

[Kim, Michael and Grady Skyped from their homes in Sydney and Hawaii.]

Zen and the Art of Software: The Innovation Interview with Grady Booch (Part 2)

In the pantheon of world-famous computer scientists, Grady Booch is the star who co-authored the Unified Modeling Language (UML) and co-developed object-oriented programming (OOP). He is a Fellow of IBM, the ACM and the IEEE, the author of six books, hundreds of articles and papers, and the Booch Method of software engineering. Grady serves on several boards, including that of the Computer History Museum, and is the author, narrator and co-creator of what could be seen as a historical magnum opus of the technological world, COMPUTING: The Human Experience.

To view the full introduction to this multi-part interview with Grady, and Part 1 of the series: Click here

Grady Booch: Capital I Interview Series – Number 14

[This was a joint conversation between Grady, Michael and myself. I’ve italicised Michael’s questions to Grady so you are able to differentiate between us – though, I think it will become obvious as you read – lol!]

Grady, let’s begin with the very basics. As this is the Innovation Interview Series, let’s start with: how do you define innovation?

Ecclesiastes 1:9 has this great phrase:

“What has been will be again.  What has been done before will be done again.  There is nothing new under the Sun.”

The way I take it is that innovation – really deep innovation – is about warming the Earth with the Sun of your own making. And to that end, that’s how I distinguish the ‘small i’ from the ‘Big I’.

The ‘small i’ therefore means: I may have a brilliant idea and it warms me, but the ‘Big I’ Innovation is where I can start warming others.  There are new suns possible; there are new ways of warming the Earth… And I think innovation is about doing so.

One of my heroes is the physicist Richard Feynman. If you read any of his stuff or watch his physics lectures – which are just absolutely incredible [Ed. Note: As is his series: The Pleasure of Finding Things Out] – there are some conclusions you can draw (and there is a delightful article someone wrote about the nine things they learned from Feynman).  The way I frame it is to say that I admire him and his innovation because he was intensely curious, but at the same time he was bold; he was not fearful of going down a path that interested him. He was also very child-like and very, very playful.  In the end, what really inspires me about Feynman’s work is that he was never afraid to fail; much like Joseph Campbell observes, he followed his bliss.

Richard Feynman

I think that many innovators are often isolated because we’re the ones who are following our bliss; we really don’t care if others have that same bliss.  We are so consumed by that, that we follow it where it leads us, and we do so in a very innocent, playful way… We are not afraid to fail.

I’ve noticed that there is often a level of audacity and a lack of fear within innovators, but sometimes I wonder if that audacity and lack of fear could frighten general society.

Well, I think there’s a fine line between audacity and madness.

And that depends on what side of the fence you’re on.

Exactly. It also depends upon the cultural times. Because, what Galileo said in his time [that the earth and planets revolve around the sun] was not just audacious, it was threatening.

To the church, absolutely.

In a different time and place [the response to] Galileo would have been: “Well, yeah, that’s right. Let’s move on now”.   [Instead of being tried by the Inquisition, found suspect of heresy, forced to recant and spend the rest of his life under house arrest.]  The sad thing is you may have the most brilliant idea in the world, but you will never go anywhere.

Take a look historically at Charles Babbage.  I think he was a brilliant man who had some wonderful ideas; he was very audacious, and yet he’s a tragic figure because he never really understood how to turn his ideas into reality.  [A mathematician, philosopher, inventor and engineer; Babbage originated the idea of a programmable computer.]  That’s what ‘Capital I’ means to me.  I think that’s why Steve Jobs was so brilliant; it’s not just that he had cool ideas, but he knew how to turn them into an industry.

We have a golden rule that it really doesn’t matter how cool your tech is if nobody’s using it. And it’s a shame because there are some incredible innovations out there, but so many innovators haven’t learned the Jobs magic of marketing.

KimmiC rule: It doesn’t matter how ‘bright the light’ if no one is using it to read.  

I think that’s especially true of our domain of computing systems, because we are ones who are most comfortable – as a gross generalisation – with controlling our machines.  Being able to connect with humans is a very different skill set. To find people who have the ability to do both is very, very challenging indeed.

Zuckerberg is a brilliant programmer, and he had the sense to surround himself with the right people so that he could make those things [Facebook] manifest.  There are probably dozens upon dozens of Zuckerbergs out there, who had similar ideas at the same time, but they didn’t know how to turn them into reality.

The same thing could be said of Tim Berners-Lee: a brilliant man, a nice man…  He was in the right place at the right time and he knew how to push the technology that he was working on.  He was developing things that were in a vast primordial soup of ideas.

Tim Berners-Lee

HyperCard was out; so why didn’t HyperCard succeed while Tim’s work did?  Part of it is technical, part of it was just the will of Apple, and part was his [Tim’s] being in the right place at the right time.

And HyperCard influenced Tim.  Even Bill Atkinson, creator of HyperCard, said: if only he had come up with the notion of being able to link across [Hyper]card decks, then he would have invented the prototypical web.  But, he didn’t do it, he didn’t think about it.

Do you feel that you are ‘in the right time,  at the right place’?

There are times that I think I was born in the wrong century, but I know that if I had been born in the Middle Ages, at my age, I would be long dead.

So, yes, I can say from a very philosophical basis: I am quite content with the time in which I am now living, because I cannot conceive of any other time in which I could have been successful.

I read a quote on Wikipedia… a story you apparently told:

“… I pounded the doors at the local IBM sales office until a salesman took pity on me. After we chatted for a while, he handed me a Fortran [manual]. I’m sure he gave it to me thinking, “I’ll never hear from this kid again.” I returned the following week saying, “This is really cool. I’ve read the whole thing and have written a small program. Where can I find a computer?” The fellow, to my delight, found me programming time on an IBM 1130 on weekends and late-evening hours. That was my first programming experience, and I must thank that anonymous IBM salesman for launching my career.”

It sounds like you were quite fortunate to have bumped into someone who was willing to take a chance with you very early on.

I think that’s fair to say.  Though, if it hadn’t been that person, I imagine the universe would have conspired to find me another person, because I was so driven.  Looking back upon fifty-some years past, that was the right time and place.  It may have just happened that that was the right time and the right guy.  But there would have been others.

Grady Presenting

[But] I haven’t told you about the missteps I had and the people who rejected me; we just talk about the successes.  Historians are the ones who write history, and because it’s the history of the winners, we don’t tend to write about the failures.  But even Edison pointed out… I forget the exact quote, but the reason he succeeded so much is that he did so much and he failed; he failed more than others on an absolute basis, but he also tried more.

“I have not failed. I’ve just found 10,000 ways that won’t work.” ― Thomas A. Edison

What, in your view, gets in the way of the success of innovation?

I think the main thing is the fear of failure. I run across people like Babbage, for example… or this gentleman I was mentoring earlier today, who are so fearful of not doing something absolutely perfect that they are afraid to turn it into a reality. I think some innovators are so enamoured with perfection they are afraid to fail and therefore never do anything.

Within this milieu you seem to have had your fingers in many interesting pies.  One that I think must be especially fascinating is your work with the Computer History Museum.  How did you get involved in that?

In a way they came to me.  My interest has been in software, it always has been.  I forget the circumstances but, some years ago, I connected with John Toole, who was the original CEO of the Computer History Museum when it was in the Bay Area. He showed me around the warehouse that they had set aside at Moffett Airfield.

Not long before that they had shipped a lot of the materials from the old computer museum in Boston out to the Bay Area.  Gordon Moore [co-founder and Chairman Emeritus of Intel] and others had said they wanted to make a museum, and they funded that effort.  So, I was around the edges of it in the early days. I thought it was fascinating.

I think the reason it attracted me in the first place, in general, is that I have an interest in the appreciation of history – not just the history of technology, but also the history of humanity.

As I went to the exhibits I remember making the observation to John that I thought their plans were great, but, projecting out to one or two generations, there wasn’t going to be too much that was interesting to display in the museum because, by then, all of the hardware would have shrunk to a tiny size and we’d need microscopes in the exhibits.

“And so, therefore, John”, I said, “what are you doing about preserving the history of software,” which is a very ephemeral thing.

Think about getting the original source code to the [IBM Operating System] 360, or the original source code to Facebook.  Because these are such ephemeral things, people throw them away.  In fact we, IBM, no longer have the original source code to the first version of OS/360; it’s gone.  There are later versions but not the original one.

Facebook Source Code

When Microsoft decided to stop production on the Microsoft Flight Simulator – I mean, this was a ground-breaking program – I wrote off to Ray Ozzie [Microsoft CTO and Chief Software Architect from 2005 to 2010] and said: “What are you guys going to do with the software? Can we have it?”   He munched around for a while, but I think it’s lost for all time.

We’re in an interesting period of time and my passion, which has led me to the museum, is to say: now is the time to preserve software!  We don’t know how to present it, we don’t know what to do with it once we have it, but let’s worry about that in future generations and just capture it now.

It’s very similar to what Hollywood has found with a lot of their film stock. A lot of it was just being lost or destroyed, but there is so much cultural history in those records.

Yes, exactly.  So, prior to being on the board, I set up a workshop at the museum looking at the preservation of classic software.  I wrote to 500 of my ‘closest friends’… people ranging from Marvin Minsky [cognitive scientist in the field of AI] to some other developers I knew, and everybody in between, and asked: “What software would you preserve for future generations?”

We came up with a long list.  I think that very idea inspired Len Shustek, who’s the president of the museum, to invite me on to be on the board of trustees.

What is your favourite exhibit in the museum?

I like the [IBM] 1401 reproduction.  They have a couple of 1401 machines and they’ve gotten them running again.  It’s fun to be in a place where there is something dynamic and alive and running, and you can be in the midst of it.  Just walking into the room, you smell old computers; and that’s a pretty cool kind of smell.  So is the fact that it’s running and clacking away.

The 1401

Fred Brooks [IBM software engineer] and I had an interesting discussion once, in which I lamented the fact that our computers make no noise, because – and I know I sound like an old guy, but – I remember you could hear some of the earlier computers I worked on. They were clattering in one way or another, be it their hard drives or their tapes, and you could get a feel for where the program was just by listening.

You can’t do that now with our machines; they are all very, very quiet. So, the 1401 exhibit has this wonderful visceral immersive display, in which you hear it and smell it as it processes.

I’ve actually seen people get a little misty-eyed just thinking about a dial-up tone, and you certainly seem to have some ‘misty memories’ too.  But, let’s look forward now.  What new things do you think may be exhibited in ten years’ time?

I think that’s the next interesting challenge.  We know how to display physical things, but there aren’t that many more things like old machines to collect, because they are disappearing.

If you go to the exhibits, you’ll see things get smaller and smaller and there is more of an interest in software.  I think the interesting problem for the museum to attempt is: how do we present software to the general public so that we open the curtain on it and show some of the magic and the mystery therein?  I think software can be very beautiful, but how do I explain that to someone who can’t see software?  That’s an interesting challenge.

You’ve got to look at it like an art form.  Source code, especially some of the well-written stuff, looks physically beautiful; forget about what it actually does.  There are many different dimensions you can look at to try to get people’s interest.

[Editor’s Challenge to artists: here is a piece of code I’ve ‘mucked about with’ – why not see what code inspires you to create and send us a picture, which we’ll share with our readers, Grady Booch and the Computer History Museum!]

I think it’s very much like modern art because you can look at a bit of an impressionistic painting and you may not get it. Often the reactions are: “My kid could do that kind of thing.”

Well, not exactly; because the more you learn about it, the more you learn how much that painting – or whatever the art form is –  speaks to you and tells you stories.  It requires a little bit of education.

There is a visceral reaction at first to some art, but the more you know about it, the more you can appreciate its subtlety.  I think the same is true of software.  We (the museum) have collected the original source code to MacPaint, which turns out to be a really beautiful piece of software.

I’m using a phrase here that has meaning to me – beautiful – but requires explanation to the general public to say: why is this a beautiful piece of code, why does it look so well-formed?  I think that’s a responsibility we have as insiders to explain and teach that kind of beauty.

What are your thoughts about the emerging trends in Innovation and technology?

Well, the web has been an amazing multiplier, and yet at the same time it’s also increased the noise.  Therefore, the ability to find the real gems in the midst of all this madness is increasingly challenging.  For example, with the computing project [COMPUTING: The Human Experience], we crowdsourced some initial seed funding for our work.

We could not have done this in the past without something like the web.  We put this appeal out to the world and it gave us access to people, otherwise we could not have done it.  I think the web has produced an amazing primordial soup of ideas into which we can tap; and that is so game-changing in so many ways.  That’s probably the biggest thing. [You can contribute to and volunteer for the project here.]

The web has changed everything; and those who don’t keep up are doomed to be buggy-whip producers.

Yes, exactly.  Or companies like Kodak.

I had the opportunity to speak to Kodak’s developers about 15 years ago.  It was a small group of people who were in the computer side of Kodak, and I remember saying to them: “Look guys, the future of Kodak is in your hands… so, what are you going to do about it?”

I Tweeted about it not too long ago with a sort of “I told you so.”  And yet, I don’t know whether or not it was inevitable.  It could be the case that some businesses simply die because they just don’t make sense any more.

And they should die sometimes.  But I think early IBM was a good example of a company that understood what business it was in.  I don’t think Kodak really understood what business it was in, towards the end, and that’s what killed it.

I agree, very much so.

Some web business models are founded on the idea that a company has a right to use and profit from an individual’s data and personal information… What are your thoughts on that? Do you think that’s a business model that’s sustainable? I believe that the general public is wising up to this very quickly and will soon expect some recompense for the use of their data.

I think there is a local issue and there is a global issue that is even harder to tackle.  In the case of the Facebooks and the Twitters of the world, the reality is that when I subscribe to those services, I do have a choice – I can choose whether or not to use them.  And the very fact that I’m using those services means I am giving up something in the process.

So, why should I be outraged if those companies are using my data, because I’m getting those services for free.  It seems like a reasonable exchange here, and I, as an adult, have the responsibility of choice.  Where it becomes nasty is when I no longer have choice; when that choice is taken away from me.  That’s when it becomes outrageous: when my data is being used beyond my control [in a way] that I did not expect.

I think that will sort itself out over time; capitalism has a wonderful way of sorting things.  It’s also the case that we have a generation behind the three of us who are growing up, if not born, digital.  They have a very different sense of privacy, so I’m not so concerned about it.  We have lots of ‘heat and smoke’ but it will resolve itself.

What I find curious is that the ‘heat and smoke’ and discussions are hardly any different from what was initially said about telephones or, for that matter, the printing of the book.  Look at some histories of how phones were brought into the marketplace and you’ll find almost identical arguments to those that are going on today.

I trust the human spirit and the way capitalism works to find a way.  What’s more challenging is the larger issue, and that is the reality that there are connections that can be made in the presence of this data that are simply beyond anybody’s control.

I may choose to share some information on a social media source, or I may use a credit card or whatever, but the very act of participating in the modern society leaves behind a trail of digital detritus.  And I can’t stop that unless I choose to stop participating in the modern world.

I think this is a case where we’ll have politicians do some profoundly stupid things, and we’ll see lots of interesting cases around it.  But, we’ll get used to it.  I mean, people didn’t like the idea of putting their money in a bank for God’s sake, and we got used to it; I think the same thing will happen.

You brought up the Millennials – the digitised generation. What insights would you give them in being game-changers?

Does any young adult ever want the advice of their elders?

I didn’t ask if they wanted it… 🙂

You know… I think, we laugh about it, but the reality is – and I think Jobs said it well: “Death is a wonderful invention because it allows us to get out of the way and let the next generation find their own way.”  I’m comforted by that; I find great peace in that notion.  They need to have the opportunity to fail and find their own way.  If I were born a Millennial, I’d be growing up in an environment that’s vastly different than mine.

Though, in the end, we are all born, we all die, and we all live a human experience in various ways, there are common threads there… the stories are the same for all of us.  I think those are the kinds of things that are passed on from generation to generation, but everything else is details.

I would not be surprised if the structuring of their brains is different to ours.  I’ve been talking to guys who are 10 to 15 years younger than me, and the ability to hold a train of thought over weeks or months – when you’re doing some serious development or research – is something they seem to find extremely difficult.  So, I wonder if we’ll see any really big innovations coming through from those generations.

You could claim that it’s not just the web that’s done that, but it’s back to Sesame Street and the notion of bright, shiny objects that are in and out of our view in a very short time frame.  Certainly I think a case can be made that our brains are changing; we are co-evolving with computing – we truly are.

But, at the same time, throw me in the woods and I couldn’t find my way out of it easily; I can’t track myself well, I can’t tell you what things are good to eat and what things aren’t.  Those are survival skills that someone would have needed a century or two ago.  So, my brain has changed in that regard, just as the Millennials’ brains are changing. Is it a good thing? Is it a bad thing? I’m not at a point to judge it, but it is a thing.

End of Part Two.  Part Three will be published next week – sign up for the blog and it will be delivered directly to your inbox!

You can learn more about Grady via the COMPUTING: The Human Experience website, Grady’s blog and his Twitter feed. Be sure to keep your eye on the COMPUTING: The Human Experience YouTube channel where Grady’s lecture series will be posted.

[Kim, Michael and Grady Skyped from their homes in Sydney and Hawaii.]

Zen and the Art of Software: The Innovation Interview with Grady Booch (Part 1)

One of the greatest things about ‘Flat World Navigating’ the internet is that it enables connections with fascinating minds, even if from a distance.  If you are able to then reach out to those magnificent minds and invite them to have a chat, the encounter can be transformational.  Such was the case with Grady Booch, who is, I believe, a most genial genius – a man who brings Zen to the Art of Software.

Grady Booch: Capital I Interview Series – Number 14

I first encountered Grady Booch via his project, COMPUTING: The Human Experience, “a transmedia project engaging audiences of all ages in the story of the technology that has changed humanity.” I was immediately hooked on the concept, and wanted to discover the mega-mind who thought to pull this off.

In the pantheon of world-famous computer scientists, Grady Booch is the star who co-authored the Unified Modeling Language (UML) and was one of the original developers of object-oriented programming (OOP). That alone would be immensely impressive, but it is far from the end of Grady’s long list of credits, which include being an IBM Fellow (IBM’s highest technical position) and Chief Scientist for Software Engineering at the IBM Thomas J. Watson Research Center.

In fact, he’s quite a fella, being a fellow of the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers (IEEE) and the World Technology Network (WTN), as well as being a Software Development Forum Visionary and recipient of Dr. Dobb’s Excellence in Programming Award and three – yes, three! – Jolt Awards.

There is a rumour (one which he doesn’t discuss) that Grady was approached to take over from Bill Gates as Microsoft’s chief software architect.  What is not a rumour, and what Grady does admit to, is that he taught himself to program in 1968 and had built his first computer a year earlier – at the age of 12.

He is the author of six books and hundreds of articles and papers, and he originated the term and practice of object-oriented design (OOD), collaborative development environments (CDE) and the Booch Method of software engineering. Grady serves on the advisory board of the International Association of Software Architects (IASA), the IEEE Software editorial board and the board of the Computer History Museum.

Yes, with all that (and more) to his credit, Grady could quite comfortably sit on his laurels, and yet, instead he is the author, narrator and co-creator of what could be seen as a historical magnum opus of the technological world, COMPUTING: The Human Experience.

“At the intersection of humanity and technology is COMPUTING. From the abacus to the iPad, from Gutenberg to Google, from the Enigma machine designed to crack the codes of the Nazi SS to the Large Hadron Collider designed to crack the code of the universe, from Pong to Halo, we have created computing to count the uncountable, remember beyond our own experience, touch the invisible and see the unforeseeable. COMPUTING: The Human Experience is a brilliant and surprising insider view of the hidden stories of passion, greed, rebellion, rage and creation that created the technologies that are everywhere, transforming our world, our lives, and who we are as a species.”

Grady is not alone in this endeavour, working as he does with a tremendous creative team which includes, among others: Grammy Award winner, Seth Friedman; President of the Computer History Museum, John Hollar; and psychotherapist/theologian/social worker Jan Booch, Grady’s wife, co-writer and co-creator of this obvious labour of love. The series will include lectures, books, videos, an interactive website, and much more.

February 24, 2012 sees Grady launch the first in a series of lectures at the Computer History Museum in Mountain View, California.  For those readers who are not lucky enough to be in the vicinity to attend ‘Woven on the Loom of Sorrow: The Co-Evolution of Computing and Conflict’, I hope you will enjoy reading this multi-part Innovation Interview with Grady as much as Michael and I enjoyed talking to him!

Grady Booch: Capital I Interview Series – Number 14 

Grady, when I clicked on the link from your LinkedIn profile, I was extremely excited by the idea of COMPUTING: The Human Experience and found it to be immensely interesting!  What made you feel that it was important to compute the human experience?

I think it has a lot to do with where I am in my life.  In the sense that I have nothing left to prove, if you will, and I could do what I want to do.  I could just happily fade away into an existence here.  But, I think part of it is wanting to give back to the community that has given so much to me, and being able to express to the general public my child-like joy and delight at what I do.  That’s why I think I chose to go down this path of telling the story.

In the end, I’m a story teller, and I think there is a story to be told here. There’s probably some other factors that happened that led me in this direction. Just random stories… A side conversation with one of our goddaughters…

We were talking to her about computing stuff, and she said:

“Oh, I know everything there is to know about computing. Because I’ve taken a class.”
“Oh, what did you learn?”
“Well, in my class we learned how to write a Word document and how to surf the web.”
I was like: “Oh, my gosh; there is so much more!”

It’s things like that that have led me to say… We’ve created this technology, and I’m responsible for helping create that technology, and we as a civilisation have chosen to step inside and live inside it. We’ve created a world, and yet most of the people in the world don’t understand it and can’t understand how to use it to their advantage.

I think my goal is: let’s open the curtain and explain some of that matter, and the mystery, beauty, excitement, and human stories that lead to it.

I think there is a lot of latent interest there, that is untapped at the moment.

I think so; I hope so.  Well, there is a lot of interest in anything.  Why do you think we still watch celebrities like Paris Hilton? It’s amazing what people get interested in.

But I think here is a topic that has profoundly changed humanity, and we are at the time and place where we can talk about it.  And the people who made these changes… many of them are still alive, so let’s get their stories and tell that to the world!

The phrase I often use is: “An educated populace is far better able to reconcile its past, reason about its present and shape its future.”  And I want to help contribute to educating that populace.

You don’t shy away from contentious topics, either. Such as: computing and war, computing and faith, and computing and politics. What are your thoughts on these subjects?

It’s interesting you called them controversial, because I see them as simply part of human experience.  The reality is that there are billions of people, a billion Muslims, a billion Christians, and lots of others who profess a faith of some sort.  So, to not talk about faith denies an element of the human experience; to not talk about war denies the existence of warfare.  It’s not intentionally controversial, it’s a recognition that this is part of the human experience, and that it’s reasonable for us to consider what role computing has played in it.

So, let’s take computing and war for example. This is the one that I’ll be giving my first lecture [Woven on the Loom of Sorrow: The Co-Evolution of Computing and Conflict] on at the Computer History Museum on February 24.  My premise is that war is part of the human experience, for better or worse.

By the way, a piece of background you must recognise is that I trained to be a warrior.  I went to the Air Force Academy and I learned about war, and many of my classmates have killed people in anger in warfare.  It’s part of the life in which I have lived.

And yet, if you look at the parallel story of computing and warfare, the conclusion I draw is that computing was, at one time, a companion to warfare; it now is a means of warfare, and it’s quickly becoming a place of warfare.  I’d like to tell that story: an observation, from an insider, of how computing has both enabled and been shaped by warfare.

I think the average person would be surprised to know that your average smart phone, and a considerable amount of technology, exists simply because of what happened during the Cold War and World War II.

2012 is the centenary of Alan Turing's birth

There are surprises in those regards.  There are also some incredible personal stories. The tragic story of Alan Turing... [considered to be the father of computer science and AI]

Absolutely!

Who changed the course of World War II.  He saved a nation, and yet that very nation eventually condemned him because he was homosexual. Go figure!

Will the lecture be something that people around the world will be able to access?

Our intent is to make it available on our YouTube channel and the museum’s channel. And I believe the local PBS station, KQED, has an interest in making it available on their channels as well.

Wonderful!

So, yeah, we’re going to see a wide distribution of this.  Ultimately, you can view this as the alpha (or beta) of what we’re trying to do with the series.  One of the main things we’d like to get out to the world is an eleven-part series for broadcast. This [lecture] is not the broadcast, but we’re talking about it and this is one of the lectures about it.

What is the end product, or goal, of the COMPUTING: The Human Experience project? Would you say that the series is the end product, or is it something that doesn’t necessarily have to have an end?

It won’t ever have an end because I hope we will develop a dialogue with the public that goes on far beyond this.

Look at Sagan’s Cosmos; it’s still being seen to this day.  I hope, and I certainly strive, to produce something as interesting and as timeless as that.  So, I’ll put it in the terms of [political scientist] Herbert Simon:  ‘what our intermediate stable forms are‘…  We want to produce eleven one-hour episodes (that’s a big thing), have a book, an e-book, curriculum materials, some apps.  Those are the physical things we’ll actually be delivering.

To that end, you’ve already gone through one very successful Kickstarter funding round.  I’m sure there will be others, but, other than helping to fund the project, what can readers of the Innovation interviews do to help you, and the project, reach some of those goals?

I think there are two things: my wife Jan and I have self-funded this for the last four years, but we’ve now gone out for funding, like with Kickstarter – the very process of doing a Kickstarter has brought a number of volunteers to us.  In the next few years, we need to raise about eleven million dollars to pull this off.  We’re going to foundations, we’re talking to individuals, and we’ll continue on that path.

Grady and Jan Booch

In a recent interview with Grady, Darryl K. Taft noted, “Meanwhile, Jan’s role on the project is multi-faceted.  As a social worker, she attends to issues of multiculturalism, inclusivism and the impact computing has had on society.  As a psychotherapist, her focus is on how human desires and needs have shaped and continue to shape the development of computing technology.  As a theologian, her focus is on the moral and ethical issues found in the story of computing.  Finally, as a non-technical person, she assures that the stories will be approachable, understandable and interesting to the general public.”

Working on the book and lecture series allows us to continue story development in a very, very low-cost kind of way. So, one of the things that I hope people can do is to say: “Hey! I know a guy who knows a guy, who works for this person, and they may be interested.” I hope we can find some serendipitous connections to people with whom we can find some funding.

I know foundations within the US, but I don’t know what opportunities there are in other parts of the world; we’re telling a global story so I hope we can get some connections that way.

The second is: I hope that people will look at this and say: “This is interesting. I think you should tell this story or that story.”  And so I hope from this people will come to us and help inform us as to what they think the world should know about.

[They hope to collect more than 2,000 human experience videos for their YouTube channel, so don’t be shy, make a video!]

Along with a magnificent creative team, you have an extremely eminent board for the COMPUTING: The Human Experience project. In particular, I must note Vint Cerf, who helped me kick off the Innovation interview series and really was integral in its initial success. How did you gather those people around you?

My philosophy is to surround myself with people far smarter than I am, because they know things that I will never know.  I want to be able to go to them for two reasons: one is as a source of information, and the second is as a source of contacts.

Tim O'Reilly

I reached out to this set of people and I’m going to be growing the board to around 20 or 30 in total – people who have specific expertise and who have been game changers in certain domains.

I’ll give you a great example of how this has worked well: Vint, Tim O’Reilly and Mary Shaw have been particularly useful for me thus far, but for developing the lecture on computing and warfare, one of the people on my board is Lt. Gen. William Lord, who happens to be the Chief Information Officer and Chief of Warfighting Integration of the Air Force.

Mary Shaw

He has helped me out because I wanted to get some information that simply doesn’t exist in ‘the literature’: what’s the current doctrine at the war colleges about the use of Predators… what are people thinking?  He put me in touch with people who have that source of information.

Lt. Gen. William Lord

Tim has been able to do similar kinds of things.  The computing community, at one level, is a relatively small community; we all kind of know all the movers and shakers.  Well, let’s get them to be a part of this, because I’m also celebrating their story!

You can learn more about Grady via the COMPUTING: The Human Experience website, Grady’s blog and his Twitter feed.

This is part one of a multi-part interview with Grady; be sure to look out for the next instalment – Part Two can be viewed here and Part Three here.

If you’re in the San Francisco area on the 24th of February, I heartily suggest you try and attend Grady’s lecture. If you, like me, are unable to attend, be sure to keep your eye on the COMPUTING: The Human Experience YouTube channel where the lectures will be posted.

[Note: the lecture has now been posted on the Computer History Museum YouTube channel.  Thanks  to John Hollar for letting us know!]

[Kim, Michael and Grady Skyped from their homes in Sydney and Hawaii.]


Antics with Semantics: The Innovation Interview with Semantics Pioneer, Ora Lassila

Wanting to speak to someone, both interesting and inspiring, about the Semantic Web and Innovation, Ora Lassila, an Advisory Board Member of the World Wide Web Consortium (W3C) as well as Senior Architect and Technology Strategist for Nokia‘s Location and Commerce Unit, was the obvious ‘go to guy’.

A large part of Ora’s career has been focussed on the Semantic Web as it applies to mobile and ubiquitous computing at the Nokia Research Center (NRC), where he, among many things, authored ‘Wilbur’, the NRC’s Semantic Web toolkit.   As impressive as that is, the more research I did and the more I found out about Ora, the more fascinating he, and his career, became to me.

Ora is one of the originators of the Semantic Web, having been working within the domain since 1996.  He is the co-author (with Tim Berners-Lee and James Hendler) of the, to date, most cited paper in the field, ‘The Semantic Web’.  Ora even worked on the knowledge representation system ‘SCAM’,  which, in 1999, flew on a NASA Deep Space 1 probe.

Leading up to our attendance and presentation at the Berlin Semantic Tech and Business Conference, Michael – the true ‘tech head’ of KimmiC – and I were extremely pleased that Ora, ‘the Mac Daddy’ of the Semantic Web, gave us so much of his time.   I hope you find our conversation with him as interesting as we did!

[I’ve italicised Michael’s questions to Ora so you are able to differentiate between us – though, I think it will become obvious as you read – lol!]

Ora Lassila (photo credit: Grace Lassila)

Ora Lassila: Capital I Interview Series – Number 13

Let’s start out by talking about Innovation in general, and we’ll move on to the Semantic Web as we go along.   As this is the Innovation Interview Series, the ‘baseline’ question is always: how do you define Innovation?

Good question.  I think many people do not make a clear distinction between ‘innovation’ and ‘invention’.

To me, ‘innovation’ is something that not only includes some new idea or ideas, but also encompasses the deployment and adoption of such.  You can invent clever new things, but if you don’t figure out how to get people to use those new things, you have fallen short of the mark.

How essential has innovation been in your career to date; and how important do you envisage it being, going forward?

It has been important.  A big part of my professional career was spent in a corporate research lab, where inventing new things was less of a challenge than getting these inventions ‘transferred’ to those parts of the corporation that had more capability in promoting their adoption and deployment.

That said, I have learned that ‘technology transfer’ is not always about taking concrete pieces of technology, software for example, and handing them over to someone else for productization.  Sometimes the transfer is more ‘insidious’ and involves influencing how people in your organisation – or outside your organisation – think and see the world.

I would claim that some of my early work on the Semantic Web absolutely fits this definition.  So writing, publishing and talking all constitute viable means.  Also, we should not forget that people need to be inspired.  You cannot just tell them what to do, instead, they have to want to do it.

What do you think are the main barriers to the success of innovation?

I am not kidding when I say that the absolute biggest obstacle is communication.  That is, we should learn to communicate our ideas better to be able to convince people and to inspire them.  I have much to learn in this area.

Who and what inspires you? Where do you look for inspiration?

I have no good or definite answer for that.  When I was younger I was really inspired by the Spanish aviation pioneer Juan de la Cierva whose simple yet radical idea about aircraft – the ‘autogiro’ – paved the way for the adoption of helicopters.  And yet, one might argue that, in many ways helicopters are a far more complicated and complex technology than de la Cierva’s original invention.

Juan de la Cierva y Codorníu, 1st Count of De La Cierva

I am inspired by simplicity… I strive to create and design things that are simple, or at least not any more complicated than necessary.

What are, in your view, the current emerging critical trends in Innovation and technology?

I like openness, things like open-source software as well as Open Access and sharing of data as part of the scientific process.  I am hoping we see a fundamental change in how research is done.  In many ways we have progressed to a point where many problems are so complex that they are beyond a single researcher’s or research group’s capacity and capability to tackle.

Also, on the topic of openness, I like some of the recent developments in open government, e-Government, and such.

And what are some of the coolest mobile technologies you’re seeing launched? 

I am much enamoured with the idea that mobile technologies – particularly via the use of GPS, etc. – ‘ground’ many services to the physical world.  There are many uses for location information, uses that help me in my everyday life.

Furthermore, by making the mobile device better understand the current ‘context’, not only geographically but also by making use of other observations about the physical world (movement, sound, etc.), we can make applications and services better for users.

Do you think we will have a ‘meshed up’ world that effectively bypasses the stranglehold telcos have on infrastructure?

I don’t necessarily agree that the telcos have a ‘stranglehold’.   They provide an important service and a critical investment in an infrastructure I don’t really see us living without.

But we need things like ‘net neutrality’ to make sure that this infrastructure really serves people in an open and non-discriminatory way.  In this regard I am also concerned about more recent legislative attempts [SOPA, PIPA, ACTA] that (perhaps unintentionally) will hurt the overall technical function of the Internet.

It seems that current Web based business models are founded on the idea that businesses have the right to record everything about users/consumers and profit from this information.  Do you think this is a sustainable business model, or do you think the user/consumer will start to think that they, and their data, is worth something and begin to demand recompense of some sort?

There are very few fundamentally different, viable, business models on the Web, so I can see that businesses would want to cash in on user data.  It is only a matter of time before the consumers ‘wise up’ and understand the value of their own data.  Personally I think we should aim at ‘business arrangements’ where all parties benefit.  This includes concrete benefits to the user, perhaps in a way where the user is a bona fide business partner rather than just someone we collect data about.

It is important to understand that what’s at stake here is not only how user data could be monetised, but also users’ privacy.  Luckily I work for an organisation [Nokia] that takes consumer privacy very seriously.

You’ve got a fascinating history, and seem to have gotten into the Semantic Web at the very beginning.

The very, very beginning, yes.  I think I can argue that I’ve been doing this longer than the term has actually existed.

In ’96 I went to work at MIT…  I’d just been hired by Nokia, and they wanted to send somebody to MIT as a kind of visiting faculty member.   So, I worked in Tim Berners-Lee’s team, and one day he asked me what I thought was wrong with the web.

Tim Berners-Lee

Just a small question.

Yeah, not intimidating at all.

I said: “My hope has been to be able to build,” – what then would have been called agents, autonomous agents – and I said: “I can’t really do that because the web was built for humans and human consumption.  I would really, really like to see a web that was more amenable for consumption by automated systems.”

And he [Berners-Lee] said: “Yeah, that’s it! Now, how do we fix that?”

And I went: “Well, how about we try knowledge representation and apply that to web technologies?”  Because knowledge representation is a branch of artificial intelligence that has a long history of taking information and representing it in such a way that you can reason about it and then draw conclusions from it… things like that.  We agreed that I would look into that, and that’s really how I got into all this.
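[A toy sketch of that idea – represent a little knowledge, then let the machine draw a conclusion that was never stated – using Python’s rdflib library.  The vocabulary, class names and resource names here are invented purely for illustration, not taken from any project Ora mentions.]

    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, RDFS

    EX = Namespace("http://example.org/")  # illustrative vocabulary

    g = Graph()
    # A tiny 'model of the world': helicopters are rotorcraft, rotorcraft are aircraft.
    g.add((EX.Helicopter, RDFS.subClassOf, EX.Rotorcraft))
    g.add((EX.Rotorcraft, RDFS.subClassOf, EX.Aircraft))
    # One observed fact.
    g.add((EX.c30, RDF.type, EX.Helicopter))

    # Ask a question whose answer was never stated directly:
    # is ex:c30 an aircraft?  Following the subclass chain concludes that it is.
    q = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX ex:   <http://example.org/>
    ASK { ex:c30 a ?cls . ?cls rdfs:subClassOf* ex:Aircraft . }
    """
    print(g.query(q).askAnswer)  # -> True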

Of course, I had worked on various projects before that which involved ontologies and knowledge representation; it just wasn’t done on the web, the big reason being that the web had not really been invented yet.

There was Cyc and some other AI [Artificial Intelligence] things before that… 

Cyc is a very good example of an attempt to build a very large ontology that would encompass common sense knowledge.  But there are many examples of systems that used ontologies in one way or another for narrower domains.  Cyc was an overly ambitious project, in the sense that they really wanted to cover a lot of ground in terms of human knowledge.

I had worked on several projects in the past that applied ontologies to things like planning industrial production, or planning logistics.  So, the question really was, could you build a model of the world that was rich enough and precise enough that a system could use that knowledge to create plans for various things.  In my case those were plans for how to run either industrial production or large fleets of logistics resources.

You were a long, long way in front of everybody else… at least ten years.  It’s incredible!

One might argue too far ahead.

I think at that time most people were just trying to come to grips with basic HTTP and web servers.  If you look at the vested interests, especially of software providers at that time… I guess it wasn’t really the right timing. But I think that time is coming now.

Yeah, I think we’re in a better position now and we’ve certainly seen a lot of adoption of Semantic Web technologies in the last few years.

I think elements of the Semantic Web are brilliant.   RDF, for example, is one of the smartest ways I’ve ever seen of describing something.  You can’t break the way RDF talks about something, whereas you can easily break the interpretation in XML.
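[For readers who haven’t seen one: a minimal sketch of an RDF description, written with Python’s rdflib library.  The ‘ex:’ resources and the homepage URL are purely illustrative; the FOAF terms are a real, shared vocabulary.]

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF, FOAF

    EX = Namespace("http://example.org/")  # illustrative namespace

    g = Graph()
    # Every statement is a subject-predicate-object triple; the shared FOAF
    # vocabulary carries the meaning, independent of any particular document layout.
    g.add((EX.ora, RDF.type, FOAF.Person))
    g.add((EX.ora, FOAF.name, Literal("Ora Lassila")))
    g.add((EX.ora, FOAF.homepage, URIRef("http://example.org/ora/home")))

    # The same triples can be written out in different syntaxes
    # without changing what they say.
    print(g.serialize(format="turtle"))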

I start to lose traction with it when it gets towards ontologies.  Do you think that ‘splitting the message’ would help with adoption?  For instance, you can use ontologies, but there is also a part of semantics which is brilliant for just doing ‘business as usual’?

I think there is a fairly broad spectrum of possible ways of making use of this technology.  I’m sure you’ve seen diagrams of the so called layer cake, with the different technologies layered on top of one another.

A Semantic Web Stack (layer cake) [image created by Tim Berners-Lee]

I think that it’s up to you to decide how far up that layered structure you want to go.  There are a lot of applications where very simple use of just some of the most basic technologies will give you a lot of benefit.  And then there are other problems where you may actually want to separate a lot of the understanding of your domain from your actual executing code…  for those kinds of things, encapsulating that knowledge in the form of a potentially very complex ontology may be a good way to go.
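[To make the ‘simple end’ of that spectrum concrete: the sketch below loads a handful of RDF statements and asks one SPARQL question of them, again using Python’s rdflib; no ontology is involved.  The data is invented for illustration.]

    from rdflib import Graph

    # A few triples in Turtle syntax, invented for illustration.
    data = """
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    @prefix ex:   <http://example.org/> .

    ex:ora   a foaf:Person ; foaf:name "Ora Lassila" ; foaf:knows ex:timbl .
    ex:timbl a foaf:Person ; foaf:name "Tim Berners-Lee" .
    """

    g = Graph()
    g.parse(data=data, format="turtle")

    # One plain question, no ontology required: whom does ex:ora know?
    q = """
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    PREFIX ex:   <http://example.org/>
    SELECT ?name WHERE { ex:ora foaf:knows ?who . ?who foaf:name ?name . }
    """
    for row in g.query(q):
        print(row.name)  # -> Tim Berners-Lee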

My issue with ontologies is exactly the same issue I have with the current enterprise software providers… If you talk about mass adoption, as opposed to just specific domain adoption, for every extra entity – be it a class or data table – you decrease your adoption exponentially.   And, once you go up to higher levels, you shouldn’t assume you’re the only person that has a valid way of looking at the world, though you may be using the same data.  I think we’re saying the same thing…

Absolutely.  The interesting thing to say about the current enterprise software providers, I think, is that they have one model of the way to look at the world.   There are cases where companies have had to change the way they do business in order to adopt the enterprise software [currently available].

You have two choices: you either do it their way or else you spend a few million bucks and you do it their way anyhow.

I think that there is a possibility, with these Semantic Web technologies, of getting into more flexible uses of information and I kind of like that idea.

Over the last few years I’ve become increasingly interested in our ability to share information.  When you start talking about sharing, it becomes really dangerous to have very complex, strictly defined semantics.  Because, like you said, other people might have a different interpretation of things.

But you want to nail some things down.  Understanding something about [the] information would give you a baseline for interoperating.  And then, you could do ‘better’ interoperation if you had a better definition of the meaning of the information.

I agree with you about understanding information.  But I think where most things fall to pieces – and this also applies to business modelling languages and the like – is as soon as you get anywhere near processes with that information; it goes to hell pretty quickly.

Exactly.  I spent a few years, at the beginning of the previous decade, working on a large Semantic Web research program funded by DARPA [Defense Advanced Research Projects Agency].  I was part of an effort to see if we could use ontological technologies to model web services.

Is that DAML and stuff like that?

Exactly; DAML, and DAML-S for services.  We very quickly got into process modeling; and those kinds of things get very difficult…

Very quickly.

Absolutely.  I think that’s the thing that still needs work.

The traditional approach to anything process-oriented just doesn’t work unless you have very tight coupling and a very controlled domain.  But I think there are a lot of different ways of trying to solve the same problem without having to get to that level.

I think that one of the things that is missing from the whole Semantic Web collection of specifications is this notion of action… a notion of behaviour.  It’s hard to model, but I think that we ought to work on that some more.

We [KimmiC/FlatWorld] have taken a more hybrid approach, so we use things like REST architecture, and a lot of stuff from the business world, in terms of authentication and authorisation. 

Sure.  I’m not in any way advocating the use of the WS-* collection of technologies.  I’m not a big fan of those.

I’ve looked at all the SOAP stuff and there are a lot of problems… like business process deployment.  It is a nightmare to deploy these technologies.  It’s even more of a nightmare to load balance them.

Right.

Essentially, if you’re looking for dynamic relationships – be it in business or whatever – they’re just useless for that sort of thing.  They’re always designed around having control of a large domain space; this is especially true when it comes to deployment of applications.  I just think they’ve missed the point. 

I think the web is the best example of a redundant, massively-distributed application; and we need to look at it more as, “That’s the model,” and we have to work with it.

Absolutely.  I think that for 20 years there have been discussions about these sorts of ad hoc enterprises, or collections of smaller companies, being able to very quickly orchestrate themselves around a particular mission [purpose].  But I think that these technologies, just like you said, are probably not the right answer.

When you wrote your 2009 position paper you noted that, rather than languages, the biggest issues facing the uptake of the Semantic Web were 1. selling the idea; and 2. a decent user interface.

Why did you feel that was the case then; and, has your opinion changed regarding these issues in the two+ years since you wrote your paper? 

Semantic Web technologies are well suited to situations where you cannot necessarily anticipate everything – say, about the conditions and context in which an application is used, or which kind of data an application might have available to it.  It is like saying that this is a technology for problems we are yet to articulate.  Sounds like a joke, but it isn’t, and the problem in ‘selling’ Semantic Web technologies is often about the fact that once a problem has been clearly articulated, there are many possible technologies that can be used to solve it.

The issue I have with user interfaces and the user experience is the following: Semantic Web technologies – or more generally, ‘ontological’ technologies – give us a way to represent information in a very expressive manner… that is, we can have rich models and representations of the world.  I feel that user interface technology has a hard time matching this expressiveness.  This issue is related to what I said earlier about not being able to anticipate all future situations; writing software that can handle unanticipated situations is hard.

All that said, I don’t like the term ‘Semantic Web applications’.  Users shouldn’t have to care, or need to know, that Semantic Web technologies were used.  These are just useful things in our toolbox when developing applications and services.

What are the key challenges that have to be solved to bring those two problems together?

I am really looking for new programming models and ways to add flexibility.  This is not only a technical problem, we also need to change how people think about software and application development.  I have no silver bullets here.

How do you see applications developing in the next few years – compared to the current environment – given that, as you have mentioned, we have to shift our minds from an application that ‘owns and controls’ its own data to one that simply interacts with data?

I think, again, this is about changing how people think about application development.  And, more specifically, I would like to see a shift towards data that carries with it some definition of its semantics.

This was one of the key ideas of the Semantic Web, that you could take some data, and if you did not understand it, there would be ‘clues’ in the data itself as to where to go to find what that data means.

As I see it, the semantics of some piece of data either come from the relationship this data has with other data – including some declarative, ‘machine-interpretable’ definition of this data, for example, an ontology – or are ‘hard-wired’ in the software that processes the data.  In my mind, the less we have the latter, and the more we have the former, the better.
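[A minimal sketch of that ‘follow the clues’ idea in Python’s rdflib, assuming the vocabulary publisher serves a machine-readable definition at the property’s own URI, as FOAF traditionally has; whether a given publisher still does is outside the data’s control, so treat this purely as an illustration.]

    from rdflib import Graph
    from rdflib.namespace import FOAF, RDFS

    # Some data arrives that uses a property this application has never seen.
    g = Graph()
    g.parse(data="""
        @prefix ex:   <http://example.org/> .
        @prefix foaf: <http://xmlns.com/foaf/0.1/> .
        ex:ora foaf:knows ex:timbl .
    """, format="turtle")

    # The property's URI is itself the clue: dereference it and read whatever
    # definition the publisher has put there (for FOAF, an RDF description).
    defs = Graph()
    defs.parse("http://xmlns.com/foaf/0.1/knows")

    for comment in defs.objects(FOAF.knows, RDFS.comment):
        print(comment)  # the published account of what foaf:knows means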

In previous interviews you’ve noted that you feel users should have a say “in how they view  information.”  Do you think that users should become involved in making the semantic web more ‘usable’? And if so, how?

I think users should demand more.  There needs to be a clear ‘market need’ for more flexible ways of interacting with information.  User experience is a challenge.

On this topic, I also want to point out how unhappy I am with the modern notion of an ‘app’.  Many apps I have seen tend to merely encapsulate information that would be much better offered through the Web, allowing inter-linking of different content, etc. It kind of goes with what I said earlier about openness…

There’s a lot of guys saying they can plug two systems together easily, but it almost always means at the data level.   It doesn’t really work once you start applying context on top of it.

I’d like to see a middle ground where we have partial interoperability between systems, because that’s how humans interact.

That’s something we’re looking at as well.  I view it like this: when I go through Europe, I can speak a little bit of German, a little bit of French. I’m not very good, but I have to have a minimal level of semantic understanding to get what I want: to get a beer.  I don’t have to understand the language completely, just enough, in context, to act on it.

Speaking of acting on things… Ora, where are you going with semantics in the future?

That’s a good question. Right now I’m working on some problems of big data analytics.

With semantics?

Nokia is investing in large-scale analytics, so I’m in the middle of that right now.

I’m currently looking at how to tackle the problem of how to bootstrap behaviour.  Behaviour and notions of action are not well-tackled in the space of the Semantic Web, and I’d really like to get into bringing two information systems in contact with one another, and have them figure out how to interoperate.

That’s very ambitious.

Right.  And I’m not entirely sure if people understand that that’s an important question to tackle.

Oh, it’s an important question to tackle; it’s just more a question of… Again, you’re very far ahead of the game.

Well, I think that today, if you want to make systems A and B interoperate, it’s usually a large engineering undertaking.  So, it’s directly related to the question of separating information from applications…  you could pick the applications you like and take the information that you’re interested in and make something happen.  In terms of interoperating systems, right now we have a situation where we either have full interoperability, or we have nothing… we have no middle ground.

You can learn more about Ora via his website, blog and  Twitter feed.

[Kim, Michael and Ora Skyped from their homes in Boston and Sydney.]

[This interview has been translated into the Serbo-Croatian language by Jovana Milutinovich of Webhostinggeeks.com]