
Zen and the Art of Software: The Innovation Interview with Grady Booch (Part 3)

In parts one and two of our chat with software star Grady Booch, we discussed his magnum opus project COMPUTING: The Human Experience, Innovation, the Computer History Museum and the possible changing brain structure of Millennials, among many other things.

In this, the final segment of our discussion with him, we look at software – and software architecture – in general, Grady’s relationship with it in particular, the troubles facing Google and Facebook, the web, and his views on the SOPA, PIPA and ACTA bills.

To view the full introduction to this multi-part interview with Grady: Click here

Grady Booch: Capital I Interview Series – Number 14

[This was a joint conversation between Grady, Michael and myself. I’ve italicised Michael’s questions to Grady so you are able to differentiate between us – though, I think it will become obvious as you read – lol!]

Grady, you are credited with the building, writing and architecting of so much technology; of all of those things, what is it that you are most proud of?

There are three things.  The first one is very personal.  My godson – he would have been eight or nine at the time – was given a task at his school to write about a hero, and he wrote about me.  That was pretty cool!  Everything else is details, but I’m really proud of that one.

On a technical basis, I’m pleased with the creation of the UML – not just because of the thing we created, but because of the whole idea and industry that grew around it: that being able to visualise and reason about software-intensive systems in this way is a good thing.  So, I think we lifted the tide for a lot of folks.

UML Diagrams

I contributed to the notion of architecture and looking at it from multiple views, and how to represent it.  I feel good about the whole thing around modelling and architecture and abstraction.  I think I helped people and I feel good about that.

UML was certainly a game changer.  I remember when it came in, before you got bought up by IBM.  It was like a wave going across the globe.  It made a profound difference.

And it’s different now because it’s part of the oxygen.  Not everybody is using it, that’s okay – not everybody is using C++ or Java and that’s fine – but I think we changed the way people think.

Our estimates are that UML has a penetration of somewhere around 15 to 20 percent of the marketplace.  That’s a nice number.  We’ve changed the way people build things.

Absolutely, especially at the big end of the market.

Yeah.  I wrote an article in my architecture column that tells the story of when I was dealing with my aneurysm.  I was lying in a CT scan machine at the Mayo Clinic, looking up and saying: “My gosh, I know the people who wrote the software for this, and they’ve used my methods.”  That’s a very humbling concept.

It’s a pretty good acid test, isn’t it?

Yes, it is.

And your work is continuing in architecture…

Correct.  I continue on with the Handbook of Software Architecture, and a lot of what I do, both on the research side and with customers, is to help them in the transformation of their architectures.

For IBM, over the last nine months or so, I’ve been working with the Watson team – the Jeopardy!-playing system – and helping the teams that are commercialising its technology.

How do you take this two-million-line code base, built by 25 men and women, drop it in another domain and give it to a set of people who know nothing about that system?  This is exactly the kind of stuff that I do for customers, and I’ve been helping IBM in that regard.

That would be very challenging.  You’d need somebody with your brain power to actually manage that, I imagine.

Well, it’s not an issue of brain power; it’s an issue of how one looks at systems like this and reasons about them in a meaningful way.  This is where the UML comes in – because it allows us to visualise the system – along with the whole notion of architecture viewed from multiple dimensions; all these things come together.  They make a two-million-line code base understandable to the point where I know where the load-bearing walls are and I can manipulate them.

That is pretty impressive!  You’ve found a way of managing the slicing and dicing of the codebase.

That’s a problem that every organisation faces.  I have an article that talks about the challenges Facebook is going to have.  Because they… every software-intensive system is a legacy system.  The moment I write a line of code, it becomes part of my legacy…

Especially if you’re successful upfront and get massive growth, like they did.

Yes, and having large piles of money in your revenue stream often masks the ills of your development process.

Absolutely.

Google’s faced that, Facebook is facing that.  They have about a million lines of [the programming language] PHP that drives the core Facebook.com system – which is really not a lot of code – still built on top of MySQL, and it’s grown and grown over time.

I think, as they split development across the coasts – because they’re opening up a big office in New York City – that very activity changes the game.  No longer will all of the developers fit within one building, so the social dynamics change.

Inside Facebook's Madison Avenue Offices

Ultimately, what fascinates me about the whole architecture side of it is that it is a problem that lies on the cusp of technology and society.  It’s a technical problem on the one hand – so there are days I’ll show up as an uber geek – and on the other hand, it’s a problem that’s intensely social in nature, so I’ll show up as a ‘Doctor Phil’.

To follow up on one of Kim’s questions: if you look at the backlog of IT, I think every company of moderate size is still struggling to deliver on business demands.  Do you think that architecture helps, or does it actually contribute to the problem?

Architecture can help in two ways.

I’ll give you one good example.  There is a company called OOCL (Orient Overseas Container Line) that I worked with some years ago to help them devise an architecture for their system that tracks containers and all these kinds of things.  Their CEO had this brilliant notion: what would happen if we were to take this system, extract all of the domain-specific bits of it, and then sell that platform?

By having a focused-upon architecture, they were able to devise a platform – this is a decade before Salesforce.com and these kinds of things – and they could then go into a completely new business and dominate that side of the marketplace.  Here is an example where a focused-upon architecture has a real, material, strategic business implication.

The other thing a focused-upon architecture offers is that it allows you to capture the institutional memory of a piece of software.  The code is the truth, but the code is not the whole truth.  So, in so far as we can retain the tribal memory of why things are the way they are, it helps you preserve the investment you made in building that software in the first place.

What sort of size company are you talking about?  It sounds like the telco space… large Tier 1 and Tier 2 companies.

It could be anybody that wants to dominate a particular business.  Salesforce.com built a platform in that space.  Look at AUTOSAR as another example.  AUTOSAR was an attempt by BMW, and others, to define a common architectural platform, hardware and software, for in-car electronics.  By virtue of having that focused-upon architecture, all of a sudden you have unified the marketplace and made the marketplace bigger, because now it’s a platform against which others can plug and play.

There is a similar effort with MARSSA, which is an attempt to develop a common architectural platform for electronics for boats and yachts.  Again, it eliminates the competition of the marketplace by having a set of standards against which people can play well together.  In the end, you’ve made the marketplace bigger because it’s now more stable.

I agree. Also, an architectural approach separates the data from an application-specific way of looking at things.

It used to be the case that we’d have fierce discussions about operating systems.  Operating systems are part of the plumbing; I don’t care about them that much anymore.  But, what I do care about is the level of plumbing above that.

My observation of what’s happening is that you see domain-specific architectures popping up that provide islands against which people can build things.  Amazon is a good example of such a platform.  Facebook could become that, if they figure out how to do it right – but they haven’t gotten there yet.  I think that’s one of the weaknesses and blind spots Facebook has.

I also think that they are, to a certain extent, a first generation.  I think the web, in terms of connectivity, is not being utilised to its fullest potential.  I don’t see any reason why, for example, any form of smart device shouldn’t be viewed as being a data source that should be able to plug in to these architectures.

Exactly!

Would that be an example of a collaborative development environment?

Well, that’s a different beast altogether.

With regard to collaborative development environments, what led me to interest in that space was emphasising the social side of architecture.  Alan Brown [IBM engineer and Rational expert] and I wrote a paper on collaborative environments almost ten years ago, so it was kind of ahead of its time.

Alan Brown

The reason my thinking was in that space was extrapolating the problem of large-scale software development, as we’re becoming more and more distributed, to just how one attends to the social problems therein.  If I can’t fit everybody in the same room, which is ideal, then what are the kinds of things I can do that will help me build systems?

I’ve observed two things that are fundamental to crack to make this successful.  The first is the notion of trust: in so far as I can work well with someone, it’s because I trust them.  You, Kim, trust your husband Michael, and therefore there is this unspoken language between the two of you that allows you to do things that no other two people can do together.

Now, move that up to a development team, where you work and labour together in a room, where you understand one another well.  The problem comes – like with Facebook, and what we’ve done in outsourcing – when you break apart your teams (for financial reasons) across the country or across the world.  Then, all of a sudden, you’ve changed the normal mechanisms we have for building trust.  Then the question on the table is: what can one do to provide mechanisms for building trust?  That’s what drives a lot of ideas in collaborative development environments.

The other thing is the importance of serendipity – the opportunity to connect with people in ways that are unanticipated, this option of ‘just trying things out’.  You need to have that ability too.  The way we split teams across the world doesn’t encourage either trust or serendipity.  So, a lot of ideas regarding collaborative environments were simply: “What can we do to inject those two very human elements into this scheme?”

As we have been talking about trust, I’m curious as to your opinion on the SOPA, PIPA and ACTA bills.

I’ve Tweeted about it, and I’m pretty clear that I think those bills are so ill-structured as to be dangerous.

I get the concept, I understand the issues of piracy and the like, and I think something needs to be done here.  But I’m disturbed by both the process that got us there and the results.  Disturbed by the process in the sense that the people who created the bills seemed to actively ignore advice from the technical community, and were more interested in hearing the voices of those whose financial interest would be protected by such a bill.

The analogy I make would be as if, all of a sudden, you made roads illegal because people do illegal things in their cars.  The way the process that led up to this bill was set up was stupid, I think, because it was very, very political.  From a technical perspective, while I respect what needs to be done here, the actual details of it are so wrong – they lead you to do things to the web that are very, very destructive indeed.  That’s why I’m strongly, strongly opposed to it.  And I have to say that this is my personal opinion, not that of IBM, etc.

This is the final segment of our multi-part interview with Grady Booch. Part One can be read here, and Part Two can be read here

You can learn more about Grady via the COMPUTING: The Human Experience website, Grady’s blog and his Twitter feed. Be sure to keep your eye on the COMPUTING: The Human Experience YouTube channel where Grady’s lecture series will be posted.

[Kim, Michael and Grady Skyped from their homes in Sydney and Hawaii.]

Zen and the Art of Software: The Innovation Interview with Grady Booch (Part 2)

In the pantheon of world-famous computer scientists, Grady Booch is the star who co-authored the Unified Modeling Language (UML) and co-developed object-oriented programming (OOP). He is a Fellow of IBM, the ACM and the IEEE, the author of six books, hundreds of articles and papers, and the Booch Method of software engineering. Grady serves on several boards, including that of the Computer History Museum, and is the author, narrator and co-creator of what could be seen as a historical magnum opus of the technological world, COMPUTING: The Human Experience.

To view the full introduction to this multi-part interview with Grady, and Part 1 of the series: Click here

Grady Booch: Capital I Interview Series – Number 14

[This was a joint conversation between Grady, Michael and myself. I’ve italicised Michael’s questions to Grady so you are able to differentiate between us – though, I think it will become obvious as you read – lol!]

Grady, let’s begin with the very basics. As this is the Innovation Interview Series, let’s start with: how do you define innovation?

Ecclesiastes 1:9 has this great phrase:

“What has been will be again.  What has been done before will be done again.  There is nothing new under the Sun.”

The way I take it is that innovation – really deep innovation – is about warming the Earth with the Sun of your own making. And to that end, that’s how I distinguish the ‘small i’ from the ‘Big I’.

The ‘small i’ therefore means: I may have a brilliant idea and it warms me, but the ‘Big I’ Innovation is where I can start warming others.  There are new suns possible; there are new ways of warming the Earth… And I think innovation is about doing so.

One of my heroes is the physicist Richard Feynman. If you read any of his stuff or watch his physics lectures – which are just absolutely incredible [Ed. Note: As is his series: The Pleasure of Finding Things Out] – there are some conclusions you can draw (there is a delightful article someone wrote about the nine things they learned from Feynman).  The way I frame it is to say that I admire him and his innovation because he was intensely curious, but at the same time he was bold; he was not fearful of going down a path that interested him.  He was also very child-like and very, very playful.  In the end, what really inspires me from Feynman’s work is that he was never afraid to fail; much as Joseph Campbell observed, he followed his bliss.

Richard Feynman

I think that many innovators are often isolated because we’re the ones who are following our bliss; we really don’t care if others have that same bliss.  We are so consumed by that, that we follow it where it leads us, and we do so in a very innocent, playful way… We are not afraid to fail.

I’ve noticed that there is often a level of audacity and a lack of fear within innovators, but sometimes I wonder if that audacity and lack of fear could frighten general society.

Well, I think there’s a fine line between audacity and madness.

And that depends on what side of the fence you’re on.

Exactly. It also depends upon the cultural times. Because, what Galileo said in his time [that the earth and planets revolve around the sun] was not just audacious, it was threatening.

To the church, absolutely.

In a different time and place [the response to] Galileo would have been: “Well, yeah, that’s right.  Let’s move on now.”  [Instead, Galileo was tried by the Inquisition, found suspect of heresy, forced to recant, and spent the rest of his life under house arrest.]  The sad thing is you may have the most brilliant idea in the world, but you will never go anywhere.

Take a look historically at Charles Babbage.  I think he was a brilliant man who had some wonderful ideas; he was very audacious, and yet he’s a tragic figure because he never really understood how to turn his ideas into reality.  [A mathematician, philosopher, inventor and engineer; Babbage originated the idea of a programmable computer.]  That’s what ‘Capital I’ means to me.  I think that’s why Steve Jobs was so brilliant; it’s not just that he had cool ideas, but he knew how to turn them into an industry.

We have a golden rule that it really doesn’t matter how cool your tech is if nobody’s using it. And it’s a shame because there are some incredible innovations out there, but so many innovators haven’t learned the Jobs magic of marketing.

KimmiC rule: It doesn’t matter how ‘bright the light’ if no one is using it to read.  

I think that’s especially true of our domain of computing systems, because we are ones who are most comfortable – as a gross generalisation – with controlling our machines.  Being able to connect with humans is a very different skill set. To find people who have the ability to do both is very, very challenging indeed.

Zuckerberg is a brilliant programmer, and he had the sense to surround himself with the right people so that he could make those things [Facebook] manifest.  There are probably dozens upon dozens of Zuckerbergs out there, who had similar ideas at the same time, but they didn’t know how to turn them into reality.

The same thing could be said of Tim Berners-Lee: a brilliant man, a nice man…  He was in the right place at the right time and he knew how to push the technology he was working on.  He was developing things within a vast primordial soup of ideas.

Tim Berners-Lee

HyperCard was out; and why didn’t HyperCard succeed while Tim’s work did?  Part of it is technical, part of it was just the will of Apple, and part was his [Tim’s] being in the right place at the right time.

And HyperCard influenced Tim.  Even Bill Atkinson, creator of HyperCard, said: if only he had come up with the notion of being able to link across [Hyper]card decks, then he would have invented the prototypical web.  But, he didn’t do it, he didn’t think about it.

Do you feel that you are ‘in the right time, at the right place’?

There are times that I think I was born in the wrong century, but I know that if I had been born in the Middle Ages, at my age, I would be long dead.

So, yes, I can say from a very philosophical basis: I am quite content with the time in which I am now living, because I cannot conceive of any other time in which I could have been successful.

I read a quote on Wikipedia… a story you apparently told:

… I pounded the doors at the local IBM sales office until a salesman took pity on me. After we chatted for a while, he handed me a Fortran [manual]. I’m sure he gave it to me thinking, “I’ll never hear from this kid again.” I returned the following week saying, “This is really cool. I’ve read the whole thing and have written a small program. Where can I find a computer?” The fellow, to my delight, found me programming time on an IBM 1130 on weekends and late-evening hours. That was my first programming experience, and I must thank that anonymous IBM salesman for launching my career.”

It sounds like you were quite fortunate to have bumped into someone who was willing to take a chance with you very early on.

I think that’s fair to say.  Though, if it hadn’t been that person, I imagine the universe would have conspired to find me another person, because I was so driven.  Looking back over fifty-some years past, that was the right time and place.  It may just have happened that that was the right time and guy.  But there would have been others.

Grady Presenting

[But] I haven’t told you about the missteps I had and the people who rejected me; we just talk about the successes.  Historians are the ones who write history. Because it’s the history of the winners, we don’t tend to write about the failures.  But even Edison pointed out… I forget the exact quote, but the reason he succeeded so much is that he did so much and failed; he failed more than others on an absolute basis, but he tried more.

“I have not failed. I’ve just found 10,000 ways that won’t work.” ― Thomas A. Edison

What, in your view, gets in the way of the success of innovation?

I think the main thing is the fear of failure. I run across people like Babbage, for example… or this gentleman I was mentoring earlier today, who are so fearful of not doing something absolutely perfect that they are afraid to turn it into a reality. I think some innovators are so enamoured with perfection they are afraid to fail and therefore never do anything.

Within this milieu you seem to have had your fingers in many interesting pies.  One that I think must be especially fascinating is your work with the Computer History Museum.  How did you get involved in that?

In a way they came to me.  My interest has been in software, it always has been.  I forget the circumstances but, some years ago, I connected with John Toole, who was the original CEO of the Computer History Museum when it was in the Bay Area. He showed me around the warehouse that they had set aside at Moffett Airfield.

Not long before that they had shipped a lot of the materials from the old computer museum in Boston out to the Bay Area.  Gordon Moore [co-founder and Chairman Emeritus of Intel] and others had said they wanted to make a museum, and they funded that effort.  So, I was around the edges of it in the early days. I thought it was fascinating.

I think the reason it attracted me in the first place, in general, is that I have an interest in the appreciation of history, not just the history of technology, but just the history of humanity.

As I went to the exhibits I remember making the observation to John that I thought their plans were great, but, projecting out to one or two generations, there wasn’t going to be too much that was interesting to display in the museum because, by then, all of the hardware would have shrunk to a tiny size and we’d need microscopes in the exhibits.

“And so, therefore, John”, I said, “what are you doing about preserving the history of software,” which is a very ephemeral thing.

Think about getting the original source code to the [IBM Operating System] 360, or the original source code to Facebook.  Because these are such ephemeral things, people throw them away.  In fact we, IBM, no longer have the original source code to the first version of OS/360; it’s gone.  There are later versions but not the original one.

Facebook Source Code

When Microsoft decided to stop production on the Microsoft Flight Simulator, I mean, this was a ground-breaking program, I wrote off to Ray Ozzie [Microsoft CTO and CTA from 2005 – 2010] and said: “What are you guys going to do with the software? Can we have it?”   He munched around for a while, but I think it’s lost for all time.

We’re in an interesting period of time and my passion, which has led me to the museum, is to say: Now is the time to preserve software!  We don’t know how to present it, we don’t know what to do with it once we have it, but let’s worry about that in future generations and just capture it now.

It’s very similar to what Hollywood has found with a lot of their film stock. A lot of it was just being lost or destroyed, but there is so much cultural history in those records.

Yes, exactly.  So, prior to being on the board, I set up a workshop at the museum looking at the preservation of classic software.  I wrote to 500 of my ‘closest friends’… people ranging from Marvin Minsky [cognitive scientist in the field of AI] to some other developers I knew, and everybody in between, and asked: “What software would you preserve for future generations?”

We came up with a long list.  I think that very idea inspired Len Shustek, who’s the president of the museum, to invite me on to be on the board of trustees.

What is your favourite exhibit in the museum?

I like the [IBM] 1401 reproduction.  They have a couple of 1401 machines and they’ve gotten them running again.  It’s fun to be in a place where there is something dynamic and alive and running, and you can be in the midst of it.  Just walking into the room, you smell old computers; and that’s a pretty cool kind of smell.  So is the fact that it’s running and clacking away.

The 1401

Fred Brooks [IBM software engineer] and I had an interesting discussion once, in which I lamented the fact that our computers make no noise, because – and I know I sound like an old guy, but – I remember you could hear some of the earlier computers I worked on. They were clattering in one way or another, be it their hard drives or their tapes, and you could get a feel for where the program was just by listening.

You can’t do that now with our machines; they are all very, very quiet. So, the 1401 exhibit has this wonderful visceral immersive display, in which you hear it and smell it as it processes.

I’ve actually seen people get a little misty-eyed just thinking about a dial-up tone, and you certainly seem to have some ‘misty memories’ too.  But, let’s look forward now.  What new things do you think may be exhibited in ten years’ time?

I think that’s the next interesting challenge.  We know how to display physical things, but there aren’t that many more things, like old machines, to collect, because they are disappearing.

If you go to the exhibits, you’ll see things get smaller and smaller and there is more of an interest in software.  I think the interesting problem for the museum to attempt is: how do we present software to the general public so that we open the curtain on it and show some of the magic and the mystery therein?  I think software can be very beautiful, but how do I explain that to someone who can’t see software?  That’s an interesting challenge.

You’ve got to look at it like an art form.  Source code, especially some of the well-written stuff, looks physically beautiful; forget about what it actually does.  There are many different dimensions you can look at to try to get people’s interest.

[Editor’s Challenge to artists: here is a piece of code I’ve ‘mucked about with’ – why not see what code inspires you to create and send us a picture, which we’ll share with our readers, Grady Booch and the Computer History Museum!]

I think it’s very much like modern art because you can look at a bit of an impressionistic painting and you may not get it. Often the reactions are: “My kid could do that kind of thing.”

Well, not exactly; because the more you learn about it, the more you learn how much that painting – or whatever the art form is –  speaks to you and tells you stories.  It requires a little bit of education.

There is a visceral reaction at first to some art but the more you know about it, the more you can appreciate its subtlety.  I think the same is true of software.  We (the museum) have collected the original source code to MacPaint, which turns out to be a really beautiful piece of software.

I’m using a phrase here that has meaning to me – beautiful – but requires explanation to the general public to say: why is this a beautiful piece of code, why does it look so well-formed?  I think that’s a responsibility we have as insiders to explain and teach that kind of beauty.

What are your thoughts about the emerging trends in Innovation and technology?

Well, the web has been an amazing multiplier, and yet at the same time it’s also increased the noise.  Therefore, the ability to find the real gems in the midst of all this madness is increasingly challenging.  For example, with the computing project [COMPUTING: The Human Experience] we’ve done, we crowdsourced some initial seed funding for our work.

We could not have done this in the past without something like the web.  We put this appeal out to the world and it gave us access to people; otherwise we could not have done it.  I think the web has produced an amazing primordial soup of ideas into which we can tap; and that is so game-changing in so many ways.  That’s probably the biggest thing. [You can contribute to and volunteer for the project here.]

The web has changed everything; and those who don’t keep up are doomed to be buggy-whip producers.

Yes, exactly.  Or companies like Kodak.

I had the opportunity to speak to Kodak’s developers about 15 years ago.  It was a small group of people who were in the computer side of Kodak, and I remember saying to them: “Look guys, the future of Kodak is in your hands… so, what are you going to do about it?”

I Tweeted about it not too long ago with a sort of “I told you so.”  And yet, I don’t know whether or not it was inevitable.  It could be the case that some businesses simply die because they just don’t make sense any more.

And they should die sometimes.  But I think early IBM was a good example of a company that understood what business it was in.  I don’t think Kodak really understood what business it was in, towards the end, and that’s what killed it.

I agree, very much so.

Some web business models are founded on the idea that a company has a right to use and profit from an individual’s data and personal information… What are your thoughts on that?  Do you think that’s a business model that’s sustainable?  I believe that the general public is wising up to this very quickly and will soon expect some recompense for the use of their data.

I think there is a local issue and there is a global issue that is even harder to tackle.  In the case of the Facebooks and the Twitters of the world, the reality is that when I subscribe to those services, I do have a choice – I can choose whether or not to use them.  And the very fact that I’m using those services means I am giving up something in the process.

So, why should I be outraged if those companies are using my data, when I’m getting those services for free?  It seems like a reasonable exchange here, and I, as an adult, have the responsibility of choice.  Where it becomes nasty is when I no longer have choice; when that choice is taken away from me.  That’s when it becomes outrageous: when my data is being used beyond my control [in a way] that I did not expect.

I think that will sort itself out over time; capitalism has a wonderful way of sorting things.  It’s also the case that we have a generation behind the three of us who are growing up, if not born, digital.  They have a very different sense of privacy, so I’m not so concerned about it. We have lots of ‘heat and smoke’ but it will resolve itself.

What I find curious is that the ‘heat and smoke’ and discussions are hardly any different from what was initially said about telephones or, for that matter, the printing of the book.  Look at some histories of how phones were brought into the marketplace and you’ll find almost identical arguments to those that are going on today.

I trust the human spirit and the way capitalism works to find a way.  What’s more challenging is the larger issue, and that is the reality that there are connections that can be made in the presence of this data that are simply beyond anybody’s control.

I may choose to share some information on a social media source, or I may use a credit card or whatever, but the very act of participating in the modern society leaves behind a trail of digital detritus.  And I can’t stop that unless I choose to stop participating in the modern world.

I think this is a case where we’ll have politicians do some profoundly stupid things, and we’ll see lots of interesting cases around it.  But, we’ll get used to it.  I mean, people didn’t like the idea of putting their money in a bank for God’s sake, and we got used to it; I think the same thing will happen.

You brought up the Millennials – the digitised generation. What insights would you give them in being game-changers?

Does any young adult ever want the advice of their elders?

I didn’t ask if they wanted it… 🙂

You know… I think, we laugh about it, but the reality is – and I think Jobs said it well: “Death is a wonderful invention because it allows us to get out of the way and let the next generation find their own way.”  I’m comforted by that; I find great peace in that notion.  They need to have the opportunity to fail and find their own way.  If I were born a Millennial, I’d be growing up in an environment that’s vastly different than mine.

Though, in the end, we are all born, we all die, and we all live a human experience in various ways, there are common threads there… the stories are the same for all of us.  I think those are the kinds of things that are passed on from generation to generation, but everything else is details.

I would not be surprised if the structuring of their brain is different to ours.  I’ve been talking to guys who are 10 to 15 years younger than me, and the ability to hold a train of thought over weeks or months – when you’re doing some serious development or research – is something they seem to find extremely difficult.  So, I wonder if we’ll see any really big innovations coming through from those generations.

You could claim that it’s not just the web that’s done that, but it’s back to Sesame Street and the notion of bright, shiny objects that are in and out of our view in a very short time frame.  Certainly I think a case can be made that our brains are changing; we are co-evolving with computing – we truly are.

But, at the same time, throw me in the woods and I couldn’t find my way out of it easily; I can’t track myself well, I can’t tell you what things are good to eat and what things aren’t.  Those are survival skills that someone would have needed to have had a century or two ago.  So, my brain has changed in that regard, just as the Millennials’ brains are changing. Is it a good thing? Is it a bad thing? I’m not at a point to judge it, but it is a thing.

End of Part Two.  Part Three will be published next week – sign up for the blog and it will be delivered directly to your inbox!

You can learn more about Grady via the COMPUTING: The Human Experience website, Grady’s blog and his Twitter feed. Be sure to keep your eye on the COMPUTING: The Human Experience YouTube channel where Grady’s lecture series will be posted.

[Kim, Michael and Grady Skyped from their homes in Sydney and Hawaii.]

Antics with Semantics: The Innovation Interview with Semantics Pioneer, Ora Lassila

Wanting to speak to someone both interesting and inspiring about the Semantic Web and Innovation, I knew Ora Lassila – an Advisory Board Member of the World Wide Web Consortium (W3C) as well as Senior Architect and Technology Strategist for Nokia’s Location and Commerce Unit – was the obvious ‘go-to guy’.

A large part of Ora’s career has been focussed on the Semantic Web as it applies to mobile and ubiquitous computing at the Nokia Research Center (NRC), where he, among many other things, authored ‘Wilbur’, the NRC’s Semantic Web toolkit.  Impressive as that is, the more research I did and the more I found out about Ora, the more fascinating he, and his career, became to me.

Ora is one of the originators of the Semantic Web, having worked within the domain since 1996.  He is the co-author (with Tim Berners-Lee and James Hendler) of what is, to date, the most cited paper in the field, ‘The Semantic Web’.  Ora even worked on the knowledge representation system ‘SCAM’, which, in 1999, flew on NASA’s Deep Space 1 probe.

Leading up to our attendance and presentation at the Berlin Semantic Tech and Business Conference, Michael – the true ‘tech head’ of KimmiC – and I were extremely pleased that Ora, ‘the Mac Daddy’ of the Semantic Web, gave us so much of his time.  I hope you find our conversation with him as interesting as we did!

[I’ve italicised Michael’s questions to Ora so you are able to differentiate between us – though, I think it will become obvious as you read – lol!]

Ora Lassila (photo credit: Grace Lassila)

Ora Lassila: Capital I Interview Series – Number 13

Let’s start out by talking about Innovation in general, and we’ll move on to the Semantic Web as we go along.  As this is the Innovation Interview Series, the ‘baseline’ question is always: how do you define Innovation?

Good question.  I think many people do not make a clear distinction between ‘innovation’ and ‘invention’.

To me, ‘innovation’ is something that not only includes some new idea or ideas, but also encompasses the deployment and adoption of such.  You can invent clever new things, but if you don’t figure out how to get people to use those new things, you have fallen short of the mark.

How essential has innovation been in your career to date; and how important do you envisage it being, going forward?

It has been important.  A big part of my professional career was spent in a corporate research lab, where inventing new things was less of a challenge than getting these inventions ‘transferred’ to those parts of the corporation that had more capability in promoting their adoption and deployment.

That said, I have learned that ‘technology transfer’ is not always about taking concrete pieces of technology, software for example, and handing them over to someone else for productization.  Sometimes the transfer is more ‘insidious’ and involves influencing how people in your organisation – or outside your organisation – think and see the world.

I would claim that some of my early work on the Semantic Web absolutely fits this definition.  So writing, publishing and talking all constitute viable means.  Also, we should not forget that people need to be inspired.  You cannot just tell them what to do, instead, they have to want to do it.

What do you think are the main barriers to the success of innovation?

I am not kidding when I say that the absolute biggest obstacle is communication.  That is, we should learn to communicate our ideas better to be able to convince people and to inspire them.  I have much to learn in this area.

Who and what inspires you? Where do you look for inspiration?

I have no good or definite answer for that.  When I was younger I was really inspired by the Spanish aviation pioneer Juan de la Cierva whose simple yet radical idea about aircraft – the ‘autogiro’ – paved the way for the adoption of helicopters.  And yet, one might argue that, in many ways helicopters are a far more complicated and complex technology than de la Cierva’s original invention.

Juan de la Cierva y Codorníu, 1st Count of De La Cierva

I am inspired by simplicity… I strive to create and design things that are simple, or at least not any more complicated than necessary.

What are, in your view, the current emerging critical trends in Innovation and technology?

I like openness, things like open-source software as well as Open Access and sharing of data as part of the scientific process.  I am hoping we see a fundamental change in how research is done.  In many ways we have progressed to a point where many problems are so complex that they are beyond a single researcher’s or research group’s capacity and capability to tackle.

Also, on the topic of openness, I like some of the recent developments in open government, e-Government, and such.

And what are some of the coolest mobile technologies you’re seeing launched? 

I am much enamoured with the idea that mobile technologies – particularly via the use of GPS, etc. – ‘ground’ many services to the physical world.  There are many uses for location information, uses that help me in my everyday life.

Furthermore, by making the mobile device better understand the current ‘context’, not only geographically but also by making use of other observations about the physical world (movement, sound, etc.), we can make applications and services better for users.

Do you think we will have a ‘meshed up’ world that effectively bypasses the stranglehold telcos have on infrastructure?

I don’t necessarily agree that the telcos have a ‘stranglehold’.   They provide an important service and a critical investment in an infrastructure I don’t really see us living without.

But we need things like ‘net neutrality’ to make sure that this infrastructure really serves people in an open and non-discriminatory way.  In this regard I am also concerned about more recent legislative attempts [SOPA, PIPA, ACTA] that (perhaps unintentionally) will hurt the overall technical function of the Internet.

It seems that current Web-based business models are founded on the idea that businesses have the right to record everything about users/consumers and profit from this information.  Do you think this is a sustainable business model, or do you think the user/consumer will start to think that they, and their data, are worth something and begin to demand recompense of some sort?

There are very few fundamentally different, viable, business models on the Web, so I can see that businesses would want to cash in on user data.  It is only a matter of time before the consumers ‘wise up’ and understand the value of their own data.  Personally I think we should aim at ‘business arrangements’ where all parties benefit.  This includes concrete benefits to the user, perhaps in a way where the user is a bona fide business partner rather than just someone we collect data about.

It is important to understand that what’s at stake here is not only how some user data could be monetized, it is also about users’ privacy.  Luckily I work for an organisation [Nokia] that takes consumer privacy very seriously.

You’ve got a fascinating history, and seem to have gotten into the Semantic Web at the very beginning.

The very, very beginning, yes.  I think I can argue that I’ve been doing this longer than the term has actually existed.

In ’96 I went to work at MIT…  I’d just been hired by Nokia, and they wanted to send somebody to MIT as a kind of visiting faculty member.   So, I worked in Tim Berners-Lee’s team, and one day he asked me what I thought was wrong with the web.

Tim Berners-Lee

Just a small question.

Yeah, not intimidating at all.

I said: “My hope has been to be able to build,” – what then would have been called agents, autonomous agents – and I said: “I can’t really do that because the web was built for humans and human consumption.  I would really, really like to see a web that was more amenable for consumption by automated systems.”

And he [Berners-Lee] said: “Yeah, that’s it! Now, how do we fix that?”

And I went: “Well, how about we try knowledge representation and apply that to web technologies.”  Because knowledge representation is a branch of artificial intelligence that has a long history of taking information and representing it in such a way that you can reason about it and then draw conclusions from it… things like that.  We agreed that I would look into that, and that’s really how I got into all this.

Of course I had worked on various projects before that, that involved ontologies and knowledge representation, it just wasn’t done on the web.   The big reason being that the web had not really been invented yet.

There was Cyc and some other AI [Artificial Intelligence] things before that… 

Cyc is a very good example of an attempt to build a very large ontology that would encompass common sense knowledge.  But there are many examples of systems that used ontologies in one way or another for narrower domains.  Cyc was an overly ambitious project, in the sense that they really wanted to cover a lot of ground in terms of human knowledge.

I had worked on several projects in the past that applied ontologies to things like planning industrial production, or planning logistics.  So, the question really was, could you build a model of the world that was rich enough and precise enough that a system could use that knowledge to create plans for various things?  In my case those were plans for either how to run industrial production, or large fleets of logistics resources.

You were a long, long way in front of everybody else… at least ten years.  It’s incredible!

One might argue too far ahead.

I think at that time most people were just trying to come to grips with basic HTTP and web servers.  If you look at the vested interests, especially of software providers at that time… I guess it wasn’t really the right timing. But I think that time is coming now.

Yeah, I think we’re in a better position now and we’ve certainly seen a lot of adoption of Semantic Web technologies in the last few years.

I think elements of the Semantic Web are brilliant.   RDF, for example, is one of the smartest ways I’ve ever seen of describing something.  You can’t break the way semantics talks about something, whereas you can break the interpretation easily in XML.
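
[Ed. Note: for readers who haven’t met RDF, here is a minimal sketch – using Python’s rdflib library and a made-up example.org vocabulary – of the triple model Michael is praising. Every RDF statement is a subject–predicate–object triple, whichever serialization it travels in.]

```python
# A minimal RDF sketch (illustrative only; the example.org names are made up).
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/")  # hypothetical vocabulary

g = Graph()
# One statement: subject, predicate, object -- the shape RDF never breaks.
g.add((URIRef("http://example.org/ora"), EX.worksOn, Literal("Semantic Web")))

# The same triple can be serialized as Turtle, RDF/XML or N-Triples;
# the serialization changes, but the statement's structure does not.
print(g.serialize(format="turtle"))
```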

I start to lose traction with it when it gets towards ontologies.  Do you think that ‘splitting the message’ would help with adoption?  For instance, you can use ontologies, but there is also a part of semantics which is brilliant for just doing ‘business as usual’?

I think there is a fairly broad spectrum of possible ways of making use of this technology.  I’m sure you’ve seen diagrams of the so-called layer cake, with the different technologies layered on top of one another.

A Semantic Web Stack (layer cake) [image created by Tim Berners-Lee]

I think that it’s up to you to decide how far up that layered structure you want to go.  There are a lot of applications where very simple use of just some of the most basic technologies will give you a lot of benefit.  And then there are other problems where you may actually want to separate a lot of the understanding of your domain from your actual executing code…  for those kinds of things, encapsulating that knowledge in the form of a potentially very complex ontology may be a good way to go.
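
[Ed. Note: to make Ora’s point concrete, here is a small self-contained sketch – ours, not Ora’s – of staying at the bottom of that layer cake: plain RDF data plus a SPARQL query, with no ontology at all, again using Python’s rdflib and an invented example.org vocabulary.]

```python
# Basic-layer usage only: parse a few triples, then query them with SPARQL.
from rdflib import Graph

g = Graph()
g.parse(data="""
    @prefix ex: <http://example.org/> .   # invented vocabulary
    ex:ora   ex:worksOn "Semantic Web" .
    ex:grady ex:worksOn "Architecture" .
""", format="turtle")

# No OWL, no reasoner -- just pattern matching over triples.
query = 'SELECT ?who WHERE { ?who <http://example.org/worksOn> "Semantic Web" }'
for row in g.query(query):
    print(row.who)  # -> http://example.org/ora
```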

My issue with ontologies is exactly the same issue I have with the current enterprise software providers… If you talk about mass adoption, as opposed to just specific domain adoption, for every extra entity – be it a class or data table – you decrease your adoption exponentially.   And, once you go up to higher levels, you shouldn’t assume you’re the only person that has a valid way of looking at the world, though you may be using the same data.  I think we’re saying the same thing…

Absolutely.  The interesting thing to say about the current enterprise software providers, I think, is that they have one model of the way to look at the world.   There are cases where companies have had to change the way they do business in order to adopt the enterprise software [currently available].

You have two choices: you either do it their way or else you spend a few million bucks and you do it their way anyhow.

I think that there is a possibility, with these Semantic Web technologies, of getting into more flexible uses of information and I kind of like that idea.

Over the last few years I’ve become increasingly interested in our ability to share information.  When you start talking about sharing it becomes really dangerous to have very complex, strictly defined semantics.  Because, like you said, other people might have a different interpretation of things.

But you want to nail some things down.  Understanding something about [the] information would give you a baseline for interoperating.  And then, you could do ‘better’ interoperation if you had a better definition of the meaning of the information.

I agree with you about understanding information.  But I think where most things fall to pieces – and this is also looking at business model languages and stuff – is that as soon as you get anywhere near processes with that information, it goes to hell pretty quickly.

Exactly.  I spent a few years, at the beginning of the previous decade, working on a large Semantic Web research program funded by DARPA [Defense Advanced Research Projects Agency].  I was part of an effort to see if we could use ontological technologies to model web services.

Is that DAML and stuff like that?

Exactly; DAML, and DAML-S for services.  We very quickly got into process modeling; and those kinds of things get very difficult…

Very quickly.

Absolutely.  I think that’s the thing that still needs work.

The traditional approach to anything process-oriented just doesn’t work unless you have very tight coupling and a very controlled domain.  But I think there are a lot of different ways of trying to solve the same problem without having to get to that level.

I think that one of the things that is missing from the whole Semantic Web collection of specifications is this notion of action… a notion of behaviour.  It’s hard to model, but I think that we ought to work on that some more.

We [KimmiC/FlatWorld] have taken a more hybrid approach, so we use things like REST architecture, and a lot of stuff from the business world, in terms of authentication and authorisation. 

Sure.  I’m not in any way advocating the use of the WS-* collection of technologies. I’m not a big fan of those.

I’ve looked at all the SOAP stuff and there are a lot of problems… like business process deployment.  It is a nightmare to deploy these technologies.  It’s even more of a nightmare to load balance them.

Right.

Essentially, if you’re looking for dynamic relationships – be it in business or whatever – they’re just useless for that sort of thing.  They’re always designed around having control of a large domain space; this is especially true when it comes to deployment of applications.  I just think they’ve missed the point. 

I think the web is the best example of a redundant, massively-distributed application; and we need to look at it more as, “That’s the model,” and we have to work with it.
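
[Ed. Note: a minimal sketch of the REST style Michael is contrasting with WS-*, using Python’s requests package and a hypothetical endpoint. Each request names a resource by URI and carries everything the server needs, which is what lets any replica behind a load balancer answer it.]

```python
# A single stateless REST request (the endpoint URL is hypothetical).
import requests

resp = requests.get(
    "https://api.example.org/shipments/42",   # resource identified by URI
    headers={"Accept": "application/json"},   # content negotiation
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # no session state: any server replica could have answered
```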

Absolutely.  I think that for 20 years there have been discussions about these sorts of ad hoc enterprises, or collections of smaller companies, being able to very quickly orchestrate themselves around a particular mission [purpose].  But I think that these technologies, just like you said, are probably not the right answer.

When you wrote your 2009 position paper you noted that, rather than languages, the biggest issues or problems facing the uptake of the Semantic Web were: 1. Selling the idea; and 2. A decent user interface.

Why did you feel that was the case then; and, has your opinion changed regarding these issues in the two+ years since you wrote your paper? 

Semantic Web technologies are well suited to situations where you cannot necessarily anticipate everything – say, about the conditions and context in which an application is used, or which kind of data an application might have available to it.  It is like saying that this is a technology for problems we are yet to articulate.  Sounds like a joke, but it isn’t, and the problem in ‘selling’ Semantic Web technologies is often about the fact that once a problem has been clearly articulated, there are many possible technologies that can be used to solve it.

The issue I have with user interfaces and the user experience is the following: Semantic Web technologies – or more generally, ‘ontological’ technologies – give us a way to represent information in a very expressive manner… that is, we can have rich models and representations of the world.  I feel that user interface technology has a hard time matching this expressiveness.  This issue is related to what I said earlier about not being able to anticipate all future situations; writing software that can handle unanticipated situations is hard.

All that said, I don’t like the term ‘Semantic Web applications’.  Users shouldn’t have to care, or need to know, that Semantic Web technologies were used.  These are just useful things in our toolbox when developing applications and services.

What are the key challenges that have to be solved to bring those two problems together?

I am really looking for new programming models and ways to add flexibility.  This is not only a technical problem, we also need to change how people think about software and application development.  I have no silver bullets here.

How do you see applications developing in the next few years – compared to the current environment – given, as you have mentioned, that we have to shift our minds from an application that ‘owns and controls’ its own data to one that simply interacts with data?

I think, again, this is about changing how people think about application development.  And, more specifically, I would like to see a shift towards data that carries with it some definition of its semantics.

This was one of the key ideas of the Semantic Web, that you could take some data, and if you did not understand it, there would be ‘clues’ in the data itself as to where to go to find what that data means.

As I see it, the semantics of some piece of data either come from the relationship this data has with other data – including some declarative, ‘machine-interpretable’ definition of this data, for example, an ontology – or are ‘hard-wired’ in the software that processes the data.  In my mind, the less we have the latter, and the more we have the former, the better.
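
[Ed. Note: a sketch of those ‘clues in the data’, again with Python’s rdflib. The FOAF vocabulary has long been published as RDF at its namespace URI, so a program with no hard-wired notion of foaf:name can dereference the predicate’s URI and fetch a machine-readable definition – assuming that vocabulary document is still being served there.]

```python
# "Follow your nose": the predicate URI doubles as a pointer to its definition.
from rdflib import Graph, URIRef
from rdflib.namespace import RDFS

g = Graph()
g.parse(data="""
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    <http://example.org/ora> foaf:name "Ora Lassila" .
""", format="turtle")

# Our code has no built-in notion of foaf:name, but the URI says where to
# look: fetch the FOAF vocabulary document over the network.
g.parse("http://xmlns.com/foaf/0.1/")

name_prop = URIRef("http://xmlns.com/foaf/0.1/name")
print(g.value(name_prop, RDFS.label))  # the vocabulary's own label, e.g. "name"
```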

In previous interviews you’ve noted that you feel users should have a say “in how they view information.”  Do you think that users should become involved in making the Semantic Web more ‘usable’? And if so, how?

I think users should demand more.  There needs to be a clear ‘market need’ for more flexible ways of interacting with information.  User experience is a challenge.

On this topic, I also want to point out how unhappy I am with the modern notion of an ‘app’.  Many apps I have seen tend to merely encapsulate information that would be much better offered through the Web, allowing inter-linking of different content, etc. It kind of goes with what I said earlier about openness…

There’s a lot of guys saying they can plug two systems together easily, but it almost always means at the data level.   It doesn’t really work once you start applying context on top of it.

I’d like to see a middle ground where we have partial interoperability between systems, because that’s how humans interact.

That’s something we’re looking at as well.  I view it like this: when I go through Europe, I can speak a little bit of German, a little bit of French. I’m not very good, but I have to have a minimal level of semantic understanding to get what I want: to get a beer.  I don’t have to understand the language completely, just enough, in context, to act on it.

Speaking of acting on things… Ora, where are you going with semantics in the future?

That’s a good question. Right now I’m working on some problems of big data analytics.

With semantics?

Nokia is investing in large-scale analytics, so I’m in the middle of that right now.

I’m currently looking at how to tackle the problem of how to bootstrap behaviour.  Behaviour and notions of action are not well-tackled in the space of the Semantic Web, and I’d really like to get into bringing two information systems in contact with one another, and have them figure out how to interoperate.

That’s very ambitious.

Right.  And I’m not entirely sure if people understand that that’s an important question to tackle.

Oh, it’s an important question to tackle; it’s just more a question of… Again, you’re very far ahead of the game.

Well, I think that today, if you want to make systems A and B interoperate, it’s usually a large engineering undertaking.  So, it’s directly related to the question of separating information from applications…  you could pick the applications you like and take the information that you’re interested in and make something happen.  In terms of interoperating systems, right now we have a situation where we either have full interoperability, or we have nothing… we have no middle ground.

You can learn more about Ora via his website, blog and  Twitter feed.

[Kim, Michael and Ora Skyped from their homes in Boston and Sydney.]

[This interview has been translated into the Serbo-Croatian language by Jovana Milutinovich of Webhostinggeeks.com]

Growing the Culture of Disruption: A chat with Linda Bernardi, a Most Personable Provocateur

Linda Bernardi, author of ‘Provoke: Why the Global Culture of Disruption is the Only Hope for Innovation’, is undoubtedly one of the most personable provocateurs I’ve ever had the pleasure of speaking with.  The fact that she is as inspiring as she is interesting is a bonus.  Once I read her insightful and thought-provoking book, published in November of 2011, I knew I wanted her to launch the 2012 Season of the Innovation Interview Series.

Linda wears a wide variety of hats; she is: CEO of StraTerra Partners, a technology strategy consulting company focussed on new tech adoption; an early-stage technology Angel Investor in the US, Europe and India; and a board member for several commercial and not-for-profit organizations.  Her work with the Bernardi Leadership Institute sees her training in large enterprises and academia as well as engaging entrepreneurs internationally in Innovation Based Leadership™.  If that weren’t enough, ConnecTerra, the company she founded in 2001, provides RFID tech to large enterprise IT.  All of this underlines that Linda knows what she’s talking about when it comes to ‘Capital I’ Innovation – and yet, as engaging as all of that is, none of it is why I was so determined to interview her for this series.

The fact is, with all those feathers in her cap, Linda now also wears the hat of an author, and it is for that reason – once I had read Provoke – that I sought her out.

Linda Bernardi: Capital I Interview Series – Number 10

Throughout my reading of ‘Provoke‘, I found myself talking out loud and having a dialogue with the book, “Yeah, that’s right!”… “I know!” … “I’ve thought that for years!”  But, to Linda’s credit, I also learned a great deal, and found myself rethinking certain ‘givens’, which perhaps aren’t given any longer.  Credit where credit is due – I recommend Provoke to anyone interested in moving the economy, especially the economy of Innovation, forward.

Throughout this interview you will find ‘snippets’ from the book.  I hope they inspire you to purchase a copy and dive into your place in the ‘Culture of Disruption’ [CofD] that Linda opens to her readers.

“You are already part of the Culture of Disruption. Just by reading this book, you’ve become a disruptive force.”

Congratulations on writing such an engaging and insightful book, Linda.  Provoke prompts readers to ask themselves, “What can I do to become part of, enhance, enrich and ensure a successful Culture of Disruption,” be that in their school or business, and regardless of whether that business is a startup or an entrenched, global corporation. You’ve defined the Culture of Disruption as:

“…the culture that invites and nurtures ideas and ways of thinking that continually disrupt conventional wisdom and legacy models.  A CofD needs to be part of any organization looking to innovate.”

Added to that, you have made clear that change is inevitably uncomfortable, at least initially, but Innovation is the responsibility of everyone involved in the ecosystem.  This ecosystem encompasses entrepreneurs and employees, investors and Boards of Directors, even academia and the media and, perhaps most importantly, Consumers – who are the market.  As you see it, working together – collaborating – they can create an unstoppable Culture of Disruption.

You refer to Collaboration a great deal in Provoke.  Why is collaboration so important in the Culture of Disruption?

Collaboration has very broad ramifications.  Part of what I hope to do with Provoke is explain the different constituents in the ecosystem… and illustrate how things are changing.  As things get more democratised and open, by nature they become more collaborative, and this includes decision making… even very fundamental decisions such as strategic acquisitions, product directions and market plans.

The process of making these decisions will become much more collaborative within companies.  It will also become more collaborative with the consumer – the market component, because the market now has an immediate voice regarding anything that company does. Decisions that used to be non-collaborative, where a company produced something for the market and the market had to take it or leave it, are now commented upon and can be broken, or not, based on the input of the market. Social media enables bi-directional communication.

These are forces that we never had in the past.  Decision making, development, and communication used to be uni-directional; they’re becoming very bi-directional and collaborative.  For the first time we’re embracing intellectual development on all levels and planning strategic development at a collaborative, global level.  We’re respecting, or learning to respect, the power within individuals – whether they’re within a company or collaborating with the company – and the market.

In opening up to collaboration in such a social way, things are moving very quickly.  Do you see the CofD as evolutionary or revolutionary?

Parts of it are evolutionary, because it would be impossible to say that everybody has to stop what they’re doing and completely change tracks.  If it’s a big company serving tens of millions of consumers, it’s inconceivable that they’d immediately stop what they’re doing, abandon the past, and develop anew. On the other hand, certain Cultures of Disruption can be revolutionary, because they don’t have a legacy burden or have to service a huge market.

“The 3 Is–Inspiration, Impact and Innovation in the CofD.”

To that extent, you see companies like LinkedIn, Facebook and Twitter, new generation companies that can evolve their business model immediately.  Because everything is very dynamic, smaller companies have the ability to be much more agile and evolve very rapidly.  Added to that, the bigger a company becomes, the less likely it is to reward risk and innovation.

Do you think that innovation is always risky?

Actually, I don’t think it is at all.  Innovation, fundamentally, is looking at something that doesn’t exist, or a new way of doing something… creating some new possibilities.  Innovation should be inherent in anything we’re doing.  [Unfortunately] people in bigger companies tend to think they’re not entrepreneurs.

When I give a lecture to a company where there may be 100,000 employees, a common [theme] that comes up is, “I’m just an employee; I’m not an entrepreneur.  This stuff does not apply to me.  I can’t bring about change.”

“… a formula for figuring out the odds that a given acquisition will succeed, based on five conditions that exist when the ball gets rolling: 1. Purpose; 2. Plan; 3. Personality; 4. Players; and 5. Panic.”

Well, anyone – and anything they do – can be innovative.  That is why companies hire them; why companies go to the best universities and hire the best people.  They bring them in because they want their talent.  Their talent means they have brilliant ways of solving problems.  That is innovation!

But something happens in their journey, and within a year or two, these same individuals get frustrated and leave.  It’s ironic because they go off and do the most fantastic things, and when you ask them, “Why didn’t you do it while you were at work?” they come back to their line of thinking: “I’m not supposed to innovate. I’m not entrepreneurial. I’m an employee.”

Provoke is trying to break that mould… to say that innovation CAN emanate from within.  One of the reasons that I wrote Provoke is to change the lethargic behaviour that we see in big corporations and conglomerates. Most of my clients have one-hundred-and-fifty to two-hundred thousand employees.

The Bernardi Leadership Institute

It is frightening when I enter these companies and see an attitude that is much more about: “I’m here to do a job and collect a pay check,” versus: “I’m here because I’m super-bright.  I’m here because I have enormous talent and enormous capacity.”  I think they’re only operating at anywhere between 10 and 15 percent of their intellectual capacity in these types of companies.

There are reasons for this.  Often there are a lot of barriers – which I talk about in Provoke –  that prohibit people from being innovative.  After I give a lecture people will approach me and start sharing their stories.  They inevitably revolve around not having a supportive manager, having a leadership that’s disconnected, a system that does not reward risk taking or innovative thinking, and failed systems of capturing innovation from within.

“Smart leaders should be looking at a managing method I call “the baton”: orchestrating diverse groups, allowing expression within the boundaries of an overall plan, and making work emotionally appealing and creative…”

Think about it: if you have one hundred thousand employees… if one percent of those people – just one thousand people – were to have one idea a year… that’s one thousand ideas per year!  Yet if you look at the amount of innovation that’s actually captured within companies, it’s maybe three to four ideas per year.

To me, as an ex-CEO, it’s as if these company leaders are willing to bypass their most incredible source of innovation.  Often they think, “Maybe I need to make an acquisition to do an innovation.”  A lot of times the Innovation acquisitions that are made could have been accomplished within companies, but employees within the company were never consulted.  Imagine their motivation level.  The level of inspiration drops proportionally as fewer people are involved in innovation.

We have to disrupt this model, cultivate and inspire talent, and bring creative thinking out.  Currently I believe this is incredibly dormant, both within the US as well as globally.  Instead, it’s the big company model that prevails, one in which, for some reason, expressiveness and innovation go unrewarded and are even discouraged.

You must meet, at least initially, a great deal of scepticism in these larger corporations.

Well, scepticism is a lot easier than innovation, isn’t it?  In fact, in the five stages of dealing with disruption, scepticism is one of the first stages.  When cloud computing came out, it was very easy for people to be sceptical: “It’s not going to work.  I’m not going to use it as my corporate enterprise system.  It’s going to fail.  Nobody is going to want it.  There are security issues.”  The list went on and on.

When Apple disrupted the music industry and brought out the iPod, it was an incredible revolution: it redefined the entire possibility landscape, and it redefined the distribution of music.  And then it came up with the iPhone and redefined telcos.

My big clients in Europe all said: “Well, a computer company that’s a music company… they are never going to make it as a phone company.  It’s not possible!  We’re a phone company!”

And when Apple introduced the iPad the sceptics said: “No one is going to walk around with an iPad, and an iPhone, and a MacBook.”  And what do you think are the three most prevalent devices at any meeting I go to?  Those exact three devices!  Then the sceptics said: “Nobody is going to abandon their iPhone just to get a new iPhone,” yet everybody does.

I’m so glad you brought up this question because there is so much scepticism and misunderstanding around innovation.  Innovation – or disruption – doesn’t mean just coming up with a flaky idea, going off and doing something new without considering what the ramifications are.

Innovation (whether it’s by a small, medium or large company or an individual) looks at the possibility of developing something that doesn’t exist, or expanding on something that exists, and disrupting the model.

“Here are a few tips:

  • Try something that seems crazy.
  • Solve a problem.
  • Observe everything.
  • Ignore the naysayers.
  • Have a strategic plan and execute it.
  • Expect more of yourself.”

Look at companies like Kodak, which I talk about in Provoke and which owned the entire photography space, and I think about the myriad ways that Kodak could have taken digital photography and owned that space… It could have been the hub, the platform for all digital photos.  Or look at Blackberry, which owned the business of communication smart devices…

I think there are very few companies that are willing to take the broad risk of blatantly innovating in the face of scepticism, while understanding the heightened level of gratification they have to give to the consumer.

Speaking of the consumer, do you see a widening or lessening of the generation gap – between those of a ‘certain’ generation and those part of what you deem Generation I (the Generation of Innovation)?

I’m delighted by how intelligent the consumer is.  Something magnificent is happening today, the like of which we’ve never seen before.  When the first personal computers came out, people who were over a certain age, who had never dealt with a computer, never learned.  There was a very distinct gap.

Somehow, in the last five to ten years, with the help of social media, that gap is being bridged to the point where grandparents are revelling in the use of Skype… they know how to use their iPhones.  They feel a part of it.  And that’s fantastic!  A nine-month-old can take an iPad and play a game.  Of course there is a broad range of technical capabilities, but the generational gap is becoming less and less relevant.  In my view, it’s because innovation is becoming more practical and end-user-oriented.

We’re developing things with the idea that the mass of people should be able to use them; things are becoming simpler to consume.  With computers in the past, the art lay in buying the computer, loading the operating system, figuring out what program to buy, going through the heroic task of installing it and figuring out how to use it.  Only a very small percentage of the population could actually do that.

It’s very different today with the ‘www.anything/anytime’ model, which allows anyone access to anything.  The art is the use rather than the technical prowess to be ‘able’ to use; and that has really diminished the generational gap.

Unfortunately I think a great many technologists miss the fact that, regardless of how ‘smart’ their technology is, if people aren’t using it, it just won’t matter.

Exactly.  If we ask the question, “What makes certain innovation distinct?” it is when you develop something that people use.  In Provoke I discuss each of the constituents within the ecosystem of disruption, which is the enabling body of the Culture of Disruption.  These include the leadership, the board, the investor and the employee.

I then talk about the market and its power, because it’s the market that is totally redefining advertising and marketing.  The market has a tremendous impact.  Look at what happened at Netflix within the span of a few days.  The company came up with a new business plan, people revolted, and the company called it back.  Imagine if we could have done that ten years ago with hybrid cars.

“DISRUPTION = INNOVATION = EXCELLENCE”

When Innovation becomes practical and usable, it redefines everything.  And the beauty of it is that, sooner or later, change is inevitable.  The sooner you embrace the disruption the better.  If you don’t believe in disruption, you’re fundamentally saying that you believe in nothing changing… companies that think like that are the companies that become extinct.

The business equivalent of the dodo bird.

Exactly.

That leads into my next question: is ego bad for innovation?  As a consumer I have a great amount of ego because I feel, more and more, that I can have an effect – especially if I group together with other consumers.  From my perspective as an innovator, I realise that if we (KimmiC) didn’t have ego, perhaps we wouldn’t be as audacious as we are in deciding that we can change the world.  

On the other hand, perhaps part of the reason that people and companies feel they don’t need to change is the ego they have invested in their current offerings.  

What are your thoughts?

It’s a very important question.  If we were to define ego, I think competence, belief, passion and drive are necessary attributes… they’re critical for any of us to do anything significant.  If you didn’t have those you wouldn’t be able to do what you believe you can do.  You wouldn’t have the passion or the drive to do it; or the fundamental belief, as an entrepreneur, that: “I know this is risky but I really believe in the fundamental outcome.  I’m going to do it and very little, if anything, is going to stop me.”

On the other hand there is the misplaced ego, which is ego by virtue of what you’ve been in the past, or what you think you are, or what you think you have to be.  That’s wrong because that completely stifles growth.  Those are the companies (or individuals) that are not innovating, because they believe that if they shatter that ego, everything will fall apart.

It’s necessary to know what you don’t know, to know that you have to learn, and to be eager to learn.  Unfortunately, in the latter group of people (or companies), there’s very little learning or change going on because of their belief: “I’m company X.  I’ve dominated this field.  I own it.  No one can be as good as me.”

At the same time there are some entrepreneurs that can be unfoundedly egotistical.  They believe they know all the answers just because they’ve been successful in the past.  These are the entrepreneurs that believe they’re going to take their social media company public and, suddenly, it’s going to be worth hundreds of billions of dollars.

Those that wear their Google Goggles proudly!

Yeah. Or because they’re 20 years old, they got their Ph.D. at MIT and somebody told them they’re super-bright.  They have expectations, at the age of 23, that they are going to be multi-millionaires on their yacht.

“The comfort and the recklessness born of wildly available capital are actually corrupting innovation. Obstacles and scarcity force us to be better, think creatively, and work harder.”

Silicon Valley, while it’s been enormous in bringing us tremendous innovation, is also the breeding ground for unfounded ego – in entrepreneurs, corporations and investors alike.  There are venture capitalists there that believe that, in some sense, they’re God!  One of the things that inspires me about Asia (and some of the other continents) is that they are where Silicon Valley was about 20 years ago.  There’s passion and hunger, but there’s also an incessant energy and drive.  It’s extremely difficult to balance that with unfounded ego.

Talking about Asia brings me back to thinking about scepticism.  I believe, and I think from reading Provoke and speaking to you that you probably have a similar view, that woe betide those who are sceptical of what is going on in Asia.

Right.  One thing that got me really worried, and started me thinking about writing Provoke, was a December 2010 CNN round table of six CEOs from major companies in Silicon Valley – we’ll leave them unnamed.  For an hour they spoke about the prowess of the U.S. versus the global market, and how the U.S. would never lose in the innovation game.

Clearly, they all have operations in various countries outside the U.S. and they definitely, as executives, travel there, but the answers they rendered really left me baffled.

“Among giant companies, only Apple and Google seem to have mastered the deliberate act of balancing a youthful, wildly innovative, disruptive organizational ethos with the iron discipline and market focus that produces win after win… What my students and clients want to know is how to capture that same spirit.”

In my role as an investor in companies across the globe, I get a unique opportunity to look at entrepreneurism in various countries at a fundamental level.  For instance, in India there were about 5,100 business plans submitted [to us] last year.  We’re expecting this year to top 7,000.  China, South America, Europe… Talent is everywhere. Genius is everywhere. So to think that it’s going to be in one place is a very dangerous game.

StraTerra Partners

Everything is available everywhere.  So, for companies and the leadership of those companies to sit there and say things like: “We are going to be the leader. Nobody can catch up with us,” really showed enormous blindness.

It could be posited that the current state of the U.S. economy is, at least partially, reflective of that blindness.

Absolutely.  Innovation is collaborative.  Right now, just within my client list, there are three to four million employees. That’s a lot of people.

Just within that group of people, if I can start the process of active thinking and processing, just awakening people to the power that they have, and get them to believe in it…  To believe that an entrepreneur is just somebody that thinks creatively; that they can be, and are, an entrepreneur in what they do.  If the system around them is not designed to listen to them, they should create a system that is.

If enough people think like that… well, how could the leadership of a company resist 50,000 people wanting to express themselves?  What are they going to do?  Lay them all off… because they’re thinking creatively?

“People need to feel valuable, creative, inspired.”

Disruption is simply another way to look at something.  Imagine if we’d never experimented… because experimentation by default means disruption.  We would never have had any discovery.  That’s the power of the ‘what if’ culture.  That’s what engineering and science are about: experimentation.  Disruption doesn’t mean disrupting the business, it just means opening up the business to new opportunities.

You mention in Provoke that you’d like an opportunity to rebuild the entire business school paradigm.  How would you change it?

I can actually give that question a very concrete answer.  I taught a number of classes last year at the University of Washington, in their MBA program.  It went so well that they’ve now offered me a lecturer position and, starting at the end of January, Provoke will be used as a textbook.

It was very interesting to go into a program that’s well-established.  It’s very methodical, like every other MBA program, and I taught it completely differently.  I wanted my audience to participate with me, I wanted the students to think.

I made them very uncomfortable because I told them I didn’t think they were thinking enough.  At one point I turned around and said I found it completely boring being with them.

I know they’re bright, but they were just sitting there waiting for me to teach them things.  I said, “But you know everything!  Let’s talk about how you can change things!”  What ensued were incredible business plans, and I thought: “Oh my God!  They’ve woken up.  Look at what they can do!”

“Professors should pursue corporate relationships, explore creative funding options and worry more about impact and less about tenure.”

This is the first step.  We’re going to incorporate it with formal teaching and, over the course of the year, I hope to expand it to other universities and other business programs.  I really think that business programs have to be completely disrupted and revamped for the new world that we’re entering.

In Provoke you mention your respect for (The Daily Show’s) Jon Stewart.  What is it about Jon that moved you to mention him in particular?

In addition to being a comedian, Jon played a very important role during a very difficult previous [Bush, Cheney] administration.  His was probably the most unbiased and candid voice talking about what was going on in the administration, in Congress, and in politics in general.

Jon Stewart

Part of the comedy passport that he has allows him to bring things to the foreground and talk about them in a way that audiences connect with.  He is able to make light of very complex things which, frankly, need to be made light of.

It’s ironic but, though he’s a comedian, he’s one of the cleanest sources of political news.  That frankness is also needed in discussions about business, Innovation and the Culture of Disruption.

He’s very much a provocateur, as are you.  Like Jon, you don’t seem to have any reticence in voicing your opinion, nor qualms about how you may be perceived in doing this.  That is not necessarily a position that a lot of women are comfortable taking.

I would say that, as we enter 2012, it’s really disturbing to me how few women do what I do.

When I look at technical conferences and go through the list of keynote speakers, there are no women.  We’re 50% of the workforce, yet we’re not there.  When I sit around an investment table, I don’t have women investors with me.  When I’m on a board of directors, I don’t have women with me.  It’s disturbing to me that, over the course of decades, instead of this becoming a non-issue, it remains a real issue: women are still uncomfortable taking centre stage.

I’d like to have a lot more women doing what you’re doing – asking tough questions, putting themselves on the map.  But generally women lack a desire for risk.  Women dislike failure and want to play it safe – for a lot of historical reasons – yet women have enormous power.  I’m hoping that, as the world becomes more and more collaborative, more and more women will come to centre stage.

How will you measure the success of Provoke in the Culture of Disruption?

I really believe in the power of the people and I want Provoke to have a role in it.  So, if you ask me, “What would be a measure of success for me in a year?” it is how many people I might have touched with Provoke, and how I might have helped them change their thinking around the inevitability of disruption and their positive role in the Culture of Disruption.  To me, those would be incredible success factors.

“I’m a free thinker, not a Kool-Aid drinker”

If I can just provoke people to think differently, the mathematical combination of possibilities grows infinitely.  I’m very pleased to see companies – some as large as federal agencies – saying: “You know what?  We need to bring in innovative thinking.”  In fact, they’re replacing my “Culture of Disruption” with “Culture of Innovation.”  They’re saying: “You need to help us create our Culture of Innovation.”  Right there, suddenly there’s a positive translation of my dream.  The “Culture of Disruption” has now been translated into the “Culture of Innovation” or, as you call it, the “Capital I”… which is huge!

Linda is offering a free gift – the first chapter of the Provoke eBook, along with a Culture of Disruption membership card – to any readers of this Innovation Interview who sign up for her monthly ‘Innovation Excellence’ newsletter by clicking this link!

You can find out more about Linda, Provoke and the Culture of Disruption on her website and blog.  You can also connect with Linda on Twitter, Facebook and LinkedIn.

[Kim and Linda Skyped from their homes in Sydney and Seattle.]