Category Archives: Technology

Golden Nuggets: The ‘Mini Inny’* Innovation Interview with Howard Rheingold

The life and times of Howard Rheingold encapsulate so much of our technological, online world that at times it seems simpler to list what he hasn’t done rather than attempt to catalogue all that he has. That said, I shall attempt to corral a part of his wild ride into a few sentences.

To say that Howard is a freelance journalist is like saying that Steve Jobs sold computers.  Yes, he is, but that goes nowhere near capturing his influence in defining social media and virtual communities, which he began detailing in his 1985 work Tools for Thought: The History and Future of Mind-Expanding Technology (revised in 2000) and 1993’s The Virtual Community: Homesteading on the Electronic Frontier.

Howard’s influence and enthusiasm have not slackened, and his writing continues to interest and inform.  This writer, teacher, artist and critic was one of the first to understand and explain the potential of an online and engaged community, through his use of San Francisco’s Whole Earth ‘Lectronic Link (The WELL) from the mid-1980s.   [The WELL is one of the oldest continuously running online communities.  Founded by Larry Brilliant and Stewart Brand (of Whole Earth Catalog fame), The WELL is currently owned by Salon.com.  An early site for the merging of online and counterculture, The WELL was a well-known meeting place for fans of the Grateful Dead – Deadheads – in the 1980s and 1990s.]

Howard has continued to be a thought leader on technology, mobile tech in particular, and the effects it will have on society and individuals.  Net Smart: How to Thrive Online is Howard’s most recent publication [2012].  Within its pages, he shares the answers he has gleaned to questions he has been asking for many years: “how to use social media intelligently, humanely, and above all mindfully.”

Howard Rheingold: Capital I Interview Series – Number 16 (A ‘Mini Inny’ Interview)

Among many things, ‘Net Smart: How to Thrive Online’ looks to enable and empower us to manage our social media, rather than allow it to manage us.  How do you manage your social media?

I manage my attention. At the beginning of each day, I write down in a few words my two or three goals for the day — what I want or need to get accomplished. I put the piece of paper on my desk at the periphery of my vision. When my eye catches sight of the paper, later in the day, I ask myself where my attention is pointing and whether the way I am deploying my attention at the moment is helping me achieve my own goals.

At first, the exercise is nothing more than that — training myself to ask myself what I am paying attention to and comparing it to what I have decided I need to pay attention to. Repetition of this process grows a new habit of being self-aware of how I am using online media — mindfulness or metacognition. The best news about information-attention (infotention) training is that any amount of self-awareness of your media practices at all is far better than no awareness of how media are dragging your mind from place to place.

Will our learning to manage our digital presence enable Innovation in digital technology?  And if so, how do you see this evolving?

Doug Engelbart, who invented much of personal computing and digital networks, wrote in his 1962 paper “Augmenting Human Intellect,” that humans are self-reprogramming, self-amplifying innovators through our use of “artifacts, methodology, and training.”

The artifacts (personal computers, communication media) have evolved multi-billion-fold since Engelbart’s time, but the language, methodology, and training — the literacy of using these media by billions of people — is evolving more slowly.

Doug Engelbart (image courtesy of New Media Consortium - nmc.org)

With virtual communities, smart mobs, and collective intelligence, we’re seeing the beginning of what people are learning to do with our new technologies. Most important is the lowering of barriers to collective action of all kinds: people will be able to organize and act together with others socially, politically, economically in ways, on scales, and in places never before possible.

Will true mastery of our digital presence require an Innovation of our neural networks, be it within the context of dreams, meditation, and/or awareness?

Dreams and meditation ARE forms of awareness. Knowing how to read and write is a highly trained synchronization of different cognitive capabilities. A similar highly trained synchronization of human minds, media, and social objectives requires a more widespread and sophisticated individual awareness of how minds and media interact. From that awareness, innovations will emerge, just as they did after the literacies of writing, the alphabet, and print spread.

You can learn more about Howard via his website, Tweets and the Rheingold U site.  You can download an introduction to Net Smart here, and purchase a copy on Amazon.

*Mini Innys (mini interviews) are bite-sized interview-lettes.

Reaching Beyond the Sky: Talking Innovation and Tablets with Suneet Tuli

As an advocate for technology users and affordable, Innovative technology for all, I was extremely excited to discover, in Suneet Tuli, a passion for providing simple internet technology to the billions of people currently unable to afford it.  Suneet is President and CEO of DataWind, which launched the Aakash low-cost tablet computer in October 2011 in New Delhi.

Dubbed the world’s least expensive tablet, and designed to be provided free of charge to Indian university students, the Aakash was developed to link India’s many thousands of colleges and universities in an innovative e-learning program.  I spoke with Suneet just prior to the launch of UbiSlate, the commercial version of their tablet technology.

Suneet Tuli: Capital I Interview Series – Number 15

Firstly, congratulations Suneet.  I think the Aakash initiative is wonderful.

Thank you, we’re having some fun, and it’s an interesting kind of ride.  The requirements, demand and need are huge, and we think that we – though not just us, but also others who play in the field – are going to end up changing the world for the better.

I wish you much success with that goal!  That certainly was our impetus for starting KimmiC.   I’m the first to admit that it does sound audacious, but I think that audacity is quite necessary when you take on challenges like this.

It really is.   When I talked about it six months ago and said, “this is technology to bring the next billion people on board,” I was seen as being audacious, or ‘reaching a little bit’.   Today I’m seen a little differently.  It [Aakash] is endorsed by many others now, such as Carl Bildt, the former Prime Minister of Sweden.

At the World Economic Forum in Davos [where Suneet spoke in October 2011] they had a huge billboard of our device and a discussion forum on the impact of it on the world.  I truly believe that not only is ‘internet for the billions’ coming, but it really is going to change the world.

This is a project that started in India, and now there are countries around the world that are implementing similar projects.  In India alone they have put together a Mission Statement saying that they’d like to equip all 220 million students in the country with low cost internet devices.

Apart from India, if you look at the countries that have invited us to talk, and that are looking to put together similar projects, that’s over 100 million units.  That’s not to say that we would necessarily win all of the projects, but we think that it’s the dream that’s important.  That, and the fact that governments are implementing this aggressively, will have a snowball effect.

When you say ‘we’, I take it you are referencing your brother Raja and yourself, is that correct?

Yes, this is the third venture he and I have done together.   In each one Raja runs the technology, R&D and manufacturing teams and I look after the sales, marketing and operations.

Raja Singh Tuli

What was your impetus for beginning the project?

We had created a technology that reduces bandwidth consumption, enabling consumption of the internet on standard GPRS mobile networks.  Today there are six billion mobile phone connections and two billion internet users – the billions of people within that gap don’t have access to any kind of broadband infrastructure; the only access they have is via standard GPRS mobile networks.

We created a technology, for which we received 18 US patents, that allows us to deliver the internet on those [GPRS] networks.   We can deliver this network with no new infrastructure and for a very low monthly cost – potentially even for free.  This technology is really applicable to our market segment, and gives us the opportunity to pursue our personal mission.

Even though we grew up in Canada, coming from India and seeing what is happening in those markets, we know the difference between the ‘haves’ and ‘have nots’ is the digital divide and the quality of education.  Our belief is that the best way to power a better quality of education is through computers and the internet.

Places like India and Africa will not, in a reasonable period of time, be able to train enough teachers and professors, and build enough ‘brick and mortar’ universities, to impact enough of the population.  For instance, there are 350 million people in India who cannot read or write.  It’s an outrageously large number and yet it’s so easily solved – at least a dent can be made in it through technology.

Technology has changed so significantly, not just in the last five or ten years, but in the last two years.  Look at products like the iPad.  This is a product that comes without a User Manual.  You take it out of the box and are expected to know that the only button you’re going to press is the one that’s on the device.

Three- and four-year-olds today utilise these kinds of products and play simple games.  Touch screens and graphical interfaces are so powerful, we think that they can really make an impact on delivering a better quality of education.  Based on the technology we’ve created and our personal goals, that better quality of education is something that we can help achieve.

For a number of years I’ve been involved with a charity called Room to Read which builds schools and libraries in the developing world.  The main reason I got involved with it is I truly believe that the best way to create a safer world is to educate people, and empower education and educators. 

I agree with you.  I think that education really can solve all kinds of issues.  I get people who criticise this idea, saying, “Yeah, but in India where so many don’t have access to clean drinking water, isn’t this [focus on education] a waste?”  My response is:  THIS is what is going to help bring clean drinking water.

Education is what is going to enable and empower an individual to bring clean drinking water to his or her village or community.  And, why not other world changing Innovation as well?!

Oh yeah.   Any issue that you can think of, I can trace back to education.

It seems that, along with education, your product is about empowering the user, which I believe is integral to the next jump in technology. 

And it involves the whole ecosystem; for instance, it’s not just the device, you’ve also got to have anytime/anywhere internet access.  This is why mobile phone (cellular) connectivity is very important.  As well, the technology has got to be affordable and it’s got to have an open ecosystem for content and apps.

We’ve launched scholarships and competitions in India around content and apps to help encourage students, and others, to create apps and even start their own entrepreneurial journey.  This creates localised innovation.

In every single country we’re working in, we’re pushing for domestic manufacture, because you can’t expect to solve problems if you don’t manufacture locally.  You can’t expect to understand what the problems and solutions are, and drive local innovation, if you’re just going to get cartons full of boxes from somewhere else.  Sitting in Canada and the US we see it – the manufacturing industries here have been devastated, and the skill-set is no longer here.

You mentioned apps and, in one of the pieces I read while doing my research for our chat, I understood that users were not able to load free software onto the tablet.  Was that a correct interpretation?

That’s a common misunderstanding, but one that is not correct at all.  What we’ve done is, instead of using the Google/Android marketplace, we’ve used Getjar.  We chose Getjar because it forces all the apps to be free, with developers making money purely off advertising.

This is essential in India since, for instance, on the Android market, while 80% of the apps are free, 20% of the useful apps are actually paid [for].  Even though it’s only 99 cents, the problem for my customer is that they have no ability to make online payments.

Since it’s an open source operating system, we don’t restrict anyone from installing any apps.  However, we obviously pre-burn in certain apps, from which we generate advertising revenue.  This is important to our full-service ecosystem of revenue streams, which helps drive the cost of hardware down.  But it doesn’t mean you can’t install apps.

So you’re not trying to control the economic and application ecosystem. 

We don’t control it, but we want to earn revenue from it – those are two different concepts.  It’s an open source platform so you can install whatever you want, but we will have five stores on the site – we will have eBook, multimedia, game, apps and educational content stores.  You can go to Getjar and independently load your apps, but we’re going to encourage certain apps and certain environments, which we think are important for our customer base, as we’re positioning the product towards education.  We understand a lot of our devices will end up there, so we need to have an educational app store that can promote educational content.

Building an ecosystem doesn’t mean that we’re restricting open access to it.  We will have a monetary and strategic interest in the apps we promote because they’re in line with how we want our product to be perceived.

I downloaded some slides from a presentation you made last year, and in them you mention your carrier class technology.  Does this tech essentially create and control distribution and interaction with the tablet?

There are two browsers on the device.  One is the standard Android browser, but the difficulty and problem with it is its data consumption.  In the Indian environment this will result in an average of 400–500 rupees per month (about $10 per month) in data costs.  That is one problem; the second is the slow experience due to how congested the networks are.

On the other side is our browser, which uses our backend proxy acceleration system.  On that system we’re able to deliver the equivalent of unlimited internet access for about $2 [per month], and it’s significantly faster than what you’d get without it.  The user has a choice of using either one of those solutions, but we believe they’ll choose ours because of the speed and lower data consumption.

And if they choose to use yours, it’s your servers that do the actual ‘grunt work’, therefore saving energy – the energy is consumed by the server rather than the device.

Right.  The result is that you consume a lot less bandwidth, the costs go down, and it’s faster.  We shift the burden away from the client device onto our servers but, again, it’s their choice which browser they use.
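[An aside: DataWind’s acceleration system is proprietary and patented, so the sketch below is illustrative only – the function names, and the choice of plain zlib compression, are my assumptions, not DataWind’s actual implementation.  But the core idea of a backend proxy doing the heavy lifting before data crosses a narrow GPRS link can be shown in a few lines:]

```python
import zlib

def proxy_side(page_html: str) -> bytes:
    """Runs on the backend server: fetch and render the page, then
    squeeze it down before it crosses the narrow GPRS link."""
    return zlib.compress(page_html.encode("utf-8"), level=9)

def device_side(payload: bytes) -> str:
    """Runs on the tablet's thin browser: unpack the payload locally."""
    return zlib.decompress(payload).decode("utf-8")

# A typical web page is full of markup repetition, which compresses well.
page = "<html>" + "<p>class notes, lecture 1</p>" * 500 + "</html>"
over_the_air = proxy_side(page)

assert device_side(over_the_air) == page   # nothing is lost in transit
print(len(page), "bytes reduced to", len(over_the_air), "on the air link")
```

In a real system the proxy would do far more than compress – re-rendering pages, stripping heavy assets, and caching – but even this toy version shows why the data bill, and the energy burden on the device, shrink when the server does the grunt work.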

Speaking of choice, why was the name Aakash chosen for the tablet?

The name was chosen by the Indian Human Resources Development Minister, Kapil Sibal, who has education as part of his portfolio.  Aakash means Sky in Hindi, and I believe he meant it in reference to the fact that he wants kids to reach for the sky.

The product that we will launch commercially will be called the UbiSlate, and the key differentiator between the two is mobile network connectivity.  The version the government ordered was built to their tight specifications, and has only wifi connectivity.  Their thinking was that, because they [the government] were providing access on their [college] campuses, that should be sufficient.

We believe that isn’t sufficient, and that you want access everywhere.  You need access beyond the campus, which you will have with the commercial UbiSlate.

So those articles written after that initial testing process of the Aakash, which had somewhat negative responses from the beta users, were judging the technology on a somewhat unfinished, or less than perfect, product.

I think that they were judging us on the specs that IIT-Rajasthan set.  We won a tender that they put out and built [the technology] to the specs that they wanted.  We’ve proposed a different spec product to the government, which now they’ve agreed to conceptually, for the Aakash 2.

The issues they ran into were about a lot more than just specs.  The National Mission on Education through ICT (NMEICT) has made a great deal of effort over the last few years to create a lot of great digital content – tens of thousands of eBooks, online lectures, virtual labs and things of that nature.  Unfortunately, for the purposes of the trial, that content wasn’t integrated into the devices.

The trial was conducted with college and university students in India whose tuition is higher than what we pay in Canada.  So, you know, when the first five hundred [students] walked in to receive their devices, two out of three of them had iPads under their arms.  Now you’re going to give these kids sub-$50 devices without their curriculum integrated onto it… and you’re going to ask their opinions on it…

The feedback we got [from the students] wasn’t a surprise: “it’s not as fast as playing games on the iPad; it’s not as cool as the iPad; the network connectivity is spotty, at best, using the university wifi, and it doesn’t have any connectivity beyond that.”  It was a learning curve for all parties.  But our role was to deliver the product that they [the government] required.

The focus for the government was cost, and we were able to deliver to them a cost breakthrough that literally made people’s jaws drop.

It’s not, as I said, the UbiSlate that we’re about to launch.  I believe our performance can, and will, be better judged when we launch that commercially.

UbiSlate

Do you think it fair that your tablets will be judged against products, such as the iPad, which have unlimited budgets and high prices?  And, noting that people often purchase Apple products due to the cachet of the brand, how valuable do you want your own brand to be?

I don’t want people to have to pay a premium because of a perceived brand.  I want to make our products viable for a person on a $100 per month salary.

While in the West we’ve become accustomed to product positioning where you’ll pay a premium for a brand, in our scenario we’re not looking to maximise price, we’re trying to maximise customers.  In our business model we focus away from hardware margins.  Hardware is the customer acquisition tool, and our intent is to drive hardware costs down as low as we can and, instead, try to generate revenue from network services costs and advertising.  We believe that has the potential to get those billions of people on board.

Have you plans for how you are going to differentiate your tablets from competitors, such as BSNL, who have recently come on the scene?

The big differentiator is the connectivity.   We think that the fatal flaw with a [BSNL] product of that nature is that it doesn’t have mobile connectivity.  And comparing [our] 2,500 rupees with [their] 3,200 rupees [price], not only is [theirs] almost 30% higher in cost, but it’s only wifi enabled.

If you look at the Indian environment there are 18 million broadband connections serving only those people that have wifi – and those 18 million are probably the wealthiest people in India.

And they probably have an iPad.

They have.  And they can afford wifi and products which are at a multiple of this price.  They’re not going to purchase these [low cost] products.  The question is: what connectivity does the guy who can afford only 2,500 rupees have?  The only connectivity he can afford is often a mobile network.  BSNL has launched three devices, and the only one with mobile connectivity is three times the price.

And finally, are you looking to partner/joint venture with anyone to broaden the capability of your tablets?

We are.  We have a number of deals done, which we’ll be announcing once we are ready to launch the product commercially.

[Kim and Suneet Skyped from their homes in Sydney and Toronto.]


Speaking to the Future: What Got Caught in the Safer Internet?

[I’ve recently been asked by several readers to share a piece I initially wrote, for young teen readers, to commemorate and celebrate Safer Internet Day 2012.  This piece was written with a view to instigating and enabling conversations between young people and adults, parents and children, about the problems and potential solutions surrounding internet safety.  Here is that piece, which is not part of the ‘Capital I’ Innovation Interview Series:]

When I was asked to write a piece about the future of internet safety, I realized that I am not generally one to give my opinion – on paper at least.  Generally, my job is to interview people and note their opinions.  With that in mind, I decided to interview the future me, the me of 2022, ten years hence, and hear her opinions about the then-current state of security on the net.

An interview with Kim Chandler McDonald, Executive Vice President and Co-Founder of KimmiC, futurist and hyper-technology expert: February 8, 2022 – Sydney, Australia

Kim Chandler McDonald (as she hopes to look in 2022)

What led you to become involved with Safer Internet Day?

I first became involved over ten years ago when I was asked to write a piece about my views on the future of internet safety, a subject I was, and still am very interested in.

Why the interest?  Surely now, after ten years, the internet is much safer.

Oh yes, certainly compared to 2012 the difference is quite striking – especially when it comes to personal data. When I first became involved with Safer Internet Day, the internet was a place with few ‘walls’ and almost no one was able to ‘lock the door’ to their data.

Very few people were aware that they owned their own data. Though the data wasn’t owned by social media sites, they did borrow it – often without permission – and they made money from it, either by selling the data itself, or by using it to sell us things.

That borrowing often led to random strangers being able to access information about us which they shouldn’t have been able to get to.

You make it sound a little like stealing.

I wouldn’t go that far, but… well, let’s just say that I’m very glad we now have the power, and the responsibility, to guard our ‘property’ – the place we live on the net – and the stuff we have there… our data.

How did that happen?

It started with IdentityTech authentication protocols.  Once authentication of parties involved in a communication stream became necessary, and individuals were able to control this process themselves – i.e. you decided who had permission to contact you, be it individuals or companies – the common ‘phishing’ communications (or spam) of the first 10-15 years of this century soon dried up.

It’s funny, because we now look back at that time, without permission based contact and authentication, as anarchy.

Was it really that bad?

In some ways, worse than bad.  Let’s look at it this way: IdentityTech gave us the power to protect ourselves and our property, so that strangers couldn’t get at it.  Let’s think of the internet like a house – your online house.  Can you imagine someone you don’t know wandering into your house and rummaging through your things?  Essentially, that’s what was happening on the internet.

IdentityTech gave you a lock and key to your online house.  Now strangers can’t barge into your house and start looking at your pictures and reading your diary.  Anyone who wants to do that has to have your permission.  It sounds like a small change, but it actually had a very big effect, on individuals and on some very large companies and industries as well.
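[An aside: ‘IdentityTech’ is, of course, the imagined technology of this piece, but the lock-and-key model it describes boils down to an allow-list check at the point of contact.  This toy sketch – all the names in it are invented for illustration – shows the idea:]

```python
class OnlineHouse:
    """A toy model of the lock-and-key idea: nothing gets in unless
    the owner has explicitly granted that sender permission."""

    def __init__(self):
        self.permitted = set()   # identities the owner has let in
        self.inbox = []

    def grant(self, sender_id: str) -> None:
        """The owner hands out a key: this sender may now make contact."""
        self.permitted.add(sender_id)

    def receive(self, sender_id: str, message: str) -> bool:
        # Unpermitted senders are dropped at the door - this is how
        # permission-based contact would dry up spam and phishing.
        if sender_id not in self.permitted:
            return False
        self.inbox.append((sender_id, message))
        return True

house = OnlineHouse()
house.grant("mum@example.net")
assert house.receive("mum@example.net", "dinner at 7?")
assert not house.receive("stranger@example.org", "cheap pills!!")
print("inbox:", house.inbox)
```

Real authentication would need cryptographic proof of identity rather than a trusted string, but the shape of the change is the same: contact becomes opt-in, with the owner holding the list.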

How did it change things for individuals?

I’m sure there are countless ways, but a few that come to mind are things like the reduction in online predators (people preying on the vulnerable or less experienced), cyber-bullying, identity theft, and the reduced proliferation of violent/hate sites.  All these things had a huge effect, not just for individuals, but for communities as well.

A safer internet seemed to spread out and be reflected in safer neighbourhoods, towns, cities and countries.  I think that’s part and parcel of us deciding to take more responsibility for what we allowed into our lives via the net.

You mentioned changes to companies and industries as a result of this IdentityTech; can you give me an example?

Well, let’s take social media as an example.  Certainly there was a time when social media companies would collect and use information about people.

You make it sound like something out of a spy novel.

That’s funny.  No, that’s not what I meant.  But, it is true that these companies took your data and used it to make money for themselves – they acted like they owned it.  I guess we, the public, didn’t know better at that time… and maybe we were a bit lazy too.  But this changed as the new digital economy matured.  That was already beginning to happen by mid-2012.

One of the consequences of the new digital economy, which IdentityTech enabled, was the realization by individuals – people like you and me – that our data is just that, OUR data.  It didn’t belong to anyone else, and it certainly couldn’t be used, or sold, by anyone else without our permission.

Once people realized that they owned their data, that it had value, and that they could have control over who, when and where this information was provided to other parties, things began to change rapidly.  Data was acknowledged to be a unit of the connected economy, and though it could be available 24/7, it had to be made available in a universally secure and non-proprietorial way – hyper tech enabled that.

But social media companies are still here, and some are still flourishing.

Of course they are, but now they have to share revenue from any profits they make from using our data.

Okay, I don’t get a personal cheque from them each month, but I am pleased that they have to deposit ‘our’ money into trusts, which have been set up to put money back into the public domain and pay for things like the free broadband connectivity which everyone enjoys today.

Zen and the Art of Software: The Innovation Interview with Grady Booch (Part 3)

In parts one and two of our chat with  software star Grady Booch, we discussed his magnum opus project  COMPUTING: The Human Experience, Innovation, the Computer History Museum and the possible changing brain structure of Millennials, among many other things.

In this, the final segment of our discussion with him, we look at software – and software architecture – in general, Grady’s relationship with it in particular, the troubles facing Google and Facebook, the web, and his views on the SOPA, PIPA and ACTA bills.

To view the full introduction to this multi-part interview with Grady: Click here

Grady Booch: Capital I Interview Series – Number 14

[This was a joint conversation between Grady, Michael and myself. I’ve italicised Michael’s questions to Grady so you are able to differentiate between us – though, I think it will become obvious as you read – lol!]

Grady, you are credited with the building, writing and architecting of so much technology; of all of those things, what is it that you are most proud of?

There are three things.  The first one is very personal.  My godson – he would have been eight or nine at the time –  was given a task at his school to write about a hero, and he wrote about me.  That was pretty cool!  Everything else is details, but I’m really proud of that one.

On a technical basis, I’m pleased with the creation of the UML, not just because of the thing we created, but because of the whole idea and industry around it – that being able to visualise and reason about software-intensive systems in this way is a good thing.  So, I think we lifted the tide for a lot of folks.

UML Diagrams

I contributed to the notion of architecture and looking at it from multiple views, and how to represent it.  I feel good about the whole thing around modelling and architecture and abstraction.  I think I helped people and I feel good about that.

UML was certainly a game changer.  I remember when it came in, before you got bought up by IBM.  It was like a wave going across the globe.  It made a profound difference.

And it’s different now because it’s part of the oxygen.  Not everybody is using it, that’s okay – not everybody is using C++ or Java and that’s fine – but I think we changed the way people think.

Our estimates are that UML has a penetration of somewhere around 15 to 20 percent of the marketplace.  That’s a nice number.  We’ve changed the way people build things.

Absolutely, especially at the big end of the market.

Yeah.  I wrote an article in my architecture column that tells the story of when I was dealing with my aneurysm.  I was lying in a CT scan machine in the Mayo Clinic, looking up and saying: “My gosh, I know the people who wrote the software for this, and they’ve used my methods.”  That’s a very humbling concept.

It’s pretty much a good acid test, isn’t it?

Yes, it is.

And your work is continuing in architecture…

Correct. I continue on with the handbook of software architecture, and a lot of what I do, in both the research side and with customers, is to help them in the transformation of their architectures.

For IBM, over the last nine months or so, I’ve been working with the Watson team – of Jeopardy! fame – and helping the teams that are commercialising their technology.

How do you take this two-million-line code base, built by 25 men and women, drop it in another domain and give it to a set of people who know nothing about that system?  This is exactly the kind of stuff that I do for customers, and I’ve been helping IBM in that regard.

That would be very challenging.  You’d need somebody with your brain power to actually manage that, I imagine.

Well, it’s not an issue of brain power; it’s an issue of how one looks at systems like this and reasons about them in a meaningful way.  Once the UML comes in – because it allows us to visualise the system – and with the whole notion of architecture as viewed from multiple dimensions, all these things come together.  That makes a two-million-line code base understandable, to the point where I know where the load-bearing walls are and I can manipulate them.

That is pretty impressive!  You’ve found a way of managing the slicing and dicing of the codebase.

That’s a problem that every organisation faces.  I have an article that talks about the challenges Facebook is going to have.  Because… every software-intensive system is a legacy system.  The moment I write a line of code, it becomes part of my legacy…

Especially if you’re successful upfront and get massive growth, like they did.

Yes, and having large piles of money in your revenue stream often masks the ills of your development process.

Absolutely.

Google’s faced that, Facebook is facing that.  They have about a million lines of [the programming language] PHP that drives the core Facebook.com system – which is really not a lot of code – still built on top of MySQL, and it’s grown and grown over time.

I think, as they split to develop across the coast – because they’re opening up a big office in New York City – that very activity changes the game.  No longer will all of the developers fit within one building, so the social dynamics change.

Inside Facebook's Madison Avenue Offices

Ultimately, what fascinates me about the whole architecture side of it is that it is a problem that lies on the cusp of technology and society.  It’s a technical problem on the one hand – so there are days I’ll show up as an uber geek – and on the other hand, it’s a problem that’s intensely social in nature, so I’ll show up as a ‘Doctor Phil’.

To follow up on one of Kim’s questions: if you look at the backlog of IT, I think every company of moderate size is still struggling to deliver on business demands. Do you think that architecture helps, or does it actually contribute to the problem?

Architecture can help in two ways.

I’ll give you one good example.  There is a company called OOCL (Orient Overseas Container Line) that I worked with some years ago to help them devise an architecture for their system that tracks containers and all these kind of things.  Their CEO had this brilliant notion: what would happen if we were to take this system and extract all of the domain-specific bits of it and then sell that platform?

By having a focused-upon architecture, they were able to devise a platform – this is a decade before Salesforce.com and these kinds of things – and they could then go into a completely new business and dominate that side of the marketplace.   Here is an example where a focused-upon architecture has a real, material, strategic business implication.

The other thing a focused-upon architecture offers is that it allows you to capture the institutional memory of a piece of software.  The code is the truth, but the code is not the whole truth.  So, insofar as we can retain the tribal memory of why things are the way they are, it helps you preserve the investment you made in building that software in the first place.

What sort of size company are you talking about?  It sounds like the telco space… large Tier 1 and  Tier 2 companies. 

It could be anybody that wants to dominate a particular business.  Salesforce.com built a platform in that space.  Look at AUTOSAR as another example.  AUTOSAR was an attempt by BMW, and others, to define a common architectural platform, hardware and software, for in-car electronics.  By virtue of having that focused-upon architecture, all of a sudden you have unified the marketplace and made the marketplace bigger, because now it’s a platform against which others can plug and play.

There is a similar effort with MARSSA, which is an attempt to develop a common architectural platform for electronics for boats and yachts.  Again, it eliminates the competition of the marketplace by having a set of standards against which people can play well together.  In the end, you’ve made the marketplace bigger because it’s now more stable.

I agree. Also, an architectural approach separates the data from an application specific way of looking at things.

It used to be the case that we’d have fierce discussions about operating systems.  Operating systems are part of the plumbing; I don’t care about them that much anymore.  But, what I do care about is the level of plumbing above that.

My observation of what’s happening is that you see domain-specific architectures popping up that provide islands against which people can build things.  Amazon is a good example of such a platform.  Facebook could become that, if they figure out how to do it right – but they haven’t gotten there yet.  I think that’s one of the weaknesses and blind spots Facebook has.

I also think that they are, to a certain extent, a first generation.  I think the web, in terms of connectivity, is not being utilised to its fullest potential.  I don’t see any reason why, for example, any form of smart device shouldn’t be viewed as being a data source that should be able to plug in to these architectures.

Exactly!

Would that be an example of a collaborative development environment?

Well, that’s a different beast altogether.

With regards to collaborative development environments, what led me to an interest in that space was emphasising the social side of architecture.  Alan Brown [IBM engineer and Rational expert] and I wrote a paper on collaborative environments almost ten years ago, so it was kind of ahead of its time.

Alan Brown

The reason my thinking was in that space was extrapolating the problem of large-scale software development, as we become more and more distributed, to just how one attends to the social problems therein.  If I can’t fit everybody in the same room, which is ideal, then what are the kinds of things I can do to help me build systems?

I’ve observed two things that are fundamental to crack to make this successful.  The first is the notion of trust: in so far as I can work well with someone, it’s because I trust them.  You, Kim, trust your husband Michael, and therefore there is this unspoken language between the two of you that allows you to do things that no other two people can do together.

Now, move that up to a development team, where you work and labour together in a room, where you understand one another well.  The problem comes – like with Facebook, and what we’ve done in outsourcing – when you break apart your teams (for financial reasons) across the country or across the world.  Then, all of a sudden, you’ve changed the normal mechanisms we have for building trust.   Then the question on the table is: what can one do to provide mechanisms for building trust?  That’s what drives a lot of ideas in collaborative development environments.

The other thing is the importance of serendipity – the opportunity to connect with people in ways that are unanticipated, this option of ‘just trying things out’.  You need to have that ability too.  The way we split teams across the world doesn’t encourage either trust or serendipity.  So, a lot of ideas regarding collaborative environments were simply: “What can we do to inject those two very human elements into this scheme?”

As we have been talking about trust, I’m curious as to your opinion on the SOPA, PIPA and ACTA bills.

I’ve Tweeted about it, and I’m pretty clear that I think those bills are so ill-structured as to be dangerous.

I get the concept, I understand the issues of piracy and the like, and I think something needs to be done here.  But I’m disturbed by both the process that got us there and the results.  Disturbed by the process in the sense that the people who created the bills seemed to actively ignore advice from the technical community, and were more interested in hearing the voices of those whose financial interest would be protected by such a bill.

The analogy I make would be as if, all of a sudden, you made roads illegal because people do illegal things in their cars.  The way the process that led up to this bill was set up was stupid, I think, because it was very, very political.  From a technical perspective, while I respect what needs to be done here, the actual details of it are so wrong that they lead you to do things to the web that are very, very destructive indeed.  That’s why I’m strongly, strongly opposed to it. And I have to say that this is my personal opinion, not that of IBM, etc.

This is the final segment of our multi-part interview with Grady Booch. Part One can be read here, and Part Two can be read here

You can learn more about Grady via the COMPUTING: The Human Experience website, Grady’s blog and his Twitter feed. Be sure to keep your eye on the COMPUTING: The Human Experience YouTube channel where Grady’s lecture series will be posted.

[Kim, Michael and Grady Skyped from their homes in Sydney and Hawaii.]

Zen and the Art of Software: The Innovation Interview with Grady Booch (Part 2)

In the pantheon of world-famous computer scientists, Grady Booch is the star who co-authored the Unified Modeling Language (UML) and co-developed object-oriented design (OOD). He is a Fellow of IBM, the ACM and the IEEE, the author of six books, hundreds of articles and papers, and the Booch Method of software engineering. Grady serves on several boards, including that of the Computer History Museum, and is the author, narrator and co-creator of what could be seen as a historical magnum opus of the technological world, COMPUTING: The Human Experience.

To view the full introduction to this multi-part interview with Grady, and Part 1 of the series: Click here

Grady Booch: Capital I Interview Series – Number 14

[This was a joint conversation between Grady, Michael and myself. I’ve italicised Michael’s questions to Grady so you are able to differentiate between us – though, I think it will become obvious as you read – lol!]

Grady, let’s begin with the very basics. As this is the Innovation Interview Series, let’s start with: how do you define innovation?

Ecclesiastes 1:9 has this great phrase:

“What has been will be again.  What has been done before will be done again.  There is nothing new under the Sun.”

The way I take it is that innovation – really deep innovation – is about warming the Earth with the Sun of your own making. And to that end, that’s how I distinguish the ‘small i’ from the ‘Big I’.

The ‘small i’ therefore means: I may have a brilliant idea and it warms me, but the ‘Big I’ Innovation is where I can start warming others.  There are new suns possible; there are new ways of warming the Earth… And I think innovation is about doing so.

One of my heroes is the physicist Richard Feynman. If you read any of his stuff or watch his physics lectures – which are just absolutely incredible [Ed. Note: As is his series: The Pleasure of Finding Things Out] – there are some conclusions you can draw (and there is a delightful article someone wrote about the nine things they learned from Feynman).  The way I frame it is to say that I admire him and his innovation because he was intensely curious, but at the same time he was bold; he was not fearful of going down a path that interested him. He was also very child-like and very, very playful.  In the end, what really inspires me about Feynman’s work is that he was never afraid to fail; much like Joseph Campbell observes, he followed his bliss.

Richard Feynman

I think that many innovators are often isolated because we’re the ones who are following our bliss; we really don’t care if others have that same bliss.  We are so consumed by that, that we follow it where it leads us, and we do so in a very innocent, playful way… We are not afraid to fail.

I’ve noticed that there is often a level of audacity and a lack of fear within innovators, but sometimes I wonder if that audacity and lack of fear could frighten general society.

Well, I think there’s a fine line between audacity and madness.

And that depends on what side of the fence you’re on.

Exactly. It also depends upon the cultural times. Because, what Galileo said in his time [that the earth and planets revolve around the sun] was not just audacious, it was threatening.

To the church, absolutely.

In a different time and place [the response to] Galileo would have been: “Well, yeah, that’s right. Let’s move on now”.   [Instead of being tried by the Inquisition, found suspect of heresy, forced to recant and spend the rest of his life under house arrest.]  The sad thing is you may have the most brilliant idea in the world, but you will never go anywhere.

Take a look historically at Charles Babbage.  I think he was a brilliant man who had some wonderful ideas; he was very audacious, and yet he’s a tragic figure because he never really understood how to turn his ideas into reality.  [A mathematician, philosopher, inventor and engineer, Babbage originated the idea of a programmable computer.]  That’s what ‘Capital I’ means to me.  I think that’s why Steve Jobs was so brilliant; it’s not just that he had cool ideas, but he knew how to turn them into an industry.

We have a golden rule that it really doesn’t matter how cool your tech is if nobody’s using it. And it’s a shame, because there are some incredible innovations out there, but so many innovators haven’t learned the Jobs magic of marketing.

KimmiC rule: It doesn’t matter how ‘bright the light’ if no one is using it to read.  

I think that’s especially true of our domain of computing systems, because we are ones who are most comfortable – as a gross generalisation – with controlling our machines.  Being able to connect with humans is a very different skill set. To find people who have the ability to do both is very, very challenging indeed.

Zuckerberg is a brilliant programmer, and he had the sense to surround himself with the right people so that he could make those things [Facebook] manifest.  There are probably dozens upon dozens of Zuckerbergs out there, who had similar ideas at the same time, but they didn’t know how to turn them into reality.

The same thing could be said of Tim Berners-Lee: a brilliant man, a nice man…  He was in the right time at the right place and he knew how to push the technology that he was doing.  He was developing things that were in a vast primordial soup of ideas.

Tim Berners-Lee

HyperCard was out; and why didn’t HyperCard succeed while Tim’s work did?  Part of it is technical, part of it was just the will of Apple, and part was his [Tim’s] being in the right place at the right time.

And HyperCard influenced Tim.  Even Bill Atkinson, creator of HyperCard, said: if only he had come up with the notion of being able to link across [Hyper]card decks, then he would have invented the prototypical web.  But, he didn’t do it, he didn’t think about it.

Do you feel that you are ‘in the right time,  at the right place’?

There are times that I think I was born in the wrong century, but I know that if I had been born in the Middle Ages, at my age, I would be long dead.

So, yes, I can say from a very philosophical basis: I am quite content with the time in which I am now living, because I cannot conceive of any other time in which I could have been successful.

I read a quote on Wikipedia… a story you apparently told:

“… I pounded the doors at the local IBM sales office until a salesman took pity on me. After we chatted for a while, he handed me a Fortran [manual]. I’m sure he gave it to me thinking, “I’ll never hear from this kid again.” I returned the following week saying, “This is really cool. I’ve read the whole thing and have written a small program. Where can I find a computer?” The fellow, to my delight, found me programming time on an IBM 1130 on weekends and late-evening hours. That was my first programming experience, and I must thank that anonymous IBM salesman for launching my career.”

It sounds like you were quite fortunate to have bumped into someone who was willing to take a chance with you very early on.

I think that’s fair to say.  Though, if it hadn’t been that person, I imagine the universe would have conspired to find me another person, because I was so driven.   Looking backward upon fifty-some years past, that was the right time and place.  It may just have happened that he was the right guy at the right time. But there would have been others.

Grady Presenting

[But] I haven’t told you about the missteps I had and the people who rejected me; we just talk about the successes.  Historians are the ones who write history, and because it’s the history of the winners, we don’t tend to write about the failures.  But even Edison pointed out – I forget the exact quote – that the reason he succeeded so much is that he did so much and he failed; he failed more than others on an absolute basis, but he tried more.

“I have not failed. I’ve just found 10,000 ways that won’t work.” ― Thomas A. Edison

What, in your view, gets in the way of the success of innovation?

I think the main thing is the fear of failure. I run across people like Babbage, for example… or this gentleman I was mentoring earlier today, who are so fearful of not doing something absolutely perfect that they are afraid to turn it into a reality. I think some innovators are so enamoured with perfection they are afraid to fail, and therefore never do anything.

Within this milieu you seem to have had your fingers in many interesting pies.  One that I think must be especially fascinating is your work with the Computer History Museum.  How did you get involved in that?

In a way they came to me.  My interest has been in software, it always has been.  I forget the circumstances but, some years ago, I connected with John Toole, who was the original CEO of the Computer History Museum when it was in the Bay Area. He showed me around the warehouse that they had set aside at Moffett Airfield.

Not long before that they had shipped a lot of the materials from the old computer museum in Boston out to the Bay Area.  Gordon Moore [co-founder and Chairman Emeritus of Intel] and others had said they wanted to make a museum, and they funded that effort.  So, I was around the edges of it in the early days. I thought it was fascinating.

I think the reason it attracted me in the first place, in general, is that I have an interest in the appreciation of history – not just the history of technology, but the history of humanity.

As I went to the exhibits I remember making the observation to John that I thought their plans were great, but, projecting out to one or two generations, there wasn’t going to be too much that was interesting to display in the museum because, by then, all of the hardware would have shrunk to a tiny size and we’d need microscopes in the exhibits.

“And so, therefore, John”, I said, “what are you doing about preserving the history of software,” which is a very ephemeral thing.

Think about getting the original source code to the [IBM Operating System] 360, or the original source code to Facebook.  Because these are such ephemeral things, people throw them away.  In fact we, IBM, no longer have the original source code to the first version of OS/360; it’s gone.  There are later versions but not the original one.

Facebook Source Code

When Microsoft decided to stop production on the Microsoft Flight Simulator, I mean, this was a ground-breaking program, I wrote off to Ray Ozzie [Microsoft CTO and CTA from 2005 – 2010] and said: “What are you guys going to do with the software? Can we have it?”   He munched around for a while, but I think it’s lost for all time.

We’re in an interesting period of time and my passion, which has led me to the museum, is to say: Now is the time to preserve software!  We don’t know how to present it, we don’t know what to do with it once we have it, but let’s worry about that in future generations and just capture it now.

It’s very similar to what Hollywood has found with a lot of their film stock. A lot of it was just being lost or destroyed, but there is so much cultural history in those records.

Yes, exactly.  So, prior to being on the board, I set up a workshop at the museum looking at the preservation of classic software.  I wrote to 500 of my ‘closest friends’… people ranging from Marvin Minsky [cognitive scientist in the field of AI] to some other developers I knew, and everybody in between, and asked: “What software would you preserve for future generations?”

We came up with a long list.  I think that very idea inspired Len Shustek, who’s the president of the museum, to invite me on to be on the board of trustees.

What is your favourite exhibit in the museum?

I like the [IBM] 1401 reproduction.  They have a couple of 1401 machines and they’ve gotten them running again.  It’s fun to be in a place where there is something dynamic and alive that runs, and you can be in the midst of it.  Just walking into the room, you smell old computers; and that’s a pretty cool kind of smell.  So is the fact that it’s running and clacking away.

The 1401

Fred Brooks [IBM software engineer] and I had an interesting discussion once, in which I lamented the fact that our computers make no noise, because – and I know I sound like an old guy, but – I remember you could hear some of the earlier computers I worked on. They were clattering in one way or another, be it their hard drives or their tapes, and you could get a feel for where the program was just by listening.

You can’t do that now with our machines; they are all very, very quiet. So, the 1401 exhibit has this wonderful visceral immersive display, in which you hear it and smell it as it processes.

I’ve actually seen people get a little misty-eyed just thinking about a dial-up tone, and you certainly seem to have some ‘misty memories’ too.  But, let’s look forward now.  What new things do you think may be exhibited in ten years’ time?

I think that’s the next interesting challenge.  We know how to display physical things, but there aren’t that many more things, like old machines, to collect, because they are disappearing.

If you go to the exhibits, you’ll see things get smaller and smaller and there is more of an interest in software.  I think the interesting problem for the museum to attempt is: how do we present software to the general public so that we open the curtain on it and show some of the magic and the mystery therein?  I think software can be very beautiful, but how do I explain that to someone who can’t see software?  That’s an interesting challenge.

You’ve got to look at it like an art form.  Source code, especially some of the well-written stuff, looks physically beautiful; forget about what it actually does.  There are many different dimensions you can look at to try to get people’s interest.

[Editor’s Challenge to artists: here is a piece of code I’ve ‘mucked about with’ – why not see what code inspires you to create and send us a picture, which we’ll share with our readers, Grady Booch and the Computer History Museum!]
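In that playful spirit, here is a short, self-contained Python sketch – a hypothetical stand-in (not the snippet from the original post) – that treats source code itself as raw material for art, shading each character of a snippet by a rough visual weight to produce a simple text "portrait":

```python
# A playful code-as-art sketch: render a snippet of source code as
# ASCII "art" by mapping each character to a shade block.

SHADES = " .:-=+*#"  # characters ordered roughly light to dark


def shade(ch: str) -> str:
    """Map one character to a shade block; whitespace stays blank."""
    if ch.isspace():
        return " "
    # Pick a shade deterministically from the character's code point.
    return SHADES[ord(ch) % len(SHADES)]


def code_portrait(source: str) -> str:
    """Turn each line of source into a line of shade blocks."""
    return "\n".join(
        "".join(shade(c) for c in line) for line in source.splitlines()
    )


if __name__ == "__main__":
    sample = "def greet(name):\n    return f'Hello, {name}!'"
    print(code_portrait(sample))
```

Running it on any file of your own code gives a different texture for every program – dense loops and sparse comments each leave their own visible grain.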

I think it’s very much like modern art because you can look at a bit of an impressionistic painting and you may not get it. Often the reactions are: “My kid could do that kind of thing.”

Well, not exactly; because the more you learn about it, the more you learn how much that painting – or whatever the art form is –  speaks to you and tells you stories.  It requires a little bit of education.

There is a visceral reaction at first to some art, but the more you know about it, the more you can appreciate its subtlety.  I think the same is true of software.  We (the museum) have collected the original source code to MacPaint, which turns out to be a really beautiful piece of software.

I’m using a phrase here that has meaning to me – beautiful – but requires explanation to the general public to say: why is this a beautiful piece of code, why does it look so well-formed?  I think that’s a responsibility we have as insiders to explain and teach that kind of beauty.

What are your thoughts about the emerging trends in Innovation and technology?

Well, the web has been an amazing multiplier, and yet at the same time it’s also increased the noise.  Therefore, the ability to find the real gems in the midst of all this madness is increasingly challenging.  For example, with the computing project  [COMPUTING: The Human Experience] we’ve done, we crowdsourced some initial seed funding for our work.

We could not have done this in the past without something like the web.  We put this appeal out to the world and it gave us access to people, otherwise we could not have done it.  I think the web has produced an amazing primordial soup of ideas into which we can tap; and that is so game-changing in so many ways.  That’s probably the biggest thing. [You can contribute to and volunteer for the project here.]

The web has changed everything; and those who don’t keep up are doomed to be buggy-whip producers.

Yes, exactly.  Or companies like Kodak.

I had the opportunity to speak to Kodak’s developers about 15 years ago.  It was a small group of people who were in the computer side of Kodak, and I remember saying to them: “Look guys, the future of Kodak is in your hands… so, what are you going to do about it?”

I Tweeted about it not too long ago with a sort of “I told you so.”  And yet, I don’t know whether or not it was inevitable.  It could be the case that some businesses simply die because they just don’t make sense any more.

And they should die sometimes.  But I think early IBM was a good example of a company that understood what business it was in.  I don’t think Kodak really understood what business it was in, towards the end, and that’s what killed it.

I agree, very much so.

Some web business models are founded on the idea that a company has a right to use and profit from an individual’s data and personal information… What are your thoughts on that? Do you think that’s a business model that’s sustainable? I believe the general public is wising up to this very quickly and will soon expect some recompense for the use of their data.

I think there is a local issue and there is a global issue that is even harder to tackle.  In the case of the Facebooks and the Twitters of the world, the reality is that when I subscribe to those services, I do have a choice – I can choose whether or not to use them.  And the very fact that I’m using those services means I am giving up something in the process.

So, why should I be outraged if those companies are using my data, because I’m getting those services for free.  It seems like a reasonable exchange here, and I, as an adult, have the responsibility of choice.  Where it becomes nasty is when I no longer have choice; when that choice is taken away from me.  That’s when it becomes outrageous: when my data is being used beyond my control [in a way] that I did not expect.

I think that will sort itself over time; capitalism has a wonderful way of sorting things.  It’s also the case that we have a generation behind the three of us who are growing up, if not born, digital.  They have a very different sense of privacy, so, I’m not so concerned about it. We have lots of ‘heat and smoke’ but it will resolve itself.

What I find curious is that the ‘heat and smoke’ and discussions are hardly any different from what was initially said about telephones or, for that matter, the printing of the book.  Look at some histories of how phones were brought into the marketplace and you’ll find almost identical arguments to those that are going on today.

I trust the human spirit and the way capitalism works to find a way.  What’s more challenging is the larger issue, and that is the reality that there are connections that can be made in the presence of this data that are simply beyond anybody’s control.

I may choose to share some information on a social media source, or I may use a credit card or whatever, but the very act of participating in the modern society leaves behind a trail of digital detritus.  And I can’t stop that unless I choose to stop participating in the modern world.

I think this is a case where we’ll have politicians do some profoundly stupid things, and we’ll see lots of interesting cases around it.  But, we’ll get used to it.  I mean, people didn’t like the idea of putting their money in a bank for God’s sake, and we got used to it; I think the same thing will happen.

You brought up the Millennials – the digitised generation. What insights would you give them in being game-changers?

Does any young adult ever want the advice of their elders?

I didn’t ask if they wanted it… 🙂

You know… I think, we laugh about it, but the reality is – and I think Jobs said it well: “Death is a wonderful invention because it allows us to get out of the way and let the next generation find their own way.”  I’m comforted by that; I find great peace in that notion.  They need to have the opportunity to fail and find their own way.  If I were born a Millennial, I’d be growing up in an environment that’s vastly different than mine.

Though, in the end, we are all born, we all die, and we all live a human experience in various ways, there are common threads there… the stories are the same for all of us.  I think those are the kinds of things that are passed on from generation to generation, but everything else is details.

I would not be surprised if the structuring of their brain is different to ours.  I’ve been talking to guys that are 10 – 15 years younger than me, and the ability to hold their train of thought over weeks or months – when you’re doing some serious development or research – they seem to find that extremely difficult.  So, I wonder if we’ll see any really big innovations coming through from those generations.

You could claim that it’s not just the web that’s done that, but it’s back to Sesame Street and the notion of bright, shiny objects that are in and out of our view in a very short time frame.  Certainly I think a case can be made that our brains are changing; we are co-evolving with computing – we truly are.

But, at the same time, throw me in the woods and I couldn’t find my way out of it easily; I can’t track myself well, I can’t tell you what things are good to eat and what things aren’t.  Those are survival skills that someone would have needed a century or two ago.  So, my brain has changed in that regard, just as the Millennials’ brains are changing. Is it a good thing? Is it a bad thing? I’m not at a point to judge it, but it is a thing.

End of Part Two.  Part Three will be published next week – sign up for the blog and it will be delivered directly to your inbox!

You can learn more about Grady via the COMPUTING: The Human Experience website, Grady’s blog and his Twitter feed. Be sure to keep your eye on the COMPUTING: The Human Experience YouTube channel where Grady’s lecture series will be posted.

[Kim, Michael and Grady Skyped from their homes in Sydney and Hawaii.]

Zen and the Art of Software: The Innovation Interview with Grady Booch (Part 1)

One of the greatest things about ‘Flat World Navigating’ the internet is that it enables connections with fascinating minds, even if from a distance.  If you are able to then reach out to those magnificent minds and invite them to have a chat – the encounter can be transformational.  Such was the case with Grady Booch, who is, I believe, a most genial genius – a man who brings Zen to the Art of Software.

Grady Booch: Capital I Interview Series – Number 14

I first encountered Grady Booch via his project, COMPUTING: The Human Experience, “a transmedia project engaging audiences of all ages in the story of the technology that has changed humanity.” I was immediately hooked on the concept, and wanted to discover the mega-mind who thought to pull this off.

In the pantheon of world-famous computer scientists, Grady Booch is the star who co-authored the Unified Modeling Language (UML) and was one of the original developers of object-oriented design (OOD). That alone would be immensely impressive, but it is far from the end of Grady’s long list of credits, which include being an IBM Fellow (IBM’s highest technical position) and Chief Scientist for Software Engineering at the IBM Thomas J. Watson Research Center.

In fact, he’s quite a fella, being a fellow of the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers (IEEE) and the World Technology Network (WTN), as well as being a Software Development Forum Visionary and recipient of Dr. Dobb’s Excellence in Programming Award and three – yes, three! – Jolt Awards.

There is a rumour (one which he doesn’t discuss) that Grady was approached to take over from Bill Gates as Microsoft’s chief software architect.  What is not a rumour, and what Grady does admit to, is that he taught himself to program in 1968, having built his first computer a year earlier – at the age of 12.

He is the author of six books, hundreds of articles, and papers that originated the term and practice of object-oriented design (OOD) and collaborative development environments (CDE), as well as the Booch Method of software engineering. Grady serves on the advisory board of the International Association of Software Architects (IASA), the IEEE Software editorial board and the board of the Computer History Museum.

Yes, with all that (and more) to his credit, Grady could quite comfortably rest on his laurels, and yet instead he is the author, narrator and co-creator of what could be seen as a historical magnum opus of the technological world, COMPUTING: The Human Experience.

“At the intersection of humanity and technology is COMPUTING. From the abacus to the iPad, from Gutenberg to Google, from the Enigma machine designed to crack the codes of the Nazi SS to the Large Hadron Collider designed to crack the code of the universe, from Pong to Halo, we have created computing to count the uncountable, remember beyond our own experience, touch the invisible and see the unforeseeable. COMPUTING: The Human Experience is a brilliant and surprising insider view of the hidden stories of passion, greed, rebellion, rage and creation that created the technologies that are everywhere, transforming our world, our lives, and who we are as a species.”

Grady is not alone in this endeavour, working as he does with a tremendous creative team which includes, among others: Grammy Award winner, Seth Friedman; President of the Computer History Museum, John Hollar; and psychotherapist/theologian/social worker Jan Booch, Grady’s wife, co-writer and co-creator of this obvious labour of love. The series will include lectures, books, videos, an interactive website, and much more.

February 24, 2012 sees Grady launch the first in a series of lectures at the Computer History Museum in Mountain View, California.  For those readers who are not lucky enough to be in the vicinity to attend ‘Woven on the Loom of Sorrow: The Co-Evolution of Computing and Conflict’, I hope you will enjoy reading this multi-part Innovation Interview with Grady as much as Michael and I enjoyed talking to him!

Grady Booch: Capital I Interview Series – Number 14 

Grady, when I clicked on the link from your LinkedIn profile, I was extremely excited by the idea of COMPUTING: The Human Experience and found it to be immensely interesting!  What made you feel that it was important to compute the human experience?

I think it has a lot to do with where I am in my life, in the sense that I have nothing left to prove, if you will, and I can do what I want to do.  I could just happily fade away into an existence here.  But I think part of it is wanting to give back to the community that has given so much to me, and being able to express to the general public my child-like joy and delight at what I do.  That’s why I think I chose to go down this path of telling the story.

In the end, I’m a storyteller, and I think there is a story to be told here. There are probably some other factors that led me in this direction. Just random stories… A side conversation with one of our goddaughters…

We were talking to her about computing stuff, and she said:

“Oh, I know everything there is to know about computing. Because I’ve taken a class.”
“Oh, what did you learn?”
“Well, in my class we learned how to write a Word document and how to surf the web.”
I was like: “Oh, my gosh; there is so much more!”

It’s things like that that have led me to say… We’ve created this technology – and I’m responsible for helping create that technology – and we as a civilisation have chosen to step inside and live inside it. We’ve created a world, and yet most people in the world don’t understand it and can’t understand how to use it to their advantage.

I think my goal is: let’s open the curtain and explain some of that – the mystery, beauty, excitement and human stories that lead to it.

I think there is a lot of latent interest there, that is untapped at the moment.

I think so; I hope so.  Well, there is a lot of interest in anything.  Why do you think we still watch celebrities like Paris Hilton? It’s amazing what people get interested in.

But I think here is a topic that has profoundly changed humanity, and we are at the time and place where we can talk about it.  And the people who made these changes… many of them are still alive, so let’s get their stories and tell that to the world!

The phrase I often use is: “An educated populace is far better able to reconcile its past, reason about its present and shape its future.”  And I want to help contribute to educating that populace.

You don’t shy away from controversial topics, either. Such as: computing and war, computing and faith, and computing and politics. What are your thoughts on these subjects?

It’s interesting you called them controversial, because I see them as simply part of human experience.  The reality is that there are billions of people, a billion Muslims, a billion Christians, and lots of others who profess a faith of some sort.  So, to not talk about faith denies an element of the human experience; to not talk about war denies the existence of warfare.  It’s not intentionally controversial, it’s a recognition that this is part of the human experience, and that it’s reasonable for us to consider what role computing has played in it.

So, let’s take computing and war for example. This is the one that I’ll be giving my first lecture [Woven on the Loom of Sorrow: The Co-Evolution of Computing and Conflict] on at the Computer History Museum on February 24.  My premise is that war is part of the human experience, for better or worse.

By the way, a background you must recognise was that I trained to be a warrior.  I went to the Air Force Academy and I learned about war, and many of my classmates have killed people in anger in warfare.  It’s part of the life in which I have lived.

And yet, if you look at the parallel story of computing and warfare, the conclusion I draw is that computing was, at one time, a companion to warfare; it now is a means of warfare, and it’s quickly becoming a place of warfare.  I’d like to tell that story: an observation, from an insider, of how computing has both enabled and been shaped by warfare.

I think the average person would be surprised to know that your average smart phone, and a considerable amount of technology, exists simply because of what happened during the Cold War and World War II.

2012 is the centenary of Alan Turing’s birth

There are surprises in those regards.  There are also some incredible personal stories. The tragic story of Alan Turing... [considered to be the father of computer science and AI]

Absolutely!

Who changed the course of World War II.  He saved a nation, and yet that very nation eventually condemned him because he was homosexual. Go figure!

Will the lecture be something that people around the world will be able to access?

Our intent is to make it available on our YouTube channel and the museum’s channel. And I believe the local PBS station, KQED, has an interest in making it available on their channels as well.

Wonderful!

So, yeah, we’re going to see a wide distribution of this.  Ultimately, you can view this as the alpha (or beta) of what we’re trying to do with the series.  One of the main things we’d like to get out to the world is an eleven-part series for broadcast. This [lecture] is not the broadcast, but we’re talking about it and this is one of the lectures about it.

What is the end product, or goal, of the COMPUTING: The Human Experience project? Would you say that the series is the end product, or is it something that doesn’t necessarily have to have an end?

It won’t ever have an end because I hope we will develop a dialogue with the public that goes on far beyond this.

Look at Sagan’s Cosmos; it’s still being seen to this day.  I hope, and I certainly strive, to produce something as interesting and as timeless as that.  So, I’ll put it in the terms of [political scientist] Herbert Simon and ask what our ‘intermediate stable forms’ are…  We want to produce eleven one-hour episodes (that’s a big thing), have a book, an e-book, curriculum materials, some apps.  Those are the physical things we’ll actually be delivering.

To that end, you’ve already gone through one very successful Kickstarter funding round.  I’m sure there will be others, but, other than helping to fund the project, what can readers of the Innovation interviews do to help you, and the project, reach some of those goals?

I think there are two things. My wife Jan and I have self-funded this for the last four years, but we’ve now gone to outside funding, like with Kickstarter – the very process of doing a Kickstarter has brought a number of volunteers to us.  In the next few years, we need to raise about eleven million dollars to pull this off.  We’re going to foundations, we’re talking to individuals, and we’ll continue on that path.

Grady and Jan Booch

In a recent interview with Grady, Darryl K. Taft noted, “Meanwhile, Jan’s role on the project is multi-faceted.  As a social worker, she attends to issues of multiculturalism, inclusivism and the impact computing has had on society.  As a psychotherapist, her focus is on how human desires and needs have shaped and continue to shape the development of computing technology.  As a theologian, her focus is on the moral and ethical issues found in the story of computing.  Finally, as a non-technical person, she assures that the stories will be approachable, understandable and interesting to the general public.”

Working on the book and lecture series allows us to continue story development in a very, very low-cost kind of way. So, one of the things that I hope people can do is to say: “Hey! I know a guy who knows a guy, who works for this person, and they may be interested.” I hope we can find some serendipitous connections to people with whom we can find some funding.

I know foundations within the US, but I don’t know what opportunities there are in other parts of the world; we’re telling a global story so I hope we can get some connections that way.

The second is: I hope that people will look at this and say: “This is interesting. I think you should tell this story or that story.”  And so I hope people will come to us and help inform us as to what they think the world should know about.

[They hope to collect more than 2,000 human experience videos for their YouTube channel, so don’t be shy, make a video!]

Along with a magnificent creative team, you have an extremely eminent board for the COMPUTING: The Human Experience project. In particular, I must note Vint Cerf, who helped me kick off the Innovation interview series and really was integral in its initial success. How did you gather those people around you?

My philosophy is to surround myself with people far smarter than I am, because they know things that I will never know.  I want to be able to go to them for two reasons: one is as a source of information, and the second is as a source of contacts.

Tim O'Reilly

I reached out to this set of people, and I’m going to be growing the board to around 20 or 30 in total, with people who have specific expertise and who have been game changers in certain domains.

I’ll give you a great example of how this has worked well: Vint, Tim O’Reilly and Mary Shaw have been particularly useful for me thus far, but for developing the lecture on computing and warfare, one of the people on my board is Lt. Gen. William Lord, who happens to be the Chief Information Officer and Chief War Fighting Officer of the Air Force.

Mary Shaw

He has helped me out because I wanted to get some information that simply doesn’t exist in ‘the literature’: what’s the current doctrine at the war colleges about the use of Predators… what are people thinking?  He put me in touch with people who have that source of information.

Lt. Gen. William Lord

Tim has been able to do similar kinds of things.  The computing community, at one level, is a relatively small community; we all kind of know all the movers and shakers.  Well, let’s get them to be a part of this, because I’m also celebrating their story!

You can learn more about Grady via the COMPUTING: The Human Experience website, Grady’s blog and his Twitter feed.

This is part one of a multi-part interview with Grady; be sure to look out for the next instalment – Part Two can be viewed here and Part Three here.

If you’re in the San Francisco area on the 24th of February, I heartily suggest you try and attend Grady’s lecture. If you, like me, are unable to attend, be sure to keep your eye on the COMPUTING: The Human Experience YouTube channel where the lectures will be posted.

[Note: the lecture has now been posted on the Computer History Museum YouTube channel.  Thanks to John Hollar for letting us know!]

[Kim, Michael and Grady Skyped from their homes in Sydney and Hawaii.]


Antics with Semantics: The Innovation Interview with Semantics Pioneer, Ora Lassila

Wanting to speak to someone both interesting and inspiring about the Semantic Web and Innovation, Ora Lassila – an Advisory Board Member of the World Wide Web Consortium (W3C) as well as Senior Architect and Technology Strategist for Nokia’s Location and Commerce Unit – was the obvious ‘go to guy’.

A large part of Ora’s career has been focussed on the Semantic Web as it applies to mobile and ubiquitous computing at the Nokia Research Center (NRC), where he, among many things, authored ‘Wilbur’, the NRC’s Semantic Web toolkit.  As impressive as that is, the more research I did and the more I found out about Ora, the more fascinating he, and his career, became to me.

Ora is one of the originators of the Semantic Web, having worked within the domain since 1996.  He is the co-author (with Tim Berners-Lee and James Hendler) of the most cited paper in the field to date, ‘The Semantic Web’.  Ora even worked on the knowledge representation system ‘SCAM’, which, in 1999, flew on NASA’s Deep Space 1 probe.

Leading up to our attendance and presentation at the Berlin Semantic Tech and Business Conference, Michael – the true ‘tech head’ of KimmiC – and I were extremely pleased that Ora, ‘the Mac Daddy’ of the Semantic Web, gave us so much of his time.  I hope you find our conversation with him as interesting as we did!

[I’ve italicised Michael’s questions to Ora so you are able to differentiate between us – though, I think it will become obvious as you read – lol!]

Ora Lassila (photo credit: Grace Lassila)

Ora Lassila: Capital I Interview Series – Number 13

Let’s start out by talking about Innovation in general, and we’ll move on to the Semantic Web as we go along.  As this is the Innovation Interview Series, the ‘baseline’ question is always: how do you define Innovation?

Good question.  I think many people do not make a clear distinction between ‘innovation’ and ‘invention’.

To me, ‘innovation’ is something that not only includes some new idea or ideas, but also encompasses the deployment and adoption of such.  You can invent clever new things, but if you don’t figure out how to get people to use those new things, you have fallen short of the mark.

How essential has innovation been in your career to date; and how important do you envisage it being, going forward?

It has been important.  A big part of my professional career was spent in a corporate research lab, where inventing new things was less of a challenge than getting these inventions ‘transferred’ to those parts of the corporation that had more capability in promoting their adoption and deployment.

That said, I have learned that ‘technology transfer’ is not always about taking concrete pieces of technology, software for example, and handing them over to someone else for productization.  Sometimes the transfer is more ‘insidious’ and involves influencing how people in your organisation – or outside your organisation – think and see the world.

I would claim that some of my early work on the Semantic Web absolutely fits this definition.  So writing, publishing and talking all constitute viable means.  Also, we should not forget that people need to be inspired.  You cannot just tell them what to do, instead, they have to want to do it.

What do you think are the main barriers to the success of innovation?

I am not kidding when I say that the absolute biggest obstacle is communication.  That is, we should learn to communicate our ideas better to be able to convince people and to inspire them.  I have much to learn in this area.

Who and what inspires you? Where do you look for inspiration?

I have no good or definite answer for that.  When I was younger I was really inspired by the Spanish aviation pioneer Juan de la Cierva whose simple yet radical idea about aircraft – the ‘autogiro’ – paved the way for the adoption of helicopters.  And yet, one might argue that, in many ways helicopters are a far more complicated and complex technology than de la Cierva’s original invention.

Juan de la Cierva y Codorníu, 1st Count of De La Cierva

I am inspired by simplicity… I strive to create and design things that are simple, or at least not any more complicated than necessary.

What are, in your view, the current emerging critical trends in Innovation and technology?

I like openness, things like open-source software as well as Open Access and sharing of data as part of the scientific process.  I am hoping we see a fundamental change in how research is done.  In many ways we have progressed to a point where many problems are so complex that they are beyond a single researcher’s or research group’s capacity and capability to tackle.

Also, on the topic of openness, I like some of the recent developments in open government, e-Government, and such.

And what are some of the coolest mobile technologies you’re seeing launched? 

I am much enamoured with the idea that mobile technologies – particularly via the use of GPS, etc. – ‘ground’ many services to the physical world.  There are many uses for location information, uses that help me in my everyday life.

Furthermore, by making the mobile device better understand the current ‘context’, not only geographically but also by making use of other observations about the physical world (movement, sound, etc.), we can make applications and services better for users.

Do you think we will have a ‘meshed up’ world that effectively bypasses the stranglehold telcos have on infrastructure?

I don’t necessarily agree that the telcos have a ‘stranglehold’.   They provide an important service and a critical investment in an infrastructure I don’t really see us living without.

But we need things like ‘net neutrality’ to make sure that this infrastructure really serves people in an open and non-discriminatory way.  In this regard I am also concerned about more recent legislative attempts [SOPA, PIPA, ACTA] that (perhaps unintentionally) will hurt the overall technical function of the Internet.

It seems that current Web based business models are founded on the idea that businesses have the right to record everything about users/consumers and profit from this information.  Do you think this is a sustainable business model, or do you think the user/consumer will start to think that they, and their data, is worth something and begin to demand recompense of some sort?

There are very few fundamentally different, viable, business models on the Web, so I can see that businesses would want to cash in on user data.  It is only a matter of time before the consumers ‘wise up’ and understand the value of their own data.  Personally I think we should aim at ‘business arrangements’ where all parties benefit.  This includes concrete benefits to the user, perhaps in a way where the user is a bona fide business partner rather than just someone we collect data about.

It is important to understand that what’s at stake here is not only how some user data could be monetized, it is also about users’ privacy.  Luckily I work for an organisation [Nokia] that takes consumer privacy very seriously.

You’ve got a fascinating history, and seem to have gotten into the Semantic Web at the very beginning.

The very, very beginning, yes.  I think I can argue that I’ve been doing this longer than the term has actually existed.

In ’96 I went to work at MIT…  I’d just been hired by Nokia, and they wanted to send somebody to MIT as a kind of visiting faculty member.   So, I worked in Tim Berners-Lee’s team, and one day he asked me what I thought was wrong with the web.

Tim Berners-Lee

Just a small question.

Yeah, not intimidating at all.

I said: “My hope has been to be able to build,” – what then would have been called agents, autonomous agents – and I said: “I can’t really do that because the web was built for humans and human consumption.  I would really, really like to see a web that was more amenable for consumption by automated systems.”

And he [Berners-Lee] said: “Yeah, that’s it! Now, how do we fix that?”

And I went: “Well, how about we try knowledge representation and apply that to web technologies.”  Because knowledge representation is a branch of artificial intelligence that has a long history of taking information and representing it in such a way that you can reason about it and then draw conclusions from it… things like that.  We agreed that I would look into that, and that’s really how I got into all this.

Of course I had worked on various projects before that which involved ontologies and knowledge representation; it just wasn’t done on the web.  The big reason being that the web had not really been invented yet.

There was Cyc and some other AI [Artificial Intelligence] things before that… 

Cyc is a very good example of an attempt to build a very large ontology that would encompass common sense knowledge.  But there are many examples of systems that used ontologies in one way or another for narrower domains.  Cyc was an overly ambitious project, in the sense that they really wanted to cover a lot of ground in terms of human knowledge.

I had worked on several projects in the past that applied ontologies to things like planning industrial production, or planning logistics.  So, the question really was: could you build a model of the world that was rich enough and precise enough that a system could use that knowledge to create plans for various things?  In my case those were plans for either how to run industrial production or how to manage large fleets of logistics resources.

You were a long, long way in front of everybody else… at least ten years.  It’s incredible!

One might argue too far ahead.

I think at that time most people were just trying to come to grips with basic HTTP and web servers.  If you look at the vested interests, especially of software providers at that time… I guess it wasn’t really the right timing. But I think that time is coming now.

Yeah, I think we’re in a better position now and we’ve certainly seen a lot of adoption of Semantic Web technologies in the last few years.

I think elements of the Semantic Web are brilliant.   RDF, for example, is one of the smartest ways I’ve ever seen of describing something.  You can’t break the way semantics talks about something, whereas you can easily break the interpretation in XML.

I start to lose traction with it when it gets towards ontologies.  Do you think that ‘splitting the message’ would help with adoption?  For instance, you can use ontologies, but there is also a part of semantics which is brilliant for just doing ‘business as usual’?

I think there is a fairly broad spectrum of possible ways of making use of this technology.  I’m sure you’ve seen diagrams of the so called layer cake, with the different technologies layered on top of one another.

A Semantic Web Stack (layer cake) [image created by Tim Berners-Lee]

I think that it’s up to you to decide how far up that layered structure you want to go.  There are a lot of applications where very simple use of just some of the most basic technologies will give you a lot of benefit.  And then there are other problems where you may actually want to separate a lot of the understanding of your domain from your actual executing code…  for those kinds of things, encapsulating that knowledge in the form of a potentially very complex ontology may be a good way to go.
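[For readers who would like to see the triple idea Michael and Ora are discussing in concrete terms, here is a minimal editorial sketch in Python. It is not code from either interviewee; the resource names are invented, and real RDF work would use full URIs and a library such as rdflib.]

```python
# Editorial sketch: RDF models knowledge as (subject, predicate, object)
# triples. Unlike an XML document, there is no single "correct" nesting
# to break -- every statement stands alone, and a graph is just a set
# of such statements.

triples = {
    ("ex:GradyBooch", "rdf:type", "ex:ComputerScientist"),
    ("ex:GradyBooch", "ex:coAuthored", "ex:UML"),
    ("ex:UML", "rdf:type", "ex:ModelingLanguage"),
}

def query(s=None, p=None, o=None):
    """Return every triple matching the given pattern (None = wildcard)."""
    return {
        (ts, tp, to)
        for (ts, tp, to) in triples
        if s in (None, ts) and p in (None, tp) and o in (None, to)
    }

# "What do we know about Grady Booch?"
for triple in sorted(query(s="ex:GradyBooch")):
    print(triple)
```

Because each statement is an independent triple, graphs from different sources can simply be merged as sets, with no document structure to reconcile – one way to read Michael’s point about the interpretation not ‘breaking’ the way it can in XML.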

My issue with ontologies is exactly the same issue I have with the current enterprise software providers… If you talk about mass adoption, as opposed to just specific domain adoption, for every extra entity – be it a class or data table – you decrease your adoption exponentially.   And, once you go up to higher levels, you shouldn’t assume you’re the only person that has a valid way of looking at the world, though you may be using the same data.  I think we’re saying the same thing…

Absolutely.  The interesting thing to say about the current enterprise software providers, I think, is that they have one model of the way to look at the world.   There are cases where companies have had to change the way they do business in order to adopt the enterprise software [currently available].

You have two choices: you either do it their way or else you spend a few million bucks and you do it their way anyhow.

I think that there is a possibility, with these Semantic Web technologies, of getting into more flexible uses of information and I kind of like that idea.

Over the last few years I’ve become increasingly interested in our ability to share information.  When you start talking about sharing it becomes really dangerous to have very complex, strictly defined semantics.  Because, like you said, other people might have a different interpretation of things.

But you want to nail some things down.  Understanding something about [the] information would give you a baseline for interoperating.  And then, you could do ‘better’ interoperation if you had a better definition of the meaning of the information.

I agree with you about understanding information.  But I think where most things fall to pieces – and this is also looking at business model languages and stuff – as soon as you get anywhere near processes with that information, it goes to hell pretty quickly. 

Exactly.  I spent a few years, at the beginning of the previous decade, working on a large Semantic Web research program funded by DARPA [Defense Advanced Research Projects Agency].  I was part of an effort to see if we could use ontological technologies to model web services.

Is that DAML and stuff like that?

Exactly; DAML, and DAML-S for services.  We very quickly got into process modeling; and those kinds of things get very difficult…

Very quickly.

Absolutely.  I think that’s the thing that still needs work.

The traditional approach to anything process-oriented just doesn’t work unless you have very tight coupling and a very controlled domain.  But I think there are a lot of different ways of trying to solve the same problem without having to get to that level.

I think that one of the things that is missing from the whole Semantic Web collection of specifications is this notion of action… a notion of behaviour.  It’s hard to model, but I think that we ought to work on that some more.

We [KimmiC/FlatWorld] have taken a more hybrid approach, so we use things like REST architecture, and a lot of stuff from the business world, in terms of authentication and authorisation. 

Sure.  I’m not in any way advocating the use of the WS-* collection of technologies. I’m not a big fan of those.

I’ve looked at all the SOAP stuff and there are a lot of problems… like business process deployment.  It is a nightmare to deploy these technologies.  It’s even more of a nightmare to load balance them.

Right.

Essentially, if you’re looking for dynamic relationships – be it in business or whatever – they’re just useless for that sort of thing.  They’re always designed around having control of a large domain space; this is especially true when it comes to deployment of applications.  I just think they’ve missed the point. 

I think the web is the best example of a redundant, massively-distributed application; and we need to look at it more as, “That’s the model,” and we have to work with it.

Absolutely.  I think that for 20 years there have been discussions about these sorts of ad hoc enterprises, or collections of smaller companies, being able to very quickly orchestrate themselves around a particular mission [purpose].  But I think that these technologies, just like you said, are probably not the right answer.

When you wrote your 2009 position paper you noted that, rather than languages, the biggest issues facing the uptake of the Semantic Web were: 1. selling the idea; and 2. a decent user interface.

Why did you feel that was the case then; and, has your opinion changed regarding these issues in the two+ years since you wrote your paper? 

Semantic Web technologies are well suited to situations where you cannot necessarily anticipate everything – say, about the conditions and context in which an application is used, or which kind of data an application might have available to it.  It is like saying that this is a technology for problems we are yet to articulate.  Sounds like a joke, but it isn’t, and the problem in ‘selling’ Semantic Web technologies is often about the fact that once a problem has been clearly articulated, there are many possible technologies that can be used to solve it.

The issue I have with user interfaces and the user experience is the following: Semantic Web technologies – or more generally, ‘ontological’ technologies – give us a way to represent information in a very expressive manner… that is, we can have rich models and representations of the world.  I feel that user interface technology has a hard time matching this expressiveness.  This issue is related to what I said earlier about not being able to anticipate all future situations; writing software that can handle unanticipated situations is hard.

All that said, I don’t like the term ‘Semantic Web applications’.  Users shouldn’t have to care, or need to know, that Semantic Web technologies were used.  These are just useful things in our toolbox when developing applications and services.

What are the key challenges that have to be solved to bring those two problems together?

I am really looking for new programming models and ways to add flexibility.  This is not only a technical problem, we also need to change how people think about software and application development.  I have no silver bullets here.

How do you see applications developing in the next few years, compared to the current environment, given you have mentioned that we have to shift our thinking towards applications that simply interact with data rather than ‘own and control’ their own data?

I think, again, this is about changing how people think about application development.  And, more specifically, I would like to see a shift towards data that carries with it some definition of its semantics.

This was one of the key ideas of the Semantic Web, that you could take some data, and if you did not understand it, there would be ‘clues’ in the data itself as to where to go to find what that data means.

As I see it, the semantics of some piece of data either come from the relationship this data has with other data – including some declarative, ‘machine-interpretable’ definition of this data, for example, an ontology – or are ‘hard-wired’ in the software that processes the data.  In my mind, the less we have the latter, and the more we have the former, the better.
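[A tiny editorial sketch of the ‘clues in the data’ idea Ora describes: every property name in a piece of data is a URI, and a consumer that doesn’t recognise a property can dereference that URI to find a machine-readable definition. The URIs, property names and the in-memory stand-in for the web below are all invented for illustration.]

```python
# "Follow your nose": data whose property names point at their own
# definitions, so meaning travels with the data rather than being
# hard-wired into the consuming application.

web = {  # toy stand-in for an HTTP GET of an ontology document
    "http://example.org/ont#homeTown": {
        "rdfs:label": "home town",
        "rdfs:range": "http://example.org/ont#City",
    },
}

data = {
    "@id": "http://example.org/people/ora",
    "http://example.org/ont#homeTown": "http://example.org/cities/helsinki",
}

def describe(prop_uri):
    """A consumer that doesn't recognise a property can look it up."""
    definition = web.get(prop_uri)
    return definition["rdfs:label"] if definition else "unknown property"

for prop in data:
    if prop != "@id":
        print(prop, "->", describe(prop))
```

The consumer needs no prior knowledge of `homeTown`: the property URI itself tells it where to go, which is the alternative Ora prefers to semantics ‘hard-wired’ in application code.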

In previous interviews you’ve noted that you feel users should have a say “in how they view information.”  Do you think that users should become involved in making the Semantic Web more ‘usable’? And if so, how?

I think users should demand more.  There needs to be a clear ‘market need’ for more flexible ways of interacting with information.  User experience is a challenge.

On this topic, I also want to point out how unhappy I am with the modern notion of an ‘app’.  Many apps I have seen tend to merely encapsulate information that would be much better offered through the Web, allowing inter-linking of different content, etc. It kind of goes with what I said earlier about openness…

There are a lot of people saying they can plug two systems together easily, but that almost always means at the data level.  It doesn’t really work once you start applying context on top of it.

I’d like to see a middle ground where we have partial interoperability between systems, because that’s how humans interact.

That’s something we’re looking at as well.  I view it like this: when I go through Europe, I can speak a little bit of German, a little bit of French. I’m not very good, but I have to have a minimal level of semantic understanding to get what I want: to get a beer.  I don’t have to understand the language completely, just enough, in context, to act on it.
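The ‘enough semantics to get a beer’ idea – partial rather than full interoperability – can be sketched as a toy field-mapping exercise.  The two systems, their field names and the record below are invented for illustration, not taken from any real deployment:

```python
# Two hypothetical systems, each understanding a different set of fields.
SYSTEM_A_FIELDS = {"customer_name", "order_total", "loyalty_tier"}
SYSTEM_B_FIELDS = {"customer_name", "order_total", "warehouse_bin"}

def partial_translate(record, source_fields, target_fields):
    """Pass through only the fields whose meaning both systems share."""
    shared = source_fields & target_fields
    return {k: v for k, v in record.items() if k in shared}

record = {"customer_name": "Ada", "order_total": 12.5, "loyalty_tier": "gold"}

# Enough shared meaning to act on, without full interoperability.
print(partial_translate(record, SYSTEM_A_FIELDS, SYSTEM_B_FIELDS))
```

The design choice mirrors the beer analogy: the receiving system acts on the fields it understands in context and simply ignores the rest, instead of demanding a complete, pre-negotiated schema.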

Speaking of acting on things… Ora, where are you going with semantics in the future?

That’s a good question. Right now I’m working on some problems of big data analytics.

With semantics?

Nokia is investing in large-scale analytics, so I’m in the middle of that right now.

I’m currently looking at how to tackle the problem of how to bootstrap behaviour.  Behaviour and notions of action are not well-tackled in the space of the Semantic Web, and I’d really like to get into bringing two information systems in contact with one another, and have them figure out how to interoperate.

That’s very ambitious.

Right.  And I’m not entirely sure if people understand that that’s an important question to tackle.

Oh, it’s an important question to tackle; it’s just more a question of… Again, you’re very far ahead of the game.

Well, I think that today, if you want to make systems A and B interoperate, it’s usually a large engineering undertaking.  So, it’s directly related to the question of separating information from applications…  you could pick the applications you like and take the information that you’re interested in and make something happen.  In terms of interoperating systems, right now we have a situation where we either have full interoperability, or we have nothing… we have no middle ground.

You can learn more about Ora via his website, blog and Twitter feed.

[Kim, Michael and Ora Skyped from their homes in Boston and Sydney.]

[This interview has been translated into the Serbo-Croatian language by Jovana Milutinovich of Webhostinggeeks.com]

What’s so Fab about Fab Labs? : The Innovation Interview with ‘Collaboriginal’ Peter Troxler

Peter Troxler: Capital I Interview Series – Number 12

As I was traversing the flat world, which is LinkedIn, I came across Peter Troxler’s fascinating profile.  There were many things that intrigued me and instigated my reaching out and inviting him to take part in the Innovation Interview Series.  In particular, his research at the intersection of business administration, society and technology along with his expertise in applying the Internet and Web 2.0 technology to support the implementation of management systems.

When it comes to hats, General Secretary at the International FabLab Association, owner/director at p&s culture net and owner of the research company Square One are only three of the more than a dozen he currently wears.  That said, it was Peter’s perspective as a serial enabler that prompted my invitation… that and his moniker, the ‘Collaboriginal’, earned due to his determination to enable and empower collaboration and innovation.

Peter, do you see a difference between ‘little i’ and ‘Capital I’ Innovation?

I have not made such a distinction so far and cannot think of a real need to differentiate the two.  However, I think there is a big difference between invention and innovation, and the terms are often confused — even in public dialogue and ‘innovation’ awards that pretend to award innovation, but often just give money to people who are inventors so they can start to sell their inventions.

Maybe it could help to explain what I understand when I say innovation:

  • innovation = putting a new idea into practice, continuously improving it interactively with the customers (or similar) beyond a singular prototype (and in a minimally profitable way, say ‘ramen profitable’)
  • invention = prototypical, singular realization of a new idea
  • new idea = a principle for a new product, service or practice (without actually realizing it, not even as a prototype)

All of these need not be ‘globally’ new;  innovation, in particular, can be completely local.

What do you see as the main barriers to the success of innovation?

  1. The sit-and-wait-mentality of many inventors, who think that a good idea alone (or maybe with a prototype) will convince ‘entrepreneurs’ to take it and run with it.
  2. The idea that an ‘innovation’ has to be global.
  3. The belief that every innovation has to have a website 😉
  4. The belief that people ‘have to’ buy into an innovative product or service just because it is an ‘innovation’ (and has got an award that proves it).
  5. The belief that innovation is finished when its results are put into practice (or that there is a predictable way those results will take off and develop).

How essential has innovation been in your career?

Innovation has played a key role in many professional activities I’ve been involved in — be it in the arts, academic research or business.  In many ways I’ve been involved in making innovation happen.

I’m not a serial entrepreneur; but I’d like to see myself as a serial innovation enabler.  My passion is, together with others, to put new ideas into practice and to grow them beyond singular prototypes.  When the innovation makes the transition to routine, I lose interest.

You’re one of the three founders of p&s culture net, can you tell me a bit about why you started it?

In the late nineties I started p&s culture net together with two friends in Switzerland.  We set out to investigate what the impact of the internet would be on literature.

It began with workshops and small events, and grew into quite a substantial business, doing quite large public events in Switzerland.  We work around literature and try to make literature accessible in new ways.

And who, typically, would be involved in your workshops?

We try to get a good mix of people including researchers, academics, philosophers, historians, artists and even engineers.  We use the workshops as a kind of ‘think tank’.  For instance, when we were investigating what the internet does to literature, literary production, and literary consumption, it was extremely helpful to have all these different types of people around the table.

I’m sure. And did your investigation come up with an answer as to what effect the internet has on literature?

No, not really.  When I think back on that particular series… it [the internet] just creates so many new ways to work with text.  And that text, and writing, are still very important and very relevant skills.

You are also the General Secretary of the International FabLab Association, which you’ve been involved with since 2007.  What is FabLab?

FabLab stands for Fabrication Laboratory.  It’s a concept that was developed at MIT by a physics professor, Neil Gershenfeld, about ten years ago while he was investigating how to create self-replicating matter.

Neil uses all sorts of machinery to do his experiments, and created the ‘How to Make Almost Anything’ course, which has been over-subscribed ever since it started.

People learn to use digital machinery like laser cutters, milling machines and 3D printers.  There’s a standard set of, relatively, simple machines that make up a FabLab.

FabLab image courtesy of Arnold Roosch

The set-up is relatively easy to use, so, about ten years ago Neil started to set up FabLabs in third-world countries, and in deprived areas in cities such as Boston, to give people instruments to play with and make their own stuff.

The idea took off, and suddenly everybody wanted to have a FabLab.  Currently there are over 70 FabLabs in the world… on almost every continent.  I think Australia and New Zealand are just waking up to the idea, but there are FabLabs all over Europe, in Africa, South America, Russia and a few in Japan.

How are FabLabs being used in developing nations in particular?

They are used to make everyday stuff.  There is a beautiful project in Africa… They buy lamps from China, which run on batteries and have conventional light bulbs in them.  The lamps are disassembled in the FabLab, the conventional light bulbs are replaced with LEDs, the batteries are replaced and solar cells are added.  Now the lamps can be charged by sunlight and are sold on.

This sounds like the implementation of innovation.

Yes.

You have spoken in the past about businesses exploring and using open source and open innovation.  How have you found businesses reacting to the idea?

Businesses are extremely nervous about it.  Business owners have been told, “You have to protect your ideas. It’s dangerous to share your ideas because anybody could pick them up, run with them and make big bucks, while you starve to death because you haven’t protected your IP.”

However, if you look at it more closely, the first thing you notice is that protecting your IP is extremely expensive and time consuming, and it distracts you from the real purpose of business, which is making money… there is so much time and money spent waiting for bureaucrats to process applications.

The other thing you notice is that there’s this axiom ‘IP protection helps innovation;’ so people think, “If there is no IP protection, there will be no innovation.”  There’s absolutely no proof of that.

There is no empirical evidence that IP protection helps to grow businesses, except probably in two sectors… one is a no brainer, the lawyers.  And the other, though I haven’t looked at it very closely, is the pharmaceutical sector.  If you get counterfeit medicine, which doesn’t do what it says on the tin, that’s an obvious problem.

Are you attempting to convince business that they should be exploring the ‘Open’ option?

I am indeed. I’m working with various people, from the industrial design corner of the world, to really look into the issue and find ways for designers to make a living in an open source context.

Square One seems to sit in a very interesting niche, at the intersection of business, administration, society and technology.  How would you use technologies such as Web 2.0 and 3.0 to support the implementation of management systems?

The intersection of business, administration, society and technology in the whole context of open source, open innovation… it’s huge!  It’s massive!  What I’m trying to do is break it down and apply it in very specific contexts.  Currently the main context I am working in is the FabLab context, because they refined everything.

FabLabs are socially relevant.  They are open to the general public and are, obviously, technology based.  But, I would say, half of the FabLabs existing right now are struggling to find sustainable business models.

They’re being set up with subsidies of some kind, which helps them run for a couple of years.  Since we have such massive growth in the number of labs – the number is doubling every 12 to 18 months – there is no real experience of the post-subsidy period.  Because many of the people who set up FabLabs are enthusiastic tech people, they don’t have the kind of business understanding that would enable them to set up such an animal to survive long term.  That’s the specific area of complication where I try to bring all those aspects together.

Where do you think the FabLabs movement will be in ten years?

That’s a very interesting question, because ten years is quite a long period if we look at the technology we’re dealing with.  It could be that by the time certain machines are cheaply available, the raw material won’t be.  It’s kind of hard to predict.

The other thing is that, currently, FabLabs are mainly either community based initiatives, or school/university based initiatives.  But, that said, we’re seeing things happening in France at the moment where large ‘teach-yourself’ chain stores are starting to jump on the train and say, “Hey! You know, we could attach a FabLab to our stores.”

What kind of stores?

DIY stores, for instance. You’d buy material, then go next door to the FabLab and build something. It makes complete sense.

FabLab image courtesy of Arnold Roosch

But then, on the other side, you’ve got the community based FabLabs who are thinking, “Whoa! This is not the way we think about FabLabs.”

The classic clash between corporate and communal.  Do you have an opinion as to which is a better fit?

I would have to imagine a world where both exist side by side.  Not everybody would go to a community-run FabLab and wish to have this type of community. But, it makes so much sense to produce a lot of stuff yourself.

Maybe I am oversimplifying, but it seems to me that you can go and brew your own beer… many people do.  Personally I prefer to go to the store and buy it, but that’s me.  I think there’s room for all of us who like beer to do whatever is most comfortable.  Here’s hoping that FabLabs become as ubiquitous as beer!

Speaking of which – and yes, I’m segueing from beer to thoughts of Holland – there is a FabLab opening today in Rotterdam, isn’t there?

Yes, and it’s happened really quickly [though not as quickly as Fablab Amersfoort, which was set up in 7 days – here’s how].  They got the financing to do it in May and the first iteration opened in September.  Today it opens for real.

To move this fast, we had to bring all sorts of concepts together: open source, the unconference, the possibilities and mentality of the internet – where sharing suddenly is much more easily achievable – and rapid prototyping. It’s a completely different approach to the more conventional control mentality approach.

It’s an entirely new ecosystem… it’s research AND development – rather than the more common research THEN development.

Yup.

Well, harkening back to my beer analogy… Cheers to that!

(Kim and Peter Skype’d from their homes in Sydney and Rotterdam.)

The NBN: Are We Bothered… and Should We Be?

KimmiC chats with Australian technology journalist, author and speaker Brad Howarth about the National Broadband Network (NBN) and business innovation.

Brad HowarthCapital I Interview Series – Number 11

Brad Howarth is a journalist, author and speaker with more than 15 years’ experience in roles which include marketing and technology editor at business magazine BRW, and technology writer for The Australian.  His first book, ‘Innovation and Emerging Markets’, was a study of entrepreneurial Australian technology companies and the process of commercialising technology globally.  ‘A Faster Future’, his second, is co-authored with Janelle Ledwidge.  In it they investigate the future of broadband applications and services and the impact these may have on business, society and individuals.

The recently published Australian Industry Group National CEO Report, ‘Business Investment in New Technologies’, noted that forty-five percent of responding CEOs, from a wide range of businesses, said they lacked the skills and capabilities necessary to take advantage of the NBN.  This has risen from twenty percent just three years ago.

What are your thoughts on this situation; and do you think that their solution of staff training and hiring will solve the problem?

The hiring and training of staff will only solve the problem if we also train our leaders to recognise the problem in the first place and instil in them both a desire and capability to do something about it.  I’m regularly receiving feedback that suggests that Australian businesses are struggling to imagine their future in a high-speed broadband world, and that’s almost regardless of whether the NBN is completed or not.  All too often short term issues such as the economy or carbon tax are getting in the way.

It’s not a great outcome to focus all your attention on managing the impact of the carbon tax, only to find your business has been undermined by new competitors coming in on the Internet, or that too many of your customers have changed behaviour and no longer need you.  Australian businesses need to spend a lot more time thinking about their broadband futures – and there is huge scope for innovation once they start doing so.

You wrote your second book, ‘A Faster Future’, at a very interesting time last year, as there was a great amount of debate around broadband connectivity.

The original idea was conceived not long after the announcement of the Fibre to the Home National Broadband Network model.  It was initially written to explore the uses of high-speed broadband and then evolved into a much broader picture.

It was very interesting to watch the debate going on in Australia, about the need – or otherwise – for high speed connection.

If you go into rural areas, the debate goes away very quickly.  The debate has been driven primarily along party-political lines, I suspect; so it has more to do with posturing.

Are you saying that rurally it’s taken as a given that the NBN is coming, and it will be a positive thing?

Most regional councillors have fallen over themselves to get their particular part of the world hooked up to the broadband network.  Now, whether they know exactly what to do when they get it is a little bit harder to say.  But certainly – given that so much of what we take for granted in the metropolitan regions isn’t available in regional areas – they’ll get benefits from simply being brought up to speed.

Do you see much potential for (onshore) Australian companies to get involved in the actual building of the NBN and share in the potential profits; or will it mainly be offshore internationals who come in and make the most out of it?

In phase one, which is equipment supply and roll-out, you’re obviously seeing contracts awarded to Alcatel-Lucent and others for a lot of developments on the network…  the fibre cable itself.  Mainly because Australian companies don’t make that stuff.  To the best of my knowledge, we don’t have a company here that makes GPON equipment, so there is really no choice there.

When it comes to the deployment, I think you’ll find there’s a lot of Australian involvement in the roll-out and deployment of the network itself.  And certainly it’s going to employ a huge number of people.

Think of the phases that come over the top of that, and that’s where there’s a lot more potential for Australian businesses to get involved.  Australian businesses in the entrepreneurial technology space will build the applications and services that run over the top of the network.  And of course there is the opportunity for Australian businesses of any variety to create service offerings that utilise the network.

It may be that initially, of the 30-odd billion dollars spent, a part may go to foreign manufacturers; but that’s really not a significant component of the actual network itself.

It has been suggested that, regarding the NBN and the Government’s National Digital Economy Strategy 2020, many of the initiatives proposed could already be undertaken successfully with current infrastructure.  What is your opinion?

From that perspective, it is already possible to drive from Sydney to Melbourne, so clearly building airports is a waste of money.  Restricting investment to the goals of the Digital Economy Strategy represents short term thinking.  This is about investing for the long term.

In ‘A Faster Future’ you say that it may take a few years for a killer app for broadband to show up, but as things move so much faster now, could it not be right around the corner?

It could.  Actually, it is probably already here, we just don’t know it yet.

Where are you hoping to find the next true innovation?

The area that I think is obviously apparent today is interface technology – devices like the Kinect, the Wii, and so on – which enable us to interact with machinery through mechanisms other than our fingers.  That’s already there, but I think it’s got a long way to go.

You’ve seen derivations of that… the whole gesture based computing paradigm that’s emerging through the iPad and various other capture devices.  It is basically building a whole new language.

I think if you’re looking for ‘Capital I’ Innovation, look at a company like Emotiv and the work it’s doing with the EPOC headset and using brainwaves to become a control and measurement mechanism.

That’s Tan Le, isn’t it?

Yeah.  I think what she and the team are doing probably does represent ‘Capital I’ innovation.  Yes, it is an Innovation within a stream, but then the invention of the automobile was an extension of transport; the invention of electricity was an extension of power.  So, I think that’s definitely one.  I’m not really certain that there are too many others out there at the moment.

If you take broadband internet, I think there’s so much ‘small i’ innovation to come on that platform.  We don’t need to build faster-than-light telecommunication systems based on paired photon matching because we’ve got a truck-load of work to do with the technology that we have already developed.

But I think you might see some ‘Capital I’ innovation eventually come off of broadband… you saw a glimmer of that in Second Life.  Combine the notions of high definition with spatial realism in terms of video and audio, and possibly even haptic sensory input, then you’ve got a new platform there.

Once we’ve moved to a system where you can do direct stimulation of the brain itself, so you can actually start inputting data directly into the brain and bypassing the senses – going past the nose, eyes and ears, maybe even the fingers – in terms of haptic responses, then we’ll find a new platform for ‘Capital I’ innovation.  The Emotiv stuff is going in another direction.  It’s taking what’s in the brain and putting it back out to the world.  I think we’re only at the earliest stages of being able to see anything there.

But there is so much work to be done with what we’ve already got.  You could stop fundamental research tomorrow and innovation would continue for a very long period of time – although I wouldn’t advocate that.  We haven’t even really harnessed all the capabilities of electricity yet.

Australia has a proud tradition of adapting, some may say Innovating, healthcare to match the geographical barriers and vast remoteness of the continent.  Since 1928 the Royal Flying Doctors Service has serviced those living in rural, remote and regional areas of Australia, utilising the technologies of motorised flight and radio.  Is there enough emphasis on continuing this legacy via the potential of Broadband?

There is a lot, but there should be a lot more.  Actually, that’s one of the areas where the government’s promotion of the National Broadband Network is incredibly strong.  I think the economic argument for the NBN could be made almost entirely on the benefits to the health sector.

That said, I get the feeling that you may think innovation is an overused term.

Absolutely.  I’ve written about innovation for ages.  I put innovation in the title of my first book; but I’m sick of the word.  Not so much the word itself, just sick of how it’s being used.  We seem to have these endless analytic discussions about the need for innovation, we have innovation conferences… it’s just endless.  I think what disappoints me about innovation is the amount of time we spend talking about it as opposed to the amount of time we spend doing it.

Passive innovation, sitting around navel-gazing… it’s something that we probably focus too much on as opposed to finding practical methods for implementing Innovation.  I want companies to actually get on with it.  We need to start injecting the theory into business practice.

You can learn more about Brad, ‘A Faster Future’ and his other writing via his website and follow him on Twitter.