Posts filed under ‘strategy’

The next best thing to the next best thing

From the perspective of a person keen to see identity federation become the norm, a single federation protocol is the best thing. That allows a focus on the real challenges of federation- the business and process challenges. It relegates arcane discussions about SAML and WS-Federation to the few people who really want to talk about the nuts and bolts.

In reality, that’s probably unachievable. If nothing else, that was the biggest lesson from the ODF vs. OOXML saga.

The next best thing is true interoperability between protocols with standard products supporting multiple protocols out of the box. This doesn’t take away all the costs, complexity, and risks but is still an acceptable outcome.

The next best thing to the next best thing is a major vendor promising to move towards the next best thing. To that end, Microsoft’s announcement that the beta version of Geneva will not only support SAML 2.0 as a token format but also as a single sign-on protocol is very welcome. Geneva is Microsoft’s future identity platform, replacing ADFS (Active Directory Federation Services).

Specifically, Geneva will support the SAML 2.0 Lite/Web SSO profile. Happily enough, it will also support the US Government’s GSA profile, which seems to be an attractive offering for US Government agencies.
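For readers curious about the plumbing: in the Web SSO profile’s common HTTP-Redirect binding, the service provider deflates, base64-encodes, and URL-encodes its authentication request before bouncing the browser to the identity provider. Here is a minimal Python sketch of that encoding step (the endpoint URL and request XML are placeholders, and a real request would also be signed and carry RelayState):

```python
import base64
import urllib.parse
import zlib


def redirect_url(idp_sso_url: str, authn_request_xml: str) -> str:
    """Encode a SAML AuthnRequest for the HTTP-Redirect binding:
    raw DEFLATE, then base64, then URL-encode as the SAMLRequest
    query parameter."""
    # The binding requires raw DEFLATE (no zlib header or checksum),
    # hence the negative wbits value.
    compressor = zlib.compressobj(9, zlib.DEFLATED, -15)
    deflated = compressor.compress(authn_request_xml.encode("utf-8"))
    deflated += compressor.flush()
    encoded = base64.b64encode(deflated).decode("ascii")
    query = urllib.parse.urlencode({"SAMLRequest": encoded})
    return f"{idp_sso_url}?{query}"
```

The identity provider simply reverses the steps- URL-decode, base64-decode, then inflate- to recover the XML.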

So, come 2010, or however long the usual announcement-to-real-world-deployment cycle takes, deployers of federation can increasingly focus on benefiting from identity portability rather than on the underlying technical challenges.


October 30, 2008 at 12:11 am

Semantic Web & OAuth

I must confess that for a long time I never got this semantic web thing. Now, with the zeal of the recently converted, I see possibilities everywhere.

Part of the reason it took time was an automatic reaction against something being called Web 3.0 (or is it 4.0?). I’m still trying to really understand Web 2.0. Learning about the next big thing could always wait.

Another reason was how early enthusiasts described the semantic web. Calling it the machine readable web doesn’t even begin to make sense.

As far back as 1999, Tim Berners-Lee in Weaving the Web said, “I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize.”

Now that’s visionary. Even today, I’m barely beginning to understand that vision.

Thankfully, and perhaps ironically, the very Web 2.0 service Slideshare has some presentations that explain things in a way that we mere mortals can understand. My first picks are the two presentations from Freek Bijl- the first one covers the basics and the second one the technologies. Another one is from Marta Strikland called The Evolution of Web 3.0. This has a great Web 3.0 Meme Map on slide 15 and a comparative list of Web 2.0 and 3.0 on slide 27.

Being more of a graphics person, I had my final aha moment with the one below, thanks to Project 10x (also worth looking at is the original Semantic Social Computing presentation from Mills Davis).

With the semantic web also comes a whole new set of acronyms. A starter list is RDF, SPARQL, SWRL, XFN, OWL, and OAuth. In particular, OAuth, the authentication-related one, is interesting.

OAuth is described as “An open protocol to allow secure API authentication in a simple and standard method from desktop and web applications.” The basic promise is attractive- access to data while still protecting the account credentials. That has the advantage of not requiring people to give up their usernames and passwords to get access to their data. OAuth is a much-improved version of closed proprietary protocols such as Flickr’s API. Importantly, it has support for non-browser access such as desktop applications and mobile services.
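The mechanics behind that promise: instead of sending a password, an OAuth (1.0) client signs every request with HMAC-SHA1 over a normalised “signature base string” built from the HTTP method, URL, and parameters. A rough Python sketch of just the signing step (the parameter values and secrets below are made up for illustration; real requests also include oauth_consumer_key, oauth_timestamp, oauth_nonce, and so on):

```python
import base64
import hashlib
import hmac
import urllib.parse


def _enc(value: str) -> str:
    # OAuth 1.0 mandates strict RFC 3986 percent-encoding.
    return urllib.parse.quote(value, safe="-._~")


def sign_request(method: str, url: str, params: dict,
                 consumer_secret: str, token_secret: str = "") -> str:
    """Compute an OAuth 1.0-style HMAC-SHA1 request signature.
    Simplified: assumes single-valued, already-collected parameters."""
    # Sort and percent-encode parameters into the normalised string.
    normalised = "&".join(
        f"{_enc(k)}={_enc(v)}" for k, v in sorted(params.items())
    )
    base_string = "&".join([method.upper(), _enc(url), _enc(normalised)])
    # The signing key concatenates the two secrets; the password
    # itself never travels with the request.
    key = f"{_enc(consumer_secret)}&{_enc(token_secret)}"
    digest = hmac.new(key.encode("utf-8"), base_string.encode("utf-8"),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")
```

The server, which also holds the secrets, recomputes the same signature and rejects the request if it doesn’t match.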

So, what are the practical applications of the semantic web? Within the government space, a clear winner is being able to automate the collection of data from multiple government websites and search, filter, or otherwise manipulate the results.

As a simple example, if all government websites had the contact details of their media contact using hCard, it would be easy to have an always up-to-date list that can be displayed, indexed, searched, loaded into an address book, mapped, etc. Even as a relatively simple first move, this would be a big step forward for government.
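Because hCard is nothing more than agreed-upon HTML class names, harvesting such details needs only an ordinary HTML parser. A toy Python sketch (real microformat parsers handle nested properties, void elements, the value-class pattern, and much more):

```python
from html.parser import HTMLParser


class HCardParser(HTMLParser):
    """Minimal hCard extractor: collects the text of any element whose
    class list contains a recognised microformat property name.
    Simplified- assumes well-nested tags with closing tags present."""

    PROPS = {"fn", "org", "tel", "email"}

    def __init__(self):
        super().__init__()
        self.open_props = []  # one entry per currently open element
        self.card = {}

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        self.open_props.append([c for c in classes if c in self.PROPS])

    def handle_endtag(self, tag):
        if self.open_props:
            self.open_props.pop()

    def handle_data(self, data):
        # Credit text to every hCard property element we are inside.
        for props in self.open_props:
            for prop in props:
                self.card[prop] = self.card.get(prop, "") + data
```

Run over a media-contacts page, `parser.card` ends up as a small dictionary of contact properties ready to index, search, or map.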

September 3, 2008 at 11:43 pm

Just what is ‘identity’?

Although ‘identity’ is a term that most of us find intuitively easy to define, it turns out that a precise and generally accepted definition of it is far from easy to pin down.

The first question of course is whether it’s even worth the effort to try and get a precise definition. I think the answer is ‘yes’ for several reasons.

First, identity involves personal information and people expect that government collects and holds their personal information in a secure manner with their privacy appropriately protected.

Secondly, people need to prove who they are many times during a day. While typically people only need to do that with government infrequently, for a government agency it is of critical everyday importance to have confidence in the identity of the person they are dealing with. For example, an agency needs to be sure that government services are being delivered to the right person. Another example is ensuring that the right person has access to their own personal information such as health records or tax records.

On the one hand, people want convenient access to their information and government services. On the other hand, government as a whole has to manage the identity-related risks and ensure that the taxpayer’s money is spent well.

Finally, consider this quote from a recent report by Sir James Crosby to the UK Government, “… those countries with the most effective ID assurance systems and infrastructure will enjoy economic and social advantage, and those without will miss an opportunity. There is a clear virtuous circle. The ease and confidence with which individuals can assert their identity improves economic efficiency and social cohesion…”.

Looking around, both in New Zealand and overseas, we saw that most of the focus is on ‘digital identity’ and ‘user-centric identity’. Also, ‘identity management’ is typically defined in technology terms such as ‘authentication’ and ‘authorisation’. And yet, all of these still don’t answer the fundamental question of just what ‘identity’ is in the first place.

To help us get a better insight into the thinking of the academic world and the approaches taken in some other countries, we turned to Victoria University of Wellington. Professor Miriam Lips, with the help of her student Chiky Pang, has now completed her report Identity Management in Information Age Government (PDF, 557 KB) and we have published it on the e-government website.

It turns out that our question has a variety of answers. However, the report does validate our current approach: one useful way to look at identity is to consider that people have a single, unique identity but many context-dependent partial identities or personas. The resulting model is more of an onion than a straight line, so that operating at the outer layers of the onion may not have any connection at all with the unique core.

Another interesting insight from the report is the move to an informational definition of identity from a document-based definition. The impact of the Information Age is to make it increasingly necessary for governments to consider identity information- its collection, verification, storage, maintenance, and disposal- rather than just the issue and use of identity documents.

As we look at these issues in finer and finer detail, it remains important to not lose sight of the basics. Such as, people own and control their own identity while government’s role is to manage their identity information well. And, the need to put theory into practice.

So that in the future, when Bill and Jessica want to return home to New Zealand, they have one less thing to worry about.

[Original post at]

July 9, 2008 at 7:52 pm

Authenticating the Queen’s subjects

I’m just back from attending eGovernment 2008 in Canberra. For me, the big draw was an opportunity to attend a three hour workshop focussed on the UK’s Government Gateway. I sure wasn’t disappointed- the insights into the Government Gateway were quite an eye opener.

Attending the conference also led me to reflect on how online authentication is working for the Queen’s subjects in the UK, Australia, and New Zealand. It’s quite fascinating how each reflects a distinct approach and is very much a product of its time.

First, Australia. Still very PKI focussed, as in standard X.509 certs in the user’s computer. There are some good intentions from the federal policy body AGIMO (Australian Government Information Management Office) to move on to solutions that work for people (not computers) but the mindset of the average government official is definitely digital certs.

A good example of this focus is the success of VANguard. VANguard’s authentication service is probably best described as an authentication broker whose main function is to allow for interoperability of digital certs issued by various CAs. This is a good step so that businesses (it’s mostly business-focussed) can use the same digital cert with multiple RPs. It’s a back-end hub so that various front-ends and portals, such as bizgate in South Australia, can draw on its functionality. Still, it has all the limitations inherent in the old PKI designs.

It’ll be interesting to see how AGIMO’s proposed National e-Authentication Framework will differ from their existing AGAF (Australian Government e-Authentication Framework) which is separate for businesses and individuals.

Back to the UK’s Government Gateway. From the outside, so much of the focus has been on the UK’s plans for a national identity card that people, including me, can’t distinguish the good stuff they have done and are continuing to do in the online authentication space from the bad. Jim Purves, Head of Product Strategy in the Cabinet Office gave terrific insights into the chequered history of the Gateway as well as plans going forward.

The Gateway is very privacy-protective, very focussed on providing authentication and SSO for the UK Government’s online services. They are introducing SAML 2 soon but that also has the downside of continued support for all the current protocols. They’ve had some significant funding challenges in the past but now have “strategic investors” from within government so the future is bright. Trust and confidence in the Gateway is at an all-time high.

Purely speculative on my part, but I think they’ve got a big cloud on the horizon- when the national identity card folks come calling. That could potentially lead to a fundamental change in approach. That’s the unfortunate steamrolling impact of the national identity card. It will also be interesting to see how they handle pan-European interoperability but, with a strong Liberty Alliance foundation, I imagine they are well placed to handle that.

So, how does NZ stack up? The proper comparison is with the GLS or Government Logon Service (which will be re-branded igovt later this year). There’s no doubt that the GLS is the most privacy-protective of the lot and has all the right moving bits.

Once the IVS or Identity Verification Service and then GOAAMS or Government Online Attribute Assertion Meta System is added to igovt, then it’s a whole new ballgame for NZ.

But, there is clearly one area that the GLS should look at- adding a web services (ID-WSF) capability in addition to the current browser re-direct (ID-FF). That will provide many new opportunities off the same infrastructure, such as acting as an authenticating receiver for XML messages. The UK’s Government Gateway currently does that for all electronic tax filings direct from standard tax and accounting packages.

All in all, interesting times and much thinking…

July 2, 2008 at 11:45 pm

Freeing the cyber seas

Thoughts of war have been on my mind recently. The seduction of using force to achieve just outcomes. The futility of war, in many cases, failing to make a lasting difference in addressing the root cause.

The US had Memorial Day, a day of remembrance for military men and women who laid down their lives. Over here, NZ has Tribute08, a time for the country to say sorry to our Vietnam Vets and welcome them home after decades.

The price of war shows up in various ways, with neither side spared. An example is the 100+ US soldiers who commit suicide each year. Or, the continuing unwillingness in NZ to really face up to the damage that Agent Orange continues to do to Kiwi Vietnam Vets and their families.

That’s the mindset with which I read the article, Freedom of the Cyber Seas, recently.

It takes us back to the late 18th century, when the Barbary States ruled the Mediterranean- seizing cargo from those vessels not protected by the European powers; extorting ransom from those that had not paid the ‘protection fee.’ For the newly independent America, the policy was to appease the pirates. By 1786, Barbary extortion demands totalled $1 million- one-tenth of the U.S. government’s entire budget at the time.

Thomas Jefferson was a proponent of Dutch jurist Hugo Grotius’ Mare Liberum or “free seas” doctrine, published in 1609. Once he became President in 1801, true to his words, he sent in a group of American warships. Four years later, culminating in the Battle of Derna, the Barbary States were defeated and “free access to the world’s oceans a fundamental component of U.S. sovereignty” was established.

The authors’ purpose is of course not to give us a history lesson. Rather, it is to draw a parallel with “a new version of the high seas–the cyber seas” that threatens US military and economic interests. They call on the US to abandon the policy of appeasement to keep data flowing through global networks without hindrance.

Fortunately, they aren’t advocating what the US Air Force does, “America needs a network that can project power by building a robot network (botnet)… America needs the ability to carpet bomb in cyberspace to create the deterrent we lack.” They thankfully think that respecting international law is a good thing and recommend “policies, legal frameworks and enforcement mechanisms for Internet commerce and communications.”

Their plan is however not without a hard edge. Inspired by the US war on drugs, “the president also must charge an appropriate federal organization with the charter of patrolling the cyber seas–issuing challenges where necessary and taking proactive defensive action to disrupt organized threats. This organization must work closely with the law enforcement and intelligence communities to identify bad actors and devise strategies to exploit the vulnerabilities associated with online criminal activity.”

Even though this is a very US-centric view of the world, it does raise some interesting thoughts and parallels. What is the world going to do about the modern-day pirates? What is the Internet equivalent of the war with the Barbary States (today’s Russia and Eastern Europe)?

And, finally, the sobering thought that piracy on the high seas was not wiped out by a US victory in the Battle of Derna. Far from it, as anyone familiar with piracy in the Malacca Straits knows.

So, what are we going to do? And will there be a lasting solution?

June 1, 2008 at 10:28 pm

Why igovt?

For some time now, we’ve been aware of a paradox: we are building and operating user-centric services but use government-centric language to describe them. The launch of the igovt website is a small yet important step towards changing that.

Take the Government Logon Service (GLS) as an example. According to our website, which is intended for an audience of government agencies, “In a nutshell, the GLS is an all-of-government shared service to manage the logon process for online services of participating agencies.”

The very name, description, and use of a Three Letter Acronym are so government-centric. What does an average person, say a student who just wants to check his/her account online, make of this? Do we really want to try and explain to people what a “logon” is?

There is of course logic in using government-centric language, especially in the early days of a new service for which there are few, if any, precedents and mental models. Describing as accurately as possible what a service does from a functional perspective allows for precision. It helps external experts and interest groups get an in-depth understanding of what the service does and, sometimes more importantly, what it doesn’t.

But it is more than choice of language alone. It’s also about perspective.

Protecting privacy has been a major driver for the all-of-government authentication services. An important way of designing in privacy is the separation of who a person is (identity) from what they do online (activities) so that data aggregation and building profiles of people aren’t possible. Two different government departments operate two different services based on their respective strengths.
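One common technique for enforcing that separation is to hand each agency a different opaque identifier for the same person, so that no two agencies can match their records against each other. A Python sketch of the general idea (the HMAC construction and key below are illustrative assumptions, not the actual design of the services):

```python
import hashlib
import hmac

# Illustrative only: in practice such a key would be held solely by
# the logon service, ideally in dedicated secure hardware.
SERVICE_KEY = b"secret-held-only-by-the-logon-service"


def agency_pseudonym(internal_user_id: str, agency_id: str) -> str:
    """Derive a stable, agency-specific pseudonym. The same person gets
    the same identifier at a given agency every time, but identifiers
    issued to different agencies cannot be correlated without the key."""
    message = f"{internal_user_id}|{agency_id}".encode("utf-8")
    return hmac.new(SERVICE_KEY, message, hashlib.sha256).hexdigest()
```

Only the holder of the service key can link the pseudonyms back together- which is exactly the point.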

This world-leading approach has been highly acclaimed by privacy experts. Yet, from the view of a person or organisation interested in getting better and quicker government services, it just means more complexity that they have to try and understand and overcome to get to what they are really interested in- the service they want.

The second issue therefore is that people don’t want to integrate and coordinate government’s services; they want government to do that. This desire is reflected at a strategic level in the Development Goals for the State Services. At an everyday level, it means that we had to find a way for our privacy-protective design to be presented to people as a single, integrated online service without diluting the design itself.

And, it was apparent that the time to act was now, before the Identity Verification Service was launched and before future authentication services further increased complexity.

The result is igovt. It is not “just another brand” but, over time, will represent a significant shift. A shift to using user-centric language; a shift to government integrating multiple online services from multiple government agencies for people without any dilution of security and privacy protection; a shift to making it easier and more convenient for people and organisations to get government’s services.

Though there are many models we can learn from, there aren’t any tried and trusted models that we can simply adopt. It is therefore neither possible nor appropriate to try and make the shift in one giant leap. Instead, it’s more of a journey from inside-out thinking to outside-in, learning along the way.

The next step in this journey is to re-brand and re-describe GLS as the first igovt service.

[Original post at]

April 23, 2008 at 10:02 pm

Me, My Spouse and the Internet

It’s become a bit of a worn cliché to say that the Internet is changing everything. Many things are obvious- from the read-write web to social networking to online transacting.

But there are also less obvious, more tectonic shifts happening. These are slow societal shifts that will ultimately change the shape of society itself. These deep changes are not readily apparent amongst the constant shrillness of everyday headlines. Nevertheless, they are happening- every day, all the time, in imperceptible increments- leading to fundamental shifts stretching over years.

So it was with interest (and with a vested interest) that I read the survey results from the UK’s Oxford Internet Institute, part of a project called Me, My Spouse and the Internet. As the Institute’s Director said, “This study is a dramatic illustration of the potential for the Internet to reconfigure social relationships.”

The results from the study show the role played by the Internet in the relationships of a representative sample of over 2,000 married Internet users in the UK. Some highlights include:

1. 20% of married Internet users admitted to reading their partner’s emails and text messages; 13% to having checked their partner’s browser history.

2. 6% of married Internet users first met their partner online. Just over a third of these were through an online dating site. People meeting future partners online had greater education and age gaps.

3. Face-to-face communication was (still) the most reported way for married Internet users to discuss personal matters and resolve problems but other channels were also used, including text messaging (27% of users), and email (14% of users).

4. Disclosing a partner’s intimate details and other shady online activities got a big thumbs down from partners.

Hmmm… there doesn’t seem to be anything about what married Internet users think about their partner’s blogging activity yet. Or whether there are any blogging widows out there. That’s a sign for me to move on…

April 9, 2008 at 11:21 pm


This blog is no longer updated. See the About page for more info. I'm currently active on Twitter.
