Editor: Martin Simamora, S.IP | Martin Simamora Press

Monday, 4 July 2011

World Economic Forum proposes new measurements for e-gov

Every two years, governments around the world wait for the UN e-government rankings to see how their efforts to make progress in e-government compare with others. Earlier this month, the World Economic Forum (WEF) released a report called The Future of Government, which looked at the different factors used by each measurement instrument and identified some shortcomings.

In addition to the UN report, there are rankings produced by Cap Gemini for the [European Commission](http://www.capgemini.com/news-and-events/news/continued-improvement-in-european-egovernment-services/) and by Waseda University. Earlier annual surveys conducted by Accenture and Brown University have now been discontinued.
Three extracts from the WEF report illustrate its general conclusions in relation to the current rankings:

“Measures and indicators developed in the 1990s for e-government readiness and e-government do not sufficiently reflect the new realities.”

“The new dynamics of global competition, environmental challenges, financial reform and emerging global norms regarding privacy, surveillance, cyber-security and more require the development of measures and indicators that reflect the realities of networked governance.”

“The Council recommends that measures be developed to reflect and support networked governance, citizen engagement, innovation, agility and other dimensions of the future of government.”

For the measurement of e-government progress, the report concludes that the emphasis should shift from the “supply side” measures to ones that more accurately reflect the citizen experience. None of the current surveys use criteria that measure citizen satisfaction, and the report highlights three tools that are widely used to measure satisfaction – web analytics, customer views and customer experience replication. These are being applied by some governments, but there is not yet any international consensus on how these measures could be applied.
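To make the “demand side” idea concrete, here is a minimal, purely hypothetical sketch of how web analytics data might be folded into a citizen-experience indicator. The service name, field names and the equal weighting are illustrative assumptions of mine, not part of the WEF report or any government's methodology.

```python
# Hypothetical sketch: turning web analytics into a simple demand-side indicator.
# All names, figures and weights are invented for illustration.
from dataclasses import dataclass
from typing import List


@dataclass
class ServiceStats:
    name: str
    sessions: int                   # visits to the online service
    completions: int                # visits that finished the transaction
    satisfaction_votes: List[int]   # post-transaction ratings on a 1-5 scale


def citizen_experience_score(stats: ServiceStats) -> float:
    """Blend task-completion rate with average user rating, both on a 0-1 scale."""
    completion_rate = stats.completions / stats.sessions if stats.sessions else 0.0
    avg_rating = (sum(stats.satisfaction_votes) / len(stats.satisfaction_votes) / 5
                  if stats.satisfaction_votes else 0.0)
    # Equal weighting is an arbitrary illustrative choice.
    return 0.5 * completion_rate + 0.5 * avg_rating


passport_renewal = ServiceStats("passport renewal", sessions=12000,
                                completions=9100,
                                satisfaction_votes=[5, 4, 4, 3, 5])
print(f"{passport_renewal.name}: {citizen_experience_score(passport_renewal):.2f}")
```

Even a toy measure like this shows why international consensus is hard: every government would need to instrument its services the same way and agree on how ratings and completion rates are combined.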

These conclusions build on discussions that have been taking place over the last year both in the OECD and the World Bank.

What is the purpose of surveys that measure e-government progress? They respond to a basic human need to know how well we are doing compared to others. They also highlight countries that are successful and can serve as examples of good practice. Each survey applies its own methodology to a set of data collected through observation and inspection of e-government websites.
But as the report notes, they have limitations. The direct relevance of telecommunications infrastructure to e-government performance is less than it was when the UN survey was first published. This can create anomalies, where the ranking of countries with relatively low scores for e-services is boosted by high uptake of PCs, internet, telephone, mobile and broadband.
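The arithmetic behind that anomaly is easy to see in a toy composite index, loosely in the spirit of the equal-weighted composites the rankings use. The country figures and the three sub-index names below are invented for illustration, not taken from any actual survey.

```python
# Illustrative arithmetic only: a toy equal-weighted composite index.
# Sub-index names, weights and country figures are assumptions for this sketch.

def composite(online_services: float, telecom_infra: float, human_capital: float) -> float:
    """Simple average of three normalised sub-indices (each on a 0-1 scale)."""
    return (online_services + telecom_infra + human_capital) / 3


# Country A: weak online services, but near-universal connectivity and devices.
# Country B: stronger online services, weaker infrastructure.
country_a = composite(online_services=0.40, telecom_infra=0.95, human_capital=0.90)
country_b = composite(online_services=0.70, telecom_infra=0.45, human_capital=0.90)

print(f"Country A: {country_a:.2f}")  # 0.75
print(f"Country B: {country_b:.2f}")  # 0.68 - ranks below A despite better e-services
```

In this sketch, Country A outranks Country B on the composite even though its actual e-services are weaker, which is exactly the distortion the report is pointing at.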

The need for longitudinal continuity constrains the extent to which these instruments can adapt to the changing environment. All recognise their limitations and the opportunities to improve, and are adapting, looking for a range of new indicators to measure mobile government services, sustainability, uptake of services, and specific e-services relating to life events.

I would add another dimension that came out of a discussion last month with the project leader of the EU survey - the speed of e-government progress. How can the experience of one government be used to accelerate the speed of implementation in another? Too often we focus on the differences between a case study and our own situation; far better to take the lessons and apply them – avoid making the same mistakes, and implement service innovation and e-government transformation faster. The ability to learn from the work of others will be a factor that separates out the successful government leaders in the future.

It is clear that governments that focus on improving the citizen experience will deliver better services. Will this lead to improved rankings? That will require the rankings themselves to adapt, and the demand-side measures suggested by the WEF report are much more difficult to collect and to standardise across diverse countries and cultures.
When you are faced with your country's results, it is natural to be proud when your efforts have produced an improved ranking; if your ranking has fallen, it is tempting to scrutinise the methodology and identify reasons why some of the measures misrepresent your effort.

In fact, the real value of the surveys lies not in the headline rankings but in the analysis and assessment of what has been successful in individual countries. That is where you should look to identify how you can improve service delivery to citizens. As government officials, our mission is not to score well on league tables; it is to provide the highest possible quality of services to citizens and businesses.

Source: futuregov.asia
