Towards Automated eGovernment Monitoring

Paper A - Benchmarking e-government. A comparative review of three international benchmarking studies.

Author: Morten Goodwin

This chapter was originally published at the Third International Conference on Digital Society 2009. Please see the original source for the complete paper.1

  

Original paper authors: Lasse Berntzen and Morten Goodwin Olsen

Lasse Berntzen is with Vestfold University College, Faculty of Social Science, Tonsberg, Norway, email: lasse.berntzen@hive.no

Morten Goodwin Olsen is with Tingtun AS, Kirkekleiva 1, 4790 Lillesand, Norway, email: morten.goodwin@tingtun.no

Abstract

This paper makes a range of comparisons between eGovernment developments and performance worldwide. In order to make such comparisons, it is necessary to use a set of indicators. This paper examines the evolution of indicators used by three widely referenced international e-government studies, from the early days of e-government benchmarking until today. Some critical remarks related to the current state-of-the-art are given. The authors conclude that all three studies have their strengths and weaknesses, and propose automatic assessment of e-government services as a potential solution to some of the problems experienced by current benchmarking studies.

Keywords: e-government, benchmarking, indicators

Introduction

Electronic government is the use of technology to provide electronic services to citizens, businesses and organizations. Through such services, users can interact with government independent of time or physical location. For the government, provision of electronic services is effective, since most users serve themselves.

As electronic government has developed, benchmarking studies have been initiated to show how different countries compare to each other.

Benchmarking is a technique for comparing e-government performance, and is normally based on a set of indicators. Such indicators are used to calculate some kind of performance index. The performance index can be used to rank different governments or government agencies against each other.

A framework is a set of assumptions, concepts, values, and practices that constitutes a way of viewing reality. A framework may be used to derive a set of indicators suitable for assessment and evaluation. Figure 1 shows one popular framework for e-government service maturity [1].

The vertical axis represents complexity, while the horizontal axis represents level of integration. The boxes represent different levels of maturity: catalogue level, transaction level, vertical integration level and horizontal integration level. This framework can be used to assess the maturity of different e-government services.

Furthermore, benchmarking can help e-government development, by drawing attention to best practice elsewhere. The results of such benchmarking are often used in e-government strategy and planning processes.

Figure 1. Example of e-government maturity framework.

 

Research scope and related research

The scope of this research is to compare three well-known, widely referenced, international e-government surveys and identify the strengths and weaknesses of their respective methodologies. All three surveys have been repeated several times. As e-government services have matured, the methodologies have been expanded and refined.

First, this paper examines the evolution of the three benchmarking studies with respect to methodologies. Next, we compare the methodologies used, and discuss some problems with the current benchmarking efforts. Finally we introduce an ongoing research project aiming to overcome some of the challenges identified.

Related research

Other researchers have studied e-government benchmarking from other perspectives. Janssen et al. [2] analyzed 18 e-government benchmarking studies. Most of these were one-time, local studies. The research team grouped the indicators used into five different categories.

The authors concluded that output and environmental indicators were most common in these benchmarking studies. These observations are also valid for the three benchmarking studies analyzed in this paper. Ojo et al. [3] examined the same three international benchmarking studies used in this paper, and identified a set of core indicators for assessing e-government readiness. Their research focused on e-readiness, the capability to implement e-government solutions, and proposed a modified set of indicators for this particular purpose. Other researchers like Germanakos et al. [4] studied benchmarking of e-government services provided by cities.

Major Benchmarking Studies

There are currently three widely published international benchmarking studies:

Accenture

Accenture has published its e-government surveys every year since 2000. Data collection is done by local Accenture employees in the countries surveyed. Although the majority are European, the survey also includes non-European countries. As shown in table 1, the survey reports have changed title every year, to indicate the current state of e-government development and implementation in the surveyed countries.

Accenture has consistently included a description of the methodology used, either as a separate chapter or an appendix. The last report, however, provides fewer details than earlier reports.

In 2001 [5] the ranking of countries was based on two indicator sets: service maturity and delivery maturity. Service maturity was calculated from the number of services implemented and their level of completeness using a framework not very different from the one described in the introduction. The maturity levels used were: publish, interact and transact. The first level, publish, signifies that information is available online. The next level, interact, signifies that citizens are able to submit information online, e.g., application forms. The government does not need to respond electronically. The third level, transact, signifies that the government responds electronically to information submitted.

Delivery maturity covers selected delivery aspects, such as single point of entry, design by customer intentions, CRM techniques, portal capabilities etc. In the overall e-government maturity index, service maturity was weighted 70%, while delivery maturity was weighted 30%. The resulting index was used to group countries into innovative leaders, visionary followers, steady achievers and platform builders.
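As a rough illustration of how such a weighted index turns indicator scores into a ranking, the sketch below applies the published 70/30 weighting to hypothetical, normalized country scores. Only the weights come from the survey; the country names, scores, and normalization to [0, 1] are invented for illustration, not Accenture's actual data or formula.

```python
# Illustrative sketch of a weighted e-government maturity index.
# Only the 70/30 weighting is taken from the 2001 Accenture survey;
# all scores below are hypothetical and assumed normalized to [0, 1].

def overall_maturity(service_maturity: float, delivery_maturity: float) -> float:
    """Overall index: 70% service maturity, 30% delivery maturity."""
    return 0.7 * service_maturity + 0.3 * delivery_maturity

# Hypothetical scores: (service maturity, delivery maturity) per country.
scores = {"Country A": (0.80, 0.60), "Country B": (0.55, 0.90)}

# Compute the index and rank countries from highest to lowest.
index = {c: overall_maturity(s, d) for c, (s, d) in scores.items()}
ranking = sorted(index, key=index.get, reverse=True)
```

Note how the heavy service-maturity weight lets Country A outrank Country B despite B's stronger delivery score, which is exactly the sensitivity to weighting choices discussed later in this paper.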

Table 1 shows how the Accenture surveys targeted more services and more sectors each year until 2006. In 2002 [6], delivery maturity was substituted by customer relationship management (CRM). Customer relationship management covers five aspects: Insight, interaction, organization performance, customer offerings and networks.

Service maturity was still weighted 70%, while CRM was weighted 30%. The resulting index was used to group countries into innovative leaders, visionary challengers, emerging performers and platform builders.

In 2004 [8] a new indicator, support, was included in the CRM to measure integration with other channels (e.g., call center) to assist citizens in finding information and complete transactions.

Table 1. Accenture Surveys

Year | Title | Countries | Services | Sectors
2000 | Implementing E-government: Rhetoric or Reality | N/A | N/A | N/A
2001 | E-government Leadership: Rhetoric vs. Reality | 22 | 165 | 9
2002 | E-government Leadership: Realizing the Vision | 23 | 169 | 9
2003 | E-government Leadership: Engaging the Customer | 22 | 201 | 11
2004 | E-government Leadership: High Performance, Maximum Value | 22 | 206 | 12
2005 | Leadership in Customer Service: New Expectations, New Experiences | 22 | 177 | 12
2006 | Leadership in Customer Service: Building Trust | - | - | -
2007 | Leadership in Customer Service: Delivering on the Promise | 22 | N/A | N/A

The 2005 survey [9] introduced some major changes in methodology and measurements, by including more facets of service delivery. The index was built from two components, each weighted 50%. The first component, service maturity, was the same as in earlier surveys. The second component, customer service maturity, measured how well governments addressed the four dimensions of customer service: citizen-centered, multi-channel, cross-government service delivery and proactive communication about the services towards users. For the first time, citizens were asked their opinions as part of the data collection process. 400 citizens in each country (600 in the U.S.A.) were asked a series of 15 questions about their attitudes toward their countries' service delivery programs, their use of different service channels and their interest in particular services. This practice has been continued in later surveys, as shown in table 2.

In 2006 Accenture decided to temporarily drop the ranking of individual countries [10]. Instead the study focused on those countries that have performed consistently well during the previous surveys, in order to give examples of best practice.

In addition, in-depth interviews were conducted with 46 high-ranking government executives. The research was also extended by a survey of 8600 citizens as explained above. The 2007 survey [11] introduced a new indicator set, citizen voice, weighted 40%, to incorporate results from citizen interviews in the ranking. The weighting of the service maturity component was reduced to 10%. The reason for this reduction was explained in the following way [11]: "This element of our rankings has decreased in importance over the years as e-government has become increasingly ubiquitous and less of a differentiator among countries". The customer service maturity component included the same aspects as in 2005, and was weighted 50%. Interviews with high-ranking government executives were continued and used as input to the general discussion about eGovernment maturity.

Table 2. Data Collection Through Interviews (Accenture)

Year | Countries | Interviews
2004 | 12 | 5000
2005 | 22 | 9000
2006 | 21 | 8600
2007 | - | -
2008 | 22 | N/A

Brown University

Brown University has published a series of eGovernment surveys [12],[13],[14],[15],[16],[17],[18],[19] since 2001. In 2001 the survey examined 2288 web sites in 198 countries. The Brown University surveys examine a broad range of public web sites.

Among the sites analyzed are those of executive offices (such as a president, prime minister, ruler, party leader, or royalty), legislative offices (such as Congress, Parliament, or People's Assemblies), judicial offices (such as major national courts), Cabinet offices, and major agencies serving crucial functions of government, such as health, human services, taxation, education, interior, economic development, administration, natural resources, foreign affairs, foreign investment, transportation, military, tourism, and business regulation. Each web site has been evaluated for the presence of 28 specific features dealing with information availability, service delivery, and public access. Features assessed included type of site, name of nation, region of the world, office phone number, office address, online publications, online database, external links to non-governmental sites, audio clips, video clips, non-native languages or foreign language translation, commercial advertising, user payments or fees, subject index, handicap access, privacy policy, security features, presence of online services, number of different services, links to a government services portal, digital signatures, credit card payments, email address, search capability, comment form or chat-room, broadcast of events, automatic email updates, and having an English version of the web site.

The survey only examined the presence of features, with no effort to measure the maturity or depth of individual services. Observations were done by native-speaking researchers and in some cases with automatic translation tools. Since 2001 the survey has been repeated using the same research methodologies. Table 3 shows the number of web sites and countries assessed each year.

Table 3. Brown University Surveys

Year | No. of countries | Total no. of web sites
2001 | 198 | 2288
2002 | 198 | 1197
2003 | 198 | 2166
2004 | 198 | 1935
2005 | 198 | 1796
2006 | 198 | 1782
2007 | 198 | 1687
2008 | 198 | N/A

In 2002, a simple e-mail responsiveness test was performed. After 2002, there have been some changes to the features checked, such as PDA accessibility.

UN Department of Economic and Social Affairs

The United Nations Department of Economic and Social Affairs (UNDESA) published its first e-government survey in 2002. From 2004 the title was changed to e-government readiness report. In 2008 the title reverted to e-government survey. All reports give a detailed description of the research methodology used.

The 2002 benchmarking study [20] was done as a collaborative effort between the UN Division for Public Economics and Public Administration (DPEPA) of the UN Department of Economic and Social Affairs (UNDESA) and the American Society for Public Administration (ASPA). The study examined government web sites of all 190 UN member states. At that time 169 member state governments showed web presence, while 84 member states had a national government web site. Two methodologies were used. First, web sites were checked for content and services likely to be used by citizens. The results were then used to derive an index showing the sophistication of government web sites. Second, statistical information was collected on infrastructure and human capital. The resulting eGovernment index was used to rank the member states. The study was primarily based on observation, but background information was also collected through on-site visits, interviews and questionnaires.

Countries were assessed using the following criteria:

The research is well documented, including the actual assessment form for government web sites and formulas for deriving the different indicators.

The 2003 e-government survey [21] assessed the 191 UN member states based on the same methodologies as in 2002. The Web Measure Index was revised and enhanced, and used a five stage model to assess sophistication of services. The five stages used were: emerging presence, enhanced presence, interactive presence, transactional presence and networked presence.

The web measure assessments are purely quantitative, and were based on a questionnaire that required researchers to assign a binary value to the indicator based on presence/absence of specific electronic facilities/services available.
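Such a binary questionnaire can be sketched as follows. The five stage names come from the 2003 survey; the specific indicators, the equal weighting of all indicators, and the aggregation into a simple fraction are illustrative assumptions, not the UN's published formula.

```python
# Sketch of a binary web-measure assessment: each indicator is 1 (present)
# or 0 (absent), and the index is the fraction of indicators present.
# Stage names are from the 2003 UN survey; the indicator counts, answers,
# and equal weighting below are illustrative assumptions.

STAGES = ["emerging", "enhanced", "interactive", "transactional", "networked"]

def web_measure_index(answers: dict) -> float:
    """Fraction of all binary indicators marked present, across all stages."""
    values = [v for stage in STAGES for v in answers.get(stage, [])]
    return sum(values) / len(values) if values else 0.0

# Hypothetical questionnaire results for one country: more facilities are
# present at the lower stages than at the higher, more demanding ones.
answers = {
    "emerging": [1, 1, 1],
    "enhanced": [1, 1, 0],
    "interactive": [1, 0],
    "transactional": [0, 0],
    "networked": [0],
}
```

A design consequence of this purely binary scheme is that a barely functional facility scores the same as an excellent one, which is why the qualitative 0-4 e-participation assessment described next was a notable addition.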

The 2003 survey, however, also included a qualitative assessment of e-participation. A total of 21 public and participatory services and facilities were investigated across the 191 member states. A scale of 0-4 was used to assess the individual service. The 2004 e-government survey [22] was based on the same methodology used in 2003. The 2005 e-government survey [23] was based on the same methodologies as in 2004 with some enhancements.

The study included supplemental research on disability access of national web sites, using the tool WebXACT to measure compliance with current accessibility standards. The 2008 e-government survey [24] was based on the same methodology as the 2003 and 2004 surveys. Some modifications were made to enhance the rigor of the methodology.

Comparison and Discussion

Reproducibility

Reproducibility is one of the cornerstones of the scientific method, and refers to the ability of a test to be accurately reproduced, or replicated, by someone else working independently. Prerequisites for reproducibility are proper documentation of the methodology and access to the primary data used in deriving the results.

All three benchmarking studies include a description of the methodology used, but the descriptions vary in quality. Accenture provides no detailed list of the services investigated and their maturity scores. Therefore, the calculation of the indices is not reproducible. The lack of a detailed breakdown and assessment of services causes at least two problems. It is impossible to evaluate the relevance of the services selected, and it is also impossible to submit corrections. While the methodology was described in detail in earlier reports, the last report does not include the number of services examined, the number of sectors included or the number of citizens interviewed.

The Brown University survey describes categories of web sites searched for, and features examined. The list of web sites visited is not available. Again, it is not possible to reproduce results based on information published. Since this data is not available, it is not possible to submit corrections.

Recently, the UN launched a knowledge database (http://www.unpan.org/egovkb) to enable researchers and government officials to access the background information for the survey. UN also publishes a detailed description of their research methodology. By combining the methodology with data from the knowledge database, it is possible to reproduce results.

Observation

All three studies are based on observations. Accenture uses local employees. The Brown University survey examines almost 200 different countries, with a large variety of languages. At least in recent years, the research team claims to have employed foreign-language readers to evaluate foreign-language web sites. No description is given of how the research team maintains consistency when a large number of evaluators examine rather small samples. The UN reports list the research team and translators used, which makes the data collection process more trustworthy.

Qualitative assessments

As a supplement to the quantitative indicators, Accenture also makes qualitative assessments based on knowledge of the individual countries, and such information is given in individual country sub-reports. Accenture also started interviewing citizens in 2005. However, these results were not incorporated as an indicator set until 2007. Some of the qualitative remarks address the problem of web content (what users see) versus back-office solutions (what users do not see).

National scope

All three surveys target national services only, excluding services provided by lower levels of government. Accenture specifically mentions that they do not penalize services delivered on lower levels.

Other issues

In contrast to the two other benchmarking studies, the UN surveys focus on additional aspects such as infrastructure and human capital, and also include indicators for assessment of citizen participation.

Challenges related to current indicators

One problem with the benchmarking studies described above is that they are supply oriented. All surveys count services, and assess their sophistication. The services are not evaluated based on their usage or impact, as seen from the citizen point of view. Only Accenture has used interviews to find out what citizens think of the services.

These results have only recently influenced the final index.

Another problem is that all three studies target electronic services at the national level. In practice, many services are the responsibility of lower levels of government. Even if such services are removed from the analysis, this introduces a considerable source of error in the assessments. The later Accenture reports have discussed this as a problem.

All studies are primarily based on observation. Observation does not reveal what is behind the facade. A service may be poorly integrated with back-office systems, and still get a high score. Another system with less functionality may be well integrated with back-office systems and get a low score. Assessment of e-government should certainly also take the level of integration into consideration. As stated earlier, Accenture does a qualitative assessment including such aspects in its descriptions of the individual countries.

Observation requires fluency in the languages used. A large number of observers may also cause problems, since observers may have different conceptions of what they observe. In countries with more than one language, observations should include all languages. Accenture has solved this by using local staff in each country studied. The UN has at least some transparency regarding its use of experts for local assessments.

A relevant question is what services to include. All three benchmarking studies use a checklist approach to look for the existence of certain e-government services, but such checklists need to be transparent in order to evaluate relevance. A survey based on a checklist of possible services does not take into consideration the relevance of each service in relation to local circumstances, so a detailed breakdown of results should therefore be given. The UN is providing such a breakdown through its recently launched knowledge base. Observers may also be unable to uncover "hidden" services. One example is when the government sends the web address together with a user name and a password to its citizens to enable access to an application requiring some kind of authentication.

Another problem emerges from changes in research methodology. When services are added or removed from examination, the results are no longer consistent with earlier years. The UN studies assess e-participation, but their assessment raises some important questions of what should be measured. From the description of the research methodology, the UN examines the government web site for evidence of participatory applications, but is this the right place to look for participation? For countries based on indirect democracy, it would be more natural to see how citizens participate towards the parliament, not the government itself.

None of the three studies provide designated feedback channels to report on incorrect information.

The future: Automated Assessment

Governments are introducing new e-government services every day, and benchmarking is an important mechanism for keeping track of developments and a source for identifying best practices. As the number of e-government services increases, however, data collection becomes more challenging. We have earlier discussed the problem of benchmarking studies only targeting national e-government services; expanding benchmarking studies to include services provided at lower organizational levels, e.g., municipalities, using the traditional approach of observation would be a very challenging and resource-intensive task.

An ongoing project, eGovMon [25], co-funded by the Research Council of Norway, aims to partially automate assessment of e-government services. Performing parts of the evaluation automatically frees up resources and allows for a much wider range of web sites to be part of the evaluation. In practice, this approach allows web sites at all levels to be part of the evaluation, and additionally, it facilitates the presentation of results at more frequent intervals compared to traditional studies.

However, automated tests cannot evaluate with the same level of detail as manual evaluations. Therefore, automated assessment will be supplemented by expert evaluations using a specialized tool for manual verification of collected information. The project will also include a feedback channel giving service providers the possibility to submit corrections. The eGovMon project builds on experiences from a recently completed EU project to develop an online observatory for automatically collecting information on accessibility [26],[27].
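To illustrate the kind of check an automated observatory can run at scale, the sketch below counts img elements that lack an alt attribute in a page fragment, one of the classic machine-testable accessibility checks. It is a minimal illustration of the general approach, not the eGovMon implementation; the sample HTML is invented, and a real crawler would fetch live government pages.

```python
# Minimal sketch of one automated accessibility check: find <img> tags
# without an alt attribute. Illustrative only; not the eGovMon tool itself.

from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Counts <img> elements, and how many of them lack an alt attribute."""

    def __init__(self):
        super().__init__()
        self.total = 0
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.total += 1
            # attrs is a list of (name, value) pairs for this start tag.
            if "alt" not in dict(attrs):
                self.missing_alt += 1

# Hypothetical page fragment; a real observatory would crawl live sites.
sample = '<img src="logo.png" alt="Agency logo"><img src="chart.png">'
checker = MissingAltChecker()
checker.feed(sample)
```

Checks of this kind are cheap enough to run across thousands of municipal sites at frequent intervals, which is precisely the scalability argument made above; the manual expert verification then concentrates on what such syntactic tests cannot judge.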

Conclusion

This paper has discussed major e-government benchmarking studies. The three studies use different methodologies and indicators. The methodologies used have evolved over time. All benchmarking studies have become more complex, by collecting information from multiple sources and by expanding the number of indicators being used for assessment.

Still, there are several problems related to the use of indicators, and some important issues are not covered by these benchmarking studies, e.g., accessibility, transparency, efficiency and impact. Automated assessment may be one strategy to improve future assessments.

Acknowledgements

This work is part of the eGovMon project (http://www.egovmon.no/), co-funded by the Research Council of Norway under the VERDIKT program, project no. Verdikt 183392/S10.

Footnotes

1. [The paper has been published in the Proceedings of the Third International Conference on Digital Society 2009: 77-82. IEEE© 2009]

Bibliography

[1] K. Layne and J. Lee, "Developing fully functional E-government: A four stage model," Government Information Quarterly, vol. 18, no. 2, pp. 122-136, 2001.

[2] D. Janssen, S. Rotthier and K. Snijkers, "If you measure it they will score: An assessment of international eGovernment benchmarking," Information Polity, vol. 9, no. 3-4, pp. 121-130, 2004.

[3] A. Ojo, T. Janowski and E. Estevez, "Determining Progress Towards e-Government - What are the Core Indicators?," in Proceedings of the 5th European Conference on e-Government (ECEG 2005), Academic Conferences Limited, 2005, p. 313.

[4] P. Germanakos, E. Christodoulou and G. Samaras, "Towards the Definition of an e-government Benchmarking Status Methodology," in Proceedings of the 6th European Conference on e-Government, Academic Conferences Limited, 2006, pp. 179-188.

[5] Accenture, "E-government Leadership: Rhetoric vs. Reality - Closing the Gap," 2001. Retrieved April 12th, 2011, from http://www.accenture.com/SiteCollectionDocuments/PDF/2001FullReport.pdf

[6] Accenture, "E-government Leadership: Realizing the Vision," 2002. Retrieved April 12th, 2011, from http://www.accenture.com/us-en/Pages/insight-egovernment-report-2002-summary.aspx

[7] Accenture, "E-government Leadership: Engaging the Customer," 2003. Retrieved April 12th, 2011, from http://www.accenture.com/us-en/Pages/insight-egovernment-2003-summary.aspx

[8] Accenture, "E-government Leadership: High Performance, Maximum Value," 2004. Retrieved April 12th, 2011, from http://www.accenture.com/us-en/Pages/insight-egovernment-maturity-2004-summary.aspx

[9] Accenture, "Leadership in Customer Service: New Expectations, New Experiences," 2005. Retrieved April 12th, 2011, from http://www.accenture.com/us-en/Pages/insight-leadership-customer-service-2005-summary.aspx

[10] Accenture, "Leadership in Customer Service: Building the Trust," 2006. Retrieved April 12th, 2011, from http://www.accenture.com/us-en/Pages/insight-public-leadership-customer-service-building-trust-summary.aspx

[11] Accenture, "Leadership in Customer Service: Delivering on the Promise," 2007. Retrieved April 12th, 2011, from http://www.accenture.com/us-en/Pages/insight-public-leadership-customer-service-delivering-promise.aspx

[12] D. M. West, "WMRC Global E-government Survey, October 2001," Taubman Center for Public Policy, Brown University, 2001.

[13] D. M. West, "Global E-government, 2002," Center for Public Policy, Brown University, 2002. Available at http://www.InsidePolitics.org/egovt02int.PDF

[14] D. M. West, "Global E-government, 2003," Brown University Report, 2003.

[15] D. M. West, "Global E-government, 2004," 2004. Retrieved March 23, 2004.

[16] D. M. West, "Global E-government, 2005," Center for Public Policy, Brown University, Providence, RI, 2005. Available at http://www.insidepolitics.org/egovt05int.pdf [accessed 21 June 2006].

[17] D. M. West et al., "Global E-government, 2006," Brown University, Providence, RI, 2006.

[18] D. M. West, "Global E-government, 2007," 2007.

[19] D. M. West and Brookings Institution Governance Studies, "Improving Technology Utilization in Electronic Government around the World, 2008," 2008.

[20] S. A. Ronaghan, American Society for Public Administration and United Nations Division for Public Economics and Public Administration, "Benchmarking E-Government: A Global Perspective: Assessing the Progress of the UN Member States," 2002.

[21] United Nations Department of Economic and Social Affairs, "Global E-Government Survey 2003: E-government at the Crossroads," 2003. Retrieved March 8th, 2010, from http://unpan1.un.org/intradoc/groups/public/documents/un/unpan016066.pdf

[22] United Nations Department of Economic and Social Affairs, "Global E-Government Readiness Report 2004: Towards Access for Opportunity," 2004. Retrieved March 8th, 2010, from http://unpan1.un.org/intradoc/groups/public/documents/un/unpan019207.pdf

[23] United Nations Department of Economic and Social Affairs, "Global E-Government Readiness Report 2005: From E-Government to E-Inclusion," 2005. Retrieved March 8th, 2010, from http://unpan1.un.org/intradoc/groups/public/documents/un/unpan021888.pdf

[24] United Nations Department of Economic and Social Affairs, "Global E-Government Survey 2008: From E-Government to Connected Governance," 2008. Retrieved March 8th, 2010, from http://www2.unpan.org/egovkb/global_reports/08report.htm

[25] L. Berntzen, M. Snaprud, A. Sawicka and L. S. Flak, "Towards Automatic Assessment of e-Government Services," 2007.

[26] A. C. B. Garcia, C. Maciel and F. B. Pinto, "A quality inspection method to evaluate e-government sites," Electronic Government, pp. 198-209, 2005.

[27] J. Choudrie, G. Ghinea and V. Weerakkody, "Evaluating global e-government sites: a view using web diagnostic tools," Electronic Journal of E-government, vol. 2, no. 2, pp. 105-114, 2004.

The author of this document is:
Morten Goodwin
E-mail address is:
morten.goodwin ASCII 64 uia.no
Phone is:
+47 95 24 86 79