
Department of Computer Science
Tari Rorohiko

Computing and Mathematical Sciences

2010 Seminars


A climate of distrust: sceptics and scientists on the blogosphere

David Nichols
Department of Computer Science, University of Waikato
Wednesday 15 December, 2010
This talk describes the evolution of the online climate sceptic community and its interactions with climate scientists. One aspect of the disagreement between these two groups has been access to the data and source code used in climate research. In practice, some of the sceptics have embodied a form of Open Science: that all data and code of all published research should be available to all. The talk will examine the sceptics' approach and consider the implications for digital data curation.

 

On solving large scale supervisory control problems

Martin Fabian
Tuesday 7 December, 2010
DES (Discrete Event Systems) is a mathematical modeling formalism useful for modeling typically man-made "reactive" systems such as manufacturing, traffic control, and embedded systems. A DES occupies at each time instant a single "state" out of many possible ones, and transits to another state on the occurrence of an "event". The SCT (Supervisory Control Theory), formulated by Ramadge and Wonham in the mid-eighties, is a general theory for the automatic calculation of control functions for DES. From a DES (called the "plant") modeling the system to be controlled and another DES (the "specification") modeling the desired behavior, a "supervisor" can be computed. This supervisor is such that, through interaction with the plant, it dynamically restricts plant events from occurring so as to keep the plant within the desired specification. In this way, the supervisor is a safety device that effects control in a way similar to that of a traffic policeman; certain activities (events) are hindered from occurring so as to guarantee the safety of the system. The main obstacle for the SCT is what is known as the "state-space explosion" problem. Systems of practical interest exhibit enormous numbers of states; an existing model of the central-locking system of a modern car encompasses roughly 10^8 states; a model of a small manufacturing system can encompass 10^10 states. The state-space explosion problem makes the straightforward calculation of a supervisor intractable in practice, and so some "intelligent" algorithms have to be designed to defeat the problem. This seminar aims to give the basics of the SCT and show some examples of its application. The focus will be on handling large-scale systems, presenting some approaches that are currently being researched and that have shown promising results.
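
To make the synthesis idea concrete, below is a toy Python sketch of the safety part of monolithic supervisor synthesis: forbidden states are removed, and the set of safe states is shrunk until no uncontrollable event can escape it. The data structures and names are illustrative assumptions; real SCT tools also enforce nonblocking and rely on symbolic or compositional algorithms to cope with state-space explosion.

    # Toy sketch of the safety/controllability fixpoint in supervisor synthesis.
    # 'trans' maps (state, event) to the next state in the plant/spec product.
    def synthesise(states, trans, uncontrollable, forbidden, initial):
        safe = set(states) - set(forbidden)
        changed = True
        while changed:                      # shrink until a fixpoint is reached
            changed = False
            for (s, e), t in trans.items():
                # The supervisor cannot disable uncontrollable events, so a state
                # from which one leads outside the safe set is itself unsafe.
                if s in safe and e in uncontrollable and t not in safe:
                    safe.discard(s)
                    changed = True
        if initial not in safe:
            return None                     # no supervisor exists
        # Disable a controllable event exactly when it would leave the safe set.
        disabled = {(s, e) for (s, e), t in trans.items()
                    if s in safe and e not in uncontrollable and t not in safe}
        return safe, disabled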

 

eXtreme Research? What might it look like?

Michael B Twidale
Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign, USA
Friday 3 December, 2010
Various attempts have been made to speed up the software development process, allowing for faster iterations and earlier discovery and rectification of problems. Currently, eXtreme Programming and Agile programming are popular approaches. How might we, inspired by these approaches, develop new ways of doing research better and faster? Can we figure out new approaches, particularly to tackling messy, complicated problems which seem to have too many variables? Can this help us to get to grips with the complexities of how people are grappling with many different technologies, integrating them into their lives, coping, appropriating and innovating, changing what they do, and constantly adopting new applications and new versions of existing applications and infrastructures? How might we do innovative interface analysis and design better in a context of rapid change, combination and appropriation? I want to explore these questions looking at some examples involving mashup programming, patchwork prototyping and low-cost visualizations.

 

Wikis and museums

Jonathan P. Bowen
Museophile Limited, United Kingdom
Tuesday 23 November, 2010
The potential for the use of wikis by museums to aid collaboration between their various users is great. Wikis allow a virtual community to maintain and update information in a cooperative and convenient manner. This talk presents the use of wiki tools and facilities that are available online and suitable for use by museums for web-based collaboration and the building of virtual communities. The use of the leading wiki on the web, Wikipedia, is covered in a museum context. Use of external wiki facilities such as those provided by Wikia, a free community of wikis, is also presented. A selection of existing examples of museum-related wikis is surveyed, including comments on their features in a wider context. One of the examples has been examined in the framework of a Community of Practice (CoP). Some lessons from experience of wiki use by museums so far are given and the possible future of wikis, especially with respect to museums, is considered.

 

The industrial use of formal methods: experiences of an optimist

Jonathan P. Bowen
Museophile Limited, United Kingdom
Tuesday 16 November, 2010
11:00 am
G.1.15
Formal methods aim to apply mathematically-based techniques to the development of computer-based systems, especially at the specification level, but also down to the implementation level. This aids early detection and avoidance of errors through increased understanding. It is also beneficial for more rigorous testing coverage. This talk presents the use of formal methods on a real project. The Z notation has been used to specify a large-scale high integrity system to aid in air traffic control. The system has been implemented directly from the Z specification using SPARK Ada, an annotated subset of the Ada programming language that includes assertions and tool support for proofs. The Z specification has been used to direct the testing of the software through additional test design documents using tables and fragments of Z. In addition, Mathematica has been used as a test oracle for algorithmic aspects of the system. In summary, formal methods can be used successfully in all phases of the lifecycle for a large software project with suitably trained engineers, despite limited tool support.
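
As a flavour of the notation, here is a small, purely hypothetical Z fragment in the usual LaTeX markup (zed-csp style assumed): a state schema and one operation on it. It is an invented example in the general style of state-based Z specifications, not an excerpt from the system described in the talk.

    % Hypothetical Z fragment for illustration only (zed-csp LaTeX markup assumed);
    % not taken from the air traffic control specification discussed in the talk.
    \begin{schema}{FlightTable}
      known : \finset FLIGHT \\
      alt : FLIGHT \pfun ALTITUDE
    \where
      known = \dom alt
    \end{schema}

    \begin{schema}{AssignAltitude}
      \Delta FlightTable \\
      f? : FLIGHT \\
      a? : ALTITUDE
    \where
      f? \in known \\
      alt' = alt \oplus \{ f? \mapsto a? \}
    \end{schema}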

 

There's something about Lean

Laura Bocock
Department of Computer Science, The University of Waikato
Friday 22 October, 2010
One of the ideas gaining significant interest at the moment is “how can we apply lean manufacturing principles to software development?” There is, however, limited information on how Lean is being used by those at the coal face of software development, and on whether the principles and practices are as effective as current interest levels suggest they might be. We have used grounded theory to explore the practicalities of how one high-performing, open source team has adopted Lean practices. We found that the existing meritocratic culture of the team under study appears to have greatly assisted the team’s application of Lean principles to its software development processes.

 

Analysis and mining of transcriptome data

Greg Butler
Computer Science and Engineering, Concordia University, Montreal
Thursday 21 October, 2010
An introduction to the typical steps in analysing and mining data about levels of gene expression from transcription, illustrated using short-read RNA-Seq sequences from fungi. We will pose some outstanding issues. A small sketch of step (c) follows the list of steps below.

Steps:

    a) data generation and collection
    b) normalization across genes and samples
    c) clustering of genes via expression profiles
    d) gene set enrichment analysis (GSEA) for understanding clusters
    e) reconstruction of metabolic pathways
    f) reconstruction of transcription regulatory module networks
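
As an illustration of step (c), the Python sketch below clusters genes by their expression profiles. The array shapes, the log transform and the choice of k-means are assumptions made for the example, not the actual pipeline used in the talk.

    import numpy as np
    from sklearn.cluster import KMeans

    # rows = genes, columns = samples; values stand in for normalised RNA-Seq counts
    expression = np.random.rand(500, 12)
    log_expr = np.log2(expression + 1)       # a common variance-stabilising transform
    labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(log_expr)
    print(labels[:10])                       # cluster label for the first ten genes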

 

Data mining methods for predictive toxicology

Stefan Kramer
Department of Computer Science, Technical University of Munich, Munich
Tuesday 19 October, 2010
The prediction of toxic effects of chemicals (predictive toxicology) is of great scientific, political, and commercial concern. In the talk, I will give an overview of the data mining methods and tools that we developed for this area. I will discuss how those methods can be made scalable, how structural clustering can be used to obtain local models of high quality, how conditional density estimation can be used to quantify the uncertainty in the predictions and, finally, how suitable distance measures can be derived from data.

 

Learning real-time automata from multi-attribute event logs

Stefan Kramer
Department of Computer Science, Technical University of Munich, Munich
Tuesday 12 October, 2010
Network structures often arise as descriptions of complex temporal phenomena in science and industry. Popular representation formalisms include Petri nets and (timed) automata. In process mining, the induction of Petri net models from event logs has been studied extensively. Less attention, however, has been paid to the induction of (timed) automata outside the field of grammatical inference. In the talk, I will present work on the induction of timed automata and show how they can be learned from multi-attribute event logs. I will present the learning method in some detail and give examples of network inference from synthetic, medical and biological data.

 

Why we need better digital ink recognizers

Beryl Plimmer
Friday 1 October, 2010
The coming-of-age of stylus input has been prophesised for many years. Yet it lurks on the sidelines – while touch phones have taken the world by storm. Why is this? I will argue that the missing link is more powerful digital ink recognizers – and, as an aside, making ink a first-class data type. We are working hard to build better recognizers. I will describe the current state of Rata, our platform for creating and evaluating recognizers. Of particular interest to you is that we use Weka to build our recognizer models.
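
Rata builds its recognizer models with Weka (a Java toolkit). As a rough stand-in, the Python sketch below shows the underlying idea of training a classifier on per-stroke feature vectors, here using scikit-learn; the feature set and the choice of a random forest are assumptions for illustration only.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Each ink stroke is reduced to a fixed-length feature vector, e.g.
    # [path length, mean curvature, bounding-box aspect ratio, self-intersections].
    X = np.array([[120.0, 0.10, 1.8, 0],
                  [ 40.0, 2.90, 1.0, 1],
                  [200.0, 0.05, 3.2, 0]])
    y = np.array(["line", "circle", "arrow"])          # placeholder shape labels

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(model.predict([[130.0, 0.08, 2.0, 0]]))      # expected to come out as "line"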

 

Eno-Humanas: an introduction to research at GRC, Auckland University of Technology

Subana Shanmuganathan
Geoinformatics Research Centre, Auckland University of Technology, New Zealand
Tuesday 14 September, 2010
The talk is aimed at introducing the audience to AUT’s Geoinformatics Research Centre (GRC), its staff, activities and recent research efforts, in anticipation of identifying and building capacity for future collaborative research. The Centre, launched in August 2007 with only three members, has today grown into a much bigger group of 14 members, including soil scientists, an aquaculturist and a petro-chemical engineer. Presently, most of the GRC members are full-time researchers or post-graduate students; the rest are associates from universities and private-sector partner institutions from all around the world, the majority from New Zealand, Chile, Japan, Uruguay, Argentina and the USA (http://www.geo-informatics.org).

Most of the current research efforts are centred around an umbrella theme called “Eno-Humanas”, which is focused on building models to unravel the good old saying “what makes a year good for wine”. More recently, there has been significant interest in understanding scientifically the centuries-old French notion of “terroir” and applying its core concepts to contemporary grapevine cultivation and winemaking. On the other hand, like any other industry, the New Zealand Winegrowers Association has to make timely, informed decisions to withstand competition from other “new world” wine-producing countries, for which we believe modern research is vital. In view of these facts, some of the Eno-Humanas projects look into ways and means of gathering/capturing and analysing digital data to gain insight into the interrelationships between sets of independent (environmental and climate), dependent (plant physiology, growth, phenology, crop yield and wine taste) and cultivar (or varietal) variables, using diverse approaches, models and techniques, such as rigorous statistics, artificial neural network computation, machine learning and data/text mining methods, as well as cellular automata for simulation research.

EnoMetrica is a wireless sensor network of telemetry devices designed, and now being implemented, for the acquisition, storage, transmission and live web display of environmental and microclimate data captured simultaneously with plant response, using modern technologies such as cloud computing, wireless remote sensing and RFID transmission. EnoMetrica, frost prediction, web text mining of sommelier comments, modelling climate effects on vineyard yield and grape wine quality, and vineyard yield simulation are some of the major areas currently being researched in partnership with grapevine growers, winemakers, institutions and telemetry device designers. Lately, we have designed and built our very own sensor technology that could be used for monitoring change in micro-climatic conditions (environmental, atmospheric and soil), as well as plant response, within orchards and horticultural plantations. The other major projects include authorship authentication, e-health (integrated healthcare information systems), ecological dynamics modelling, geospatial data analysis and data/text mining of clinical records.

 

Efficient statistical parsing with combinatory categorial grammar

James Curran
School of Information Technologies, University of Sydney, Australia
Tuesday 17 August, 2010
Natural language parsing has changed dramatically in the last 15 years. Hand-crafted grammars with low ambiguity have made way for wide-coverage grammars derived from annotated treebanks. These grammars are extremely ambiguous and require statistical models to select the best parse. Context-free grammars are also being replaced by more expressive lexicalised grammar formalisms.

This talk will introduce one of these more expressive formalisms, Combinatory Categorial Grammar (CCG; Steedman, 2000), and describe the recent work Stephen Clark and I have been doing on statistical parsing for CCG. Our surprising result is that, although CCG is more complex to parse than context-free grammars, the outcome is the fastest wide-coverage yet linguistically-motivated parser in the world.

The key idea is that lexicalised parsing can be split into a linear-time "supertagging" phase which reduces ambiguity followed by the full parsing algorithm. It turns out that this supertagging phase can be trained to predict the parser's decisions, further increasing parsing speed.
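
A toy Python sketch of the supertagging idea follows. The probability model and the beta cut-off mimic the general C&C-style multi-tagging strategy; the names and numbers are assumptions, not the actual implementation described in the talk.

    # Keep, for each token, every lexical category whose probability is within a
    # factor beta of the best one; the chart parser then only considers these.
    def supertag(tokens, tag_probs, beta=0.075):
        lattice = []
        for tok in tokens:
            probs = tag_probs[tok]           # dict: CCG category -> probability
            best = max(probs.values())
            lattice.append({c for c, p in probs.items() if p >= beta * best})
        return lattice

    # If parsing fails, beta can be lowered to admit more categories and the
    # sentence re-parsed, trading speed for coverage.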

 

Journals in Semantic Land: The visualization of a large-scale research article collection for use in search refinement

Glen Newton
Carleton University, Ottawa, Canada
Friday 6 August, 2010
We examine the scalability and validity of semantically mapping (visualizing) journals in a large-scale (5.7+ million article) science, technology and medical digital library. This work is part of a larger research effort to evaluate semantic journal and article mapping for search query results refinement and visual contextualization in a large-scale digital library.

In this work, the Semantic Vectors software package is parallelized and evaluated to compute semantic distances between 2,365 journals from the sum of their full text. These distances are used to create a journal semantic map whose production does scale and whose results are comparable to other maps of the scientific literature.
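
The core computation behind such a map can be pictured as pairwise distances between journal vectors. The Python sketch below assumes each journal has already been reduced to a fixed-length vector (for example by the random-projection models Semantic Vectors builds); the dimensions and data are placeholders.

    import numpy as np

    def cosine_distance(u, v):
        return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

    journal_vectors = np.random.rand(2365, 200)   # placeholder: 2365 journals, 200 dims
    print(cosine_distance(journal_vectors[0], journal_vectors[1]))
    # A full pairwise distance matrix like this is what a 2-D layout method
    # (e.g. multidimensional scaling) turns into the journal map.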

 

The Totalisator – the algorithm that led to an industry

Bob Doran
Department of Computer Science, Auckland University, NZ
Tuesday 3rd August, 2010
Almost 100 years ago, at their Ellerslie Easter meeting in 1913, the Auckland Racing Club put into operation the world’s first automatic totalisator - a truly enormous computing machine. This talk describes the developments that led to the totalisator - how the simple pari-mutuel algorithm invented by Joseph Oller in the 1860s gave rise to a world-wide industry devoted to its execution. Along the way we will look into the workings of the computing machines designed to assist the totalisator, particularly the first machine at Ellerslie, and the special buildings, dotted around the NZ countryside, used to house the totalisator operation.
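
The pari-mutuel algorithm itself is simple arithmetic, which is what made it worth mechanising on such a scale. A minimal Python sketch is below; the commission rate and stakes are illustrative assumptions.

    # Pari-mutuel dividend: the pool (minus commission) is shared among those
    # who backed the winner, in proportion to their stakes.
    def dividend_per_unit(bets, winner, commission=0.10):
        pool = sum(bets.values())
        return pool * (1 - commission) / bets[winner]

    bets = {"Runner A": 700.0, "Runner B": 200.0, "Runner C": 100.0}
    print(dividend_per_unit(bets, "Runner B"))   # 1000 * 0.9 / 200 = 4.50 per unit staked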

 

Spatial data support in Greenstone and context-aware mobile systems

Wendy Osborn
Department of Mathematics and Computer Science, University of Lethbridge, Alberta, Canada
Friday 30th July, 2010
In this two-part talk, I will present my work in spatial data management, with a focus on two application areas: Greenstone and context-aware mobile systems.

A context-aware mobile system provides a user with information based on different contexts. For example, the system TIP (Tourist Information Provider) continuously provides the user with up-to-date information based on contexts such as location, personal preferences, browsing history, etc. In addition to information stored in TIP, access to external sources of information, such as Greenstone digital library collections, is desirable. My focus is on managing location context using a spatial index. Specifically, I look at the following: 1) access to both TIP information and information in external repositories in a uniform manner, and 2) efficient navigation of the spatial index so that up-to-date information is provided continuously to the user.

Greenstone is a software suite for building and publishing digital library collections. One of Greenstone's many features is its support for searching and browsing on various metadata such as title, creator, etc. In addition, it would be desirable for Greenstone to support searching and browsing on location - for example, browsing documents based on their origin on a world map. I will present work in progress on incorporating spatial data into Greenstone, including: 1) metadata representation, 2) importing gazetteer data, and 3) building and storing a spatial index. I will also present how this work potentially complements recent developments from the Greenstone Lab (Bainbridge et al. 2010) which visualize documents from a collection on a world map.
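
As a toy illustration of the kind of spatial query involved, the Python sketch below filters documents by a latitude/longitude bounding box; a real implementation would answer the same query through an R-tree or similar spatial index, and all fields and values here are invented.

    def in_bbox(doc, min_lat, max_lat, min_lon, max_lon):
        return min_lat <= doc["lat"] <= max_lat and min_lon <= doc["lon"] <= max_lon

    docs = [{"title": "Document about Hamilton, NZ", "lat": -37.79, "lon": 175.28},
            {"title": "Document about Lethbridge, Canada", "lat": 49.69, "lon": -112.83}]
    nz_docs = [d for d in docs if in_bbox(d, -48.0, -34.0, 166.0, 179.0)]
    print([d["title"] for d in nz_docs])     # only the Hamilton document matches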

 

The algorithmics of solitaire-like games

Roland Backhouse
School of Computer Science, University of Nottingham, UK
Wednesday 28th July, 2010
Puzzles and games have been used for centuries to nurture problem-solving skills. Although often presented as isolated brain-teasers, the desire to know how to win makes games ideal examples for teaching algorithmic problem solving. With this in mind, this paper explores one-person solitaire-like games.

The key to understanding solutions to solitaire-like games is the identification of invariant properties of polynomial arithmetic. We demonstrate this via three case studies: solitaire itself, tiling problems and a collection of novel one-person games. The known classification of states of the game of (peg) solitaire into 16 equivalence classes is used to introduce the relevance of polynomial arithmetic. Then we give a novel algebraic formulation of the solution to a class of tiling problems. Finally, we introduce an infinite class of challenging one-person games inspired by earlier work by Chen and Backhouse on the relation between cyclotomic polynomials and generalisations of the seven-trees-in-one type isomorphism. We show how to derive algorithms to solve these games.
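
One classical way to see the 16 classes is to sum, over the occupied cells, powers of a root alpha of x^2 + x + 1 over GF(2); every solitaire move leaves the two sums below unchanged. The Python sketch illustrates that standard argument and is not necessarily the formulation used in the talk.

    # alpha is a root of x^2 + x + 1 over GF(2); elements of GF(4) are pairs (a, b)
    # standing for a + b*alpha, and alpha^n depends only on n mod 3.
    ALPHA_POW = [(1, 0), (0, 1), (1, 1)]

    def gf4_add(u, v):
        return (u[0] ^ v[0], u[1] ^ v[1])

    def invariant(pegs):
        # The pair (A, B) is unchanged by every jump: removing pegs at x, x+1 and
        # adding one at x+2 changes a sum by alpha^x * (1 + alpha + alpha^2) = 0.
        A = B = (0, 0)
        for (x, y) in pegs:
            A = gf4_add(A, ALPHA_POW[(x + y) % 3])
            B = gf4_add(B, ALPHA_POW[(x - y) % 3])
        return A, B

    # The 33-hole English board with the centre hole empty; since A and B each take
    # one of four values, positions fall into at most 16 equivalence classes.
    board = [(x, y) for x in range(-3, 4) for y in range(-3, 4) if abs(x) <= 1 or abs(y) <= 1]
    print(invariant([p for p in board if p != (0, 0)]))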

 

Algorithmic problem solving

Roland Backhouse
School of Computer Science, University of Nottingham, UK
Tuesday 27th July, 2010
"Algorithmic Problem Solving" is a first-year, first-semester module which is compulsory for Computer Science students at the University of Nottingham (and optional for other students) [http://www.cs.nott.ac.uk/~rcb/G51APS/G51APS.html]. As the title suggests, the module aims to teach problem-solving skills with a particular focus on skills relevant to algorithm design. The module has been running for 7 years and is replicated at the University's campuses in China and Malaysia. I am currently preparing a book with the same title that will be published in the near future by John Wiley.

The approach to teaching problem-solving skills in the module is problem-driven. Easily understood but nevertheless challenging problems are presented and discussed in an order designed to systematically introduce algorithmic techniques. Relevant mathematical skills are introduced as and when appropriate.

This talk will present some examples of the problems that are used in the module and discuss the algorithmic-problem-solving skills that are emphasised.

 

Preserving African cultural heritage

Hussein Suleman
Department of Computer Science, University of Cape Town, South Africa
Tuesday 6th July, 2010
The digital preservation of cultural heritage is a hot topic in many parts of the world, including Africa. African archivists, however, often deal with tight constraints on resources and socio-cultural factors that have a defining and substantial effect on the design of solutions. This talk will present motivating arguments for the preservation of African cultural heritage and describe technical approaches to address these, with emphasis on a key case study (the Bleek and Lloyd Collection) and associated ongoing work.

 

Coordination language S-Net

Alex Shafarenko
Department of Computer Science, University of Hertfordshire
Tuesday 25th May, 2010
The work I would like to present focuses on S-Net, a new coordination language developed by us and based on the idea that, using controlled nondeterminism, the topology of a stream-processing network can be radically simplified. In fact, it simplifies down to Single Input Single Output (SISO) nodes and SISO-to-SISO combinators, allowing the program to be represented as an algebraic formula similar in appearance to one written in Kleene's algebra. The compiler is able to reconstruct the multiple connections between nodes from the formula and to avoid SISO inefficiencies, but thanks to SISO the expressive power of the language and its modularity are dramatically enhanced. In particular, compositionality can be approached on the basis of a type system, which in the case of S-Net has been made rather flexible. It includes subtyping and offers new forms of abstraction, encapsulation and inheritance that seem very suitable for the stream-computing paradigm and which could also be viewed as a new form of OOP. Another useful feature of S-Net is that it is fully asynchronous and consequently has the ability to hide communication latency in a distributed implementation.
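
As a toy illustration of the SISO idea (in Python, not S-Net syntax): every component maps one input stream to one output stream, so components combine like terms of an algebraic formula. The combinator names below are invented for the example.

    # Wrap an ordinary per-record function as a SISO stream transformer.
    def box(fn):
        return lambda stream: (fn(rec) for rec in stream)

    # Serial combinator: feed the output stream of f into g.
    def serial(f, g):
        return lambda stream: g(f(stream))

    pipeline = serial(box(lambda r: r + 1), box(lambda r: r * 2))
    print(list(pipeline(iter([1, 2, 3]))))   # [4, 6, 8]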

The language has been implemented and extensively researched in a large, recently completed EU-funded project involving, among others, Imperial College London, the University of Amsterdam, and a major industrial partner. There are already some performance results and experiences of applying S-Net to real-life problems. More details are available from the project Web site snet-home.org and the published work cited therein.

 

Advances in time-of-flight 3D range imaging

Adrian Dorrington
The University of Waikato, Hamilton
Tuesday 18th May, 2010
Range imaging three-dimensional cameras produce output like a digital photograph or video, but each pixel in the image contains the distance to the objects in the scene as well as traditional intensity information. This additional depth information is useful for machine vision applications because it allows the computer to perceive the world in three dimensions, something we humans take for granted. Image processing algorithms, such as object segmentation, become significantly simpler with the additional depth information.

Time-of-flight range imaging is one approach that can acquire the depth information directly, with very little processing, from a single viewpoint, offering many advantages over triangulation or stereoscopic approaches. However, the technology is still relatively new, and these types of cameras still have a number of limitations. In this talk, time-of-flight range imaging systems will be introduced, and their principle of operation explained. The nature of the current limitations will be discussed, along with the latest techniques being developed to overcome these limitations in the "Chronoptics" Range Imaging Group in the School of Engineering.
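
The principle of operation can be summarised by the standard phase-based time-of-flight relationship, sketched in Python below; this is the textbook formula rather than anything specific to the Chronoptics cameras, and the modulation frequency is an illustrative assumption.

    import math

    C = 299_792_458.0                      # speed of light, m/s

    def distance_from_phase(phase_rad, mod_freq_hz):
        # Light travels to the scene and back, hence 2 * (2*pi*f) in the denominator.
        return C * phase_rad / (4 * math.pi * mod_freq_hz)

    # A 30 MHz modulation frequency gives an unambiguous range of c/(2f), about 5 m.
    print(distance_from_phase(math.pi / 2, 30e6))   # ~1.25 m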

 

Experience with practical parallel computing: from kilometre to nanometre

Sam Jansen
The University of Waikato, Hamilton
Tuesday, 4th May, 2010
A discussion of parallel computing from a programmer's perspective, focussing on practical topics encountered by the speaker in academic, corporate, and business settings. Thoughts and experiences from large-scale clusters to microprocessor architectures.

 

Ten billion piece jigsaw puzzles

John Cleary
Chief Scientist, Netvalue Ltd, NZ
Tuesday, 27th April, 2010
Recently the cost of doing the chemistry to extract a complete genome has plummeted. However, the cost of the computer processing required has come down only a little so that it is now the dominant cost. This talk will explore what can be done with complete genomes of people and their bacteria. Then it will describe the computer problems that arise when extracting genomes and related information. The talk will mostly be accessible to those without expertise in genomics or high performance computing.

 

What makes a great game developer?

Stu Sharpe
Lead Programmer, Sidhe, NZ
Wednesday, 21st April, 2010
Modern video games are created by teams of people from multiple disciplines, for multiple markets, across a range of devices. All this makes video game development one of the most challenging fields of our time, but for the people who bring us these games from idea to product it is one of the most rewarding. This talk is aimed at those who are interested in a career in game development, and will cover both the technical and personal skills required to succeed in this highly competitive industry.

 

Using computer science—and communities—to change the world

Robert O’Callahan
Mozilla Corporation (MoCo), Auckland, NZ
Wednesday, 21st April, 2010
Computer science is powerful because it can change the lives of millions of people at negligible marginal cost. Furthermore, free software enables people to form communities and work together to produce those changes. The Mozilla community aims to make (and keep) the Internet a "level playing field" and improve everyone's online experience, primarily by developing the free-software Web browser Firefox. I'll talk about our ongoing work on Firefox and how we have used Firefox to change the landscape of the Internet for the better. I'll discuss our current work, especially projects that I'm involved with such as improved typography, graphics and GPU-accelerated rendering for Web pages. I'll talk about our plans to deal with threats to Web freedom such as patent-encumbered video formats – Firefox's video support is developed in our Auckland office. I will explain how people with all kinds of talents can contribute to free-software projects, and I'll encourage all such people to get involved and make a difference!

 

Trees, enzymes, computers: challenges in bioinformatics

Greg Butler
Computer Science and Engineering, Concordia University, Montreal
Tuesday, 23rd March, 2010
The focus of our research is on sustainability through the replacement of chemical processes (often using petrochemicals) with biological processes using enzymes. We search the genomes of fungi for enzymes with potential for industrial applications utilizing reusable non-food biomass such as trees, straws, and grasses. So enzymes that decompose ligno-cellulose, the building blocks of plant cell walls, are important.

Bioinformatics is the use of computers to manage, analyze, and mine data to assist bench scientists to prioritize their work, and to translate data into knowledge. This talk will highlight some of the gaps between what we would like to do and what we currently do. In particular, I will emphasize some roles of tree and graph data structures in bioinformatics algorithms, particularly those for the annotation of enzyme function using phylogenomics.

 

Effective persuasion techniques for digital interactive advertising

Amrita Sahay
Department of Computer Science, The University of Waikato
Thursday, 18th March, 2010
The seminar will be a presentation of Amrita's honours work, in which she sought to compare direct vs. indirect marketing in an interactive context. Her work will be available to view in the SCMS foyer this week (dissertation and an exhibit showing the testing setup).

 

What do tourists really want?

Jason Pascoe
University of Minho, Portugal
Thursday, 25th February, 2010
As part of his investigation into the lack of widespread adoption of electronic tourist guides, Jason is seeking a better understanding of the needs of the tourists who may potentially use them, and to see if there is perhaps a mismatch between what users want and what has so far been delivered. Jason would like to discuss his initial ideas on the methods to achieve this understanding of tourists, and to get some feedback, new ideas, questions, etc.

 

On the puzzle of the induction/deduction loop: bridging perception and symbolic reasoning

Marco Gori
University of Siena, Italy
Thursday, 18th February, 2010
In this talk, I propose a variational framework to understand the emergence of intelligence in agents exposed to examples and knowledge granules. The formalism that I adopt turns out to be an extension of what Poggio & Girosi proposed 20 years ago on regularization networks and that subsequently gave rise to kernel machines. I propose the adoption of functional constraints as an abstract representation of knowledge granules and prove that, using Lagrangian theory, we end up with a representation theorem that dictates the 'body' of the agent. Unfortunately, any direct computational scheme emerging from the theorem is trapped in the joint need to fulfil the given functional constraints and to take the supervised and unsupervised examples into account. However, for any chosen degree of approximation, I show that a cyclic scheme that alternates supervised learning and constraint satisfaction steps converges logarithmically in the reciprocal of the degree. This is somehow related to the typical induction/deduction loop taking place in humans and to the emergence of stages in children, while the initialization can be based on 'learning from examples only.' Interestingly, this connection with child development highlights principles that are beyond biology and instead rooted in complexity issues emerging from the proposed theory, which also offers an appropriate framework for the evolution of kernel machines.

 

Artist books are not books about art

Martha Carothers
University of Delaware, Newark, Delaware, USA
Friday, 12th February, 2010
Artist books are text and images conveyed in book form. When the verbal and visual content are considered together in the book form, the resulting artist book is much more than the sum of its parts or a simple container for information.

 

Towards emotional sensitivity in human-computer interaction

Elisabeth André
Augsburg University, Germany
Thursday, 4th February, 2010
Human conversational partners usually try to interpret the speaker’s or listener’s affective cues and respond to them accordingly. Recently, the modelling and simulation of such behaviours has been recognized as an essential factor for successful man-machine communication. There is empirical evidence that many problems in man-machine communication could be avoided if the machine was more sensitive towards the user's feelings. The availability of robust multi-channel recognition methods is an important prerequisite for the development of affect-aware interfaces. In my talk, I will report on challenges that arise when moving from the recognition of acted emotions to the recognition of spontaneous emotions which are of higher relevance to man-machine communication. I will present a framework for smart sensor integration that is especially suited for real-time multichannel emotion recognition. Our approach to emotion recognition will be illustrated by affective interfaces that have been developed within the European Networks of Excellence Humaine (Human-Machine Interaction Network on Emotion Research) and IRIS (Integrating Research in Interactive Storytelling) and the European projects e-Circus (Education Through Characters With Emotional-Intelligence And Roleplaying Capabilities That Understand Social Interaction), Callas (Conveying Affectiveness in Leading-Edge Living Adaptive Systems) and Metabo (Personal Health Systems for Monitoring and Point-of-Care Diagnostics – Personalised Monitoring).

 

Finding things you can’t read: Interactive cross-language search for monolingual users

Douglas Oard
University of Maryland, USA
Tuesday, 19th January, 2010
Speech recognition and machine translation techniques are evolving rapidly, creating new opportunities to build systems that can support information seeking in large collections of multilingual and multimedia content. Little is presently known, however, about how people would use such systems to accomplish real tasks. In such circumstances, designers naturally rely on their own judgment to decide how component capabilities should be optimized and how those components should be integrated. Once that's been done, the next step is to put the resulting system in the hands of users in order to learn what they do with it. In this talk, I will describe what we have learned so far from such a process. I’ll start with some background on user-centered evaluation for cross-language information retrieval at the Cross Language Evaluation Forum (CLEF). I will then introduce Rosetta, an integrated system that supports search and display of live and archived news feeds in four languages for users who know only English and I'll explain how we have used a formative evaluation process to co-evolve both the design of the system and of the ways in which it can be used. I’ll conclude the talk with a few thoughts on how some of these capabilities might be deployed in digital library systems.

 
