Proceedings of the Evaluators’ Forum

April 21st-24th, 1991

Les Rasses, Vaud, Switzerland

Edited by

Kirsten Falkedal

[ISSCO, Université de Genève, CH-1227 Carouge/Geneva, Switzerland]



Theme 1:  Aspects of overall acceptability of an MT system

A First-Pass Approach for Evaluating Machine Translation Systems

           Pamela Jordan    1

Evaluation of Machine Translation Systems from a User’s Viewpoint – Some Critical Comments

            Ursula Bernhard     25

The Role of Lexicons

            Susanne Schlenker     29


Theme 2:  Reports on some actual evaluations

Evaluating Commercial MT Systems

            Elliott Macklovitch     37

Evaluation of MT Systems at Union Bank of Switzerland

            Doris Albisser    51


Theme 3:  Types of test material

User-Oriented MT Evaluation and Text Typology

            R. Lee Humphreys     55

A Management Tool for Test Corpora

            Gerardo Arrarte, Teófilo Redondo, Miguel Sobejano, Isabel Zapata      65


Theme 4:   Intelligibility, fidelity and other aspects of adequacy of translations

Quality Criteria for MT

             Lorna Balkan     73

Declarative Evaluation of an MT System: Practical Experiences

             Lorna Balkan, Matthias Jäschke, Lee Humphreys, Siety Meijer & Andy Way    85

Constructive Machine Translation Evaluation

             Stephen Minnis     99

Recipes for Escaping from the Partial Ordering of Candidate Translations: Some Consequences for the Evaluation of MT Systems

             Jan Dings     117


Theme 5: Approaches to error analysis

A Procedure for the Evaluation and Improvement of an MT System by the End-User

            Brigitte Roudaud    129

Evaluation of Machine Translation Systems: A System Developer’s Viewpoint

            Harri Jäppinen & L. Kulikov    143

The Respective Roles of the “Industrial Evaluators” and the System Developers in the Evaluation of MT Systems

            Sylvie Régnier     157

Comparative MT Performance Evaluation: an empirical study

           Adriane Rinsche      169


Theme 6: Test suites and standard test approaches

Measuring Compositionality of Transfer

          Björn Gambäck, Manny Rayner, Hiyan Alshawi & David Carter    181

Automatic Evaluation of Output Quality for Machine Translation Systems

            Yu Shiwen    185

Some Practical Experience with the Use of Test Suites for the Evaluation of SYSTRAN

           Ulrich Heid & Elke Hildenbrand    195

Automatic Evaluation of Translation Quality:  Outline of Methodology and Report on Pilot Experiment

            Henry S. Thompson     215


Theme 7:  Test suites and standard test approaches, continued

Test Suites as a Means for the Evaluation of Machine Translation Systems

            Yves Lepage    225

Developer-Oriented Evaluation of MT Systems

             Andrew Way    237

Evaluation of MT Systems - A Programmatic View

             Steven Krauwer    245



[Conference report in MT News International  no. 1, January 1991]