
Dialogue system: Difference between revisions

From Wikipedia, the free encyclopedia

Revision as of 18:47, 23 February 2019

An automated online assistant on a website - an example where dialogue systems are major components

A dialogue system, or conversational agent (CA), is a computer system intended to converse with a human with a coherent structure. Dialogue systems have employed text, speech, graphics, haptics, gestures, and other modes for communication on both the input and output channel.

What does and does not constitute a dialogue system may be debatable. The typical GUI wizard does engage in some sort of dialog, but it includes very few of the common dialogue system components, and dialog state is trivial.

Background

Dialogue systems based only on written text processing appeared as early as the 1960s.[1] The first speaking dialogue system was issued by the DARPA project in the USA in 1977.[2] After the end of this five-year project, some European projects issued the first dialogue system able to speak many languages (including French, German and Italian).[3] These early systems were used in the telecom industry to provide various phone services in specific domains, e.g. automated agenda and train timetable services.

Components

There are many different architectures for dialogue systems. Which components are included in a dialogue system, and how those components divide up responsibilities, differs from system to system. Principal to any dialogue system is the dialog manager, a component that manages the state of the dialog and the dialog strategy. A typical activity cycle in a dialogue system contains the following phases:[4]

  1. The user speaks, and the input is converted to plain text by the system's input recognizer/decoder, e.g. an automatic speech recognizer (ASR).
  2. The text is analyzed by a natural language understanding (NLU) unit.
  3. The semantic information is analyzed by the dialog manager, which keeps the history and state of the dialog and manages the general flow of the conversation.
  4. Usually, the dialog manager contacts one or more task managers that have knowledge of the specific task domain.
  5. The dialog manager produces output using an output generator.
  6. Finally, the output is rendered using an output renderer, e.g. a text-to-speech engine.

Dialogue systems that are based on a text-only interface (e.g. text-based chat) contain only stages 2–5.
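For a text-only system, stages 2–5 can be sketched as a minimal loop. The following Python is an illustrative toy only: the keyword-based NLU, the task manager and the template output generator are invented stand-ins, not components of any cited system.

```python
# Minimal text-only dialogue cycle covering stages 2-5:
# NLU -> dialog manager -> task manager -> output generator.

def nlu(text):
    """Stage 2: map plain text to a semantic frame (toy keyword rules)."""
    text = text.lower()
    if "train" in text:
        return {"intent": "timetable", "topic": "train"}
    if "hello" in text or "hi" in text:
        return {"intent": "greet"}
    return {"intent": "unknown"}

def task_manager(frame):
    """Stage 4: domain knowledge for the (hypothetical) timetable task."""
    if frame["intent"] == "timetable":
        return {"answer": "The next train leaves at 10:15."}
    return {}

def generate(frame, result):
    """Stage 5: template-based output generation."""
    if frame["intent"] == "greet":
        return "Hello! How can I help you?"
    if "answer" in result:
        return result["answer"]
    return "Sorry, I did not understand that."

class DialogManager:
    """Stage 3: keeps the history/state and manages the conversation flow."""
    def __init__(self):
        self.history = []

    def respond(self, user_text):
        frame = nlu(user_text)        # stage 2
        self.history.append(frame)    # dialog state
        result = task_manager(frame)  # stage 4
        return generate(frame, result)  # stage 5

dm = DialogManager()
print(dm.respond("Hi there"))            # -> Hello! How can I help you?
print(dm.respond("When is the train?"))  # -> The next train leaves at 10:15.
```

A real system would replace each function with a full component (statistical NLU, a database-backed task manager, a trained generator), but the division of responsibilities follows the cycle above.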

Types of systems

Dialogue systems fall into the following categories, which are listed here along a few dimensions. Many of the categories overlap, and the distinctions may not be well established.

Natural dialogue systems

"A Natural Dialogue System is a form of dialogue system that tries to improve usability and user satisfaction by imitating human behaviour" [5] (Berg, 2014). It addresses the features of a human-to-human dialog (e.g. sub-dialogues and topic changes) and aims to integrate them into dialogue systems for human-machine interaction. Often, (spoken) dialogue systems require the user to adapt to the system because the system is only able to understand a very limited vocabulary, is not able to react to topic changes, and does not allow the user to influence the dialogue flow. Mixed-initiative is a way to enable the user to take an active part in the dialogue instead of only answering questions. However, the mere existence of mixed-initiative is not sufficient for a system to be classified as a natural dialogue system. Other important aspects include:[5]

  • Adaptivity of the system
  • Support of implicit confirmation
  • Usage of verification questions
  • Possibilities to correct information that has already been given
  • Over-informativeness (give more information than has been asked for)
  • Support negations
  • Understand references by analyzing discourse and anaphora
  • Natural language generation to prevent monotonous and recurring prompts
  • Adaptive and situation-aware formulation
  • Social behavior (greetings, same level of formality as the user, politeness)
  • Quality of speech recognition and synthesis
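
Several of these aspects, namely mixed-initiative, over-informativeness, and correction of already-given information, can be illustrated with a toy slot-filling dialog. The following Python sketch is hypothetical and not drawn from any cited system; the regular-expression "understanding" is a stand-in for a real NLU component.

```python
# Toy mixed-initiative slot filling: the user may answer more than was
# asked (over-informativeness) or correct a value given earlier.
import re

SLOTS = ("origin", "destination")

def understand(text):
    """Extract slot values with naive patterns (stand-in for a real NLU)."""
    values = {}
    m = re.search(r"from (\w+)", text.lower())
    if m:
        values["origin"] = m.group(1)
    m = re.search(r"to (\w+)", text.lower())
    if m:
        values["destination"] = m.group(1)
    return values

class SlotDialog:
    def __init__(self):
        self.state = {}

    def turn(self, user_text):
        # update() lets a new value overwrite an old one (correction),
        # and several slots may be filled at once (over-informativeness).
        self.state.update(understand(user_text))
        for slot in SLOTS:
            if slot not in self.state:
                return f"Where do you travel {'from' if slot == 'origin' else 'to'}?"
        return f"Booking a trip from {self.state['origin']} to {self.state['destination']}."

d = SlotDialog()
d.turn("I need a ticket")        # system takes the initiative and asks
d.turn("from Berlin to Munich")  # over-informative: fills both slots at once
print(d.turn("no, to Hamburg"))  # corrects the destination already given
```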

Although most of these aspects are addressed by many different research projects, there is a lack of tools that support the development of dialogue systems addressing these topics.[6] Apart from VoiceXML, which focuses on interactive voice response systems and is the basis for many spoken dialogue systems in industry (customer support applications), and AIML, which is famous for the A.L.I.C.E. chatbot, none of these integrate linguistic features like dialog acts or language generation. Therefore, NADIA (a research prototype) gives an idea of how to fill that gap, combining some of the aforementioned aspects like natural language generation, adaptive formulation and sub-dialogues.

Applications

Dialogue systems can support a broad range of applications in business enterprises, education, government, healthcare, and entertainment.[7] For example:

  • Responding to customers' questions about products and services via a company's website or intranet portal
  • Customer service agent knowledge base: Allows agents to type in a customer's question and guide them with a response
  • Guided selling: Facilitating transactions by providing answers and guidance in the sales process, particularly for complex products being sold to novice customers
  • Help desk: Responding to internal employee questions, e.g., responding to HR questions
  • Website navigation: Guiding customers to relevant portions of complex websites—a Website concierge
  • Technical support: Responding to technical problems, such as diagnosing a problem with a product or device
  • Personalized service: Conversational agents can leverage internal and external databases to personalize interactions, such as answering questions about account balances, providing portfolio information, or delivering frequent flier or membership information
  • Training or education: They can provide problem-solving advice while the user learns
  • Simple dialogue systems are widely used to decrease human workload in call centres. In this and other industrial telephony applications, the functionality provided by dialogue systems is known as interactive voice response or IVR.

In some cases, conversational agents can interact with users using artificial characters. These agents are then referred to as embodied agents.

Toolkits and architectures

The following is a survey of current frameworks, languages and technologies for defining dialogue systems.

  • AIML (chatterbot language): XML dialect for creating natural language software agents. Affiliation: Richard Wallace, Pandorabots, Inc.
  • ChatScript (chatterbot language): Language/engine for creating natural language software agents. Affiliation: Bruce Wilcox
  • CSLU Toolkit: A state-based speech interface prototyping environment. Affiliation: OGI School of Science and Engineering, M. McTear, Ron Cole. Publications are from 1999.
  • NLUI Server (domain-independent toolkit): Complete multilingual framework for building natural language user interface systems. Affiliation: LinguaSys. Out-of-box support of mixed-initiative dialogs.
  • Olympus: Complete framework for implementing spoken dialogue systems. Affiliation: Carnegie Mellon University
  • Nextnova (multimodal platform): Platform for developing multimodal software applications, based on State Chart XML (SCXML). Affiliation: Ponvia Technology, Inc.
  • VoiceXML (VXML) (spoken dialog markup language): Multimodal dialog markup language developed initially by AT&T, then administered by an industry consortium, and finally a W3C specification. Primarily for telephony.
  • SALT (markup language): Multimodal dialog markup language. Affiliation: Microsoft. "Has not reached the level of maturity of VoiceXML in the standards process."
  • Quack.com QXML (development environment): Company bought by AOL.
  • OpenDial (domain-independent toolkit): Hybrid symbolic/statistical framework for spoken dialogue systems, implemented in Java. Affiliation: University of Oslo
  • NADIA (dialog engine and dialog modeling): For creating natural dialogs/dialogue systems; supports dialogue acts, mixed initiative and NLG; implemented in Java. Affiliation: Markus M. Berg. XML-based dialog files, no need to specify grammars; publications are from 2014.
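
Some of the entries above (the state-based CSLU Toolkit, Nextnova's SCXML basis) describe dialogs as state machines: each state has a prompt and transitions keyed on the recognized input. A minimal sketch of that style, in illustrative Python not tied to any listed product, might look like this:

```python
# Finite-state dialog in the spirit of state-chart based toolkits:
# each state carries a prompt and keyword-triggered transitions.

STATES = {
    "start":   {"prompt": "Do you want the weather or the news?",
                "next": {"weather": "weather", "news": "news"}},
    "weather": {"prompt": "It is sunny today. Goodbye!", "next": {}},
    "news":    {"prompt": "No news is good news. Goodbye!", "next": {}},
}

def run_turn(state, user_input):
    """Move to the next state based on a keyword in the user input."""
    for keyword, target in STATES[state]["next"].items():
        if keyword in user_input.lower():
            return target
    return state  # stay in the same state and re-prompt if nothing matched

state = "start"
print(STATES[state]["prompt"])
state = run_turn(state, "The weather, please")
print(STATES[state]["prompt"])  # -> It is sunny today. Goodbye!
```

Declarative formats such as SCXML or VoiceXML express essentially this structure in markup, with the platform supplying the recognizer and the transition engine.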

See also

References

  1. McTear, Michael, Zoraida Callejas, and David Griol, The conversational interface: Talking to smart devices, Springer, 2016.
  2. Giancarlo Pirani (ed.), Advanced algorithms and architectures for speech understanding, Vol. 1, Springer Science & Business Media, 2013.
  3. Alberto Ciaramella, A prototype performance evaluation report, Sundial workpackage 8000, 1993.
  4. Jurafsky & Martin (2009), Speech and Language Processing, Pearson International Edition, ISBN 978-0-13-504196-3, Chapter 24.
  5. Berg, Markus M. (2014), Modelling of Natural Dialogues in the Context of Speech-based Information and Control Systems, Akademische Verlagsgesellschaft AKA, ISBN 978-3-89838-508-4.
  6. Berg, Markus M. (2015), "NADIA: A Simplified Approach Towards the Development of Natural Dialogue Systems", Natural Language Processing and Information Systems, Lecture Notes in Computer Science, vol. 9103, pp. 144–150, doi:10.1007/978-3-319-19581-0_12, ISBN 978-3-319-19580-3.
  7. Lester, J.; Branting, K.; Mott, B. (2004), "Conversational Agents", The Practical Handbook of Internet Computing, Chapman & Hall.

Further reading