SoSe22: Constraint-based Semantics 2

From Lexical Resource Semantics

LRS as assumed in the course

Conventions for the LRS-specific features

DR: Discourse referent

EXTERNAL-CONTENT (EXC, EXCONT, EX-CONT)

INTERNAL-CONTENT (INC, INCONT, IN-CONT)

PARTS

LRS Principles

For a list of "official LRS principles" see the appendix of the textbook: https://www.lexical-resource-semantics.de/wiki/index.php/Appendix_LRS_Principles


Content Principle

In any headed phrase,
the DR values of the mother and the head daughter are identical.

LRS Projection Principle


(final version going back to Penn and Richter (2004)):

In every headed phrase,

  1. The EXTERNAL-CONTENT values of the mother and the head daughter are identical.
  2. The INTERNAL-CONTENT values of the mother and the head daughter are identical.
  3. The PARTS list of a phrase is the concatenation of the PARTS lists of its daughters.
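The three clauses above can be sketched in code. This is an illustrative sketch only: it stands in for LRS signs with plain Python dictionaries keyed by EXCONT, INCONT, and PARTS (the feature names come from the text; the dictionary encoding and the toy lexical entries are our assumptions).

```python
# Minimal sketch of the LRS Projection Principle, assuming signs are
# represented as dicts with EXCONT, INCONT, and PARTS (our encoding).

def project(head, nonheads):
    """Combine a head daughter with its non-head daughters into a phrase."""
    parts = list(head["PARTS"])
    for daughter in nonheads:
        parts += daughter["PARTS"]        # clause 3: concatenate the PARTS lists
    return {
        "EXCONT": head["EXCONT"],         # clause 1: EXCONT identical to the head's
        "INCONT": head["INCONT"],         # clause 2: INCONT identical to the head's
        "PARTS": parts,
    }

# Toy example (hypothetical lexical entries): "snores" as head, "Mary" as non-head.
snores = {"EXCONT": "snore(x)", "INCONT": "snore(x)", "PARTS": ["snore"]}
mary = {"EXCONT": "mary", "INCONT": "mary", "PARTS": ["mary"]}

phrase = project(snores, [mary])
print(phrase["PARTS"])   # ['snore', 'mary']
```

The order in which the daughters' PARTS lists are concatenated is left open by the principle; the sketch simply appends the non-heads after the head.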


Semantics Principle

In every headed phrase,

  1. If the nonhead is a determiner with an INCONT of the form Qx(φ:ψ), then the INCONT of the head is a component of φ and the head and the nonhead have identical EXCONT values.
  2. For each nonhead that is a quantified NP with an EXCONT value of the form Qx(φ:ψ), the INCONT of the head is a component of ψ.

External Content Principle

  1. In every phrase, the EXTERNAL-CONTENT value of a non-head daughter is an element of its PARTS list.
  2. In every utterance, every subexpression of the EXTERNAL-CONTENT value of the utterance is an element of its PARTS list, and every element of the utterance's PARTS list is a subexpression of its EXTERNAL-CONTENT value.

Meeting 02: Introduction

Note: The meeting takes place asynchronously!

Please watch the video for this meeting:


Definition of a model

The following material is an adapted form of material created by student participants of the project e-Learning Resources for Semantics (e-LRS). Involved participants: Lisa, Marthe, Elisabeth, Isabelle.

Watch a short podcast on what first-order models look like.

Based on this podcast, we can define a model as follows:

  • Universe: U = {LittleRedRidingHood, Grandmother, Wolf}
  • Properties:
    red-hood = { <x> | x wears a red hood } = { <LittleRedRidingHood> }
    female = { <x> | x is female } = { <LittleRedRidingHood>, <Grandmother> }
    big-mouth = { <x> | x has a big mouth } = { <Wolf> }
    live-in-forest = { <x> | x lives in the forest } = { <Grandmother>, <Wolf> }
  • Relations:
    grand-child-of = { <x,y> | x is y's grandchild } = { <LittleRedRidingHood, Grandmother> }
    afternoon-snack-of = { <x,y> | x is y's afternoon snack } = { <LittleRedRidingHood, Wolf> }
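The model above can be transcribed directly into code: properties are sets of 1-tuples and relations are sets of pairs. The Python encoding below is ours; the universe, properties, and relations are exactly those from the definition.

```python
# The Little Red Riding Hood model as Python sets of tuples.

universe = {"LittleRedRidingHood", "Grandmother", "Wolf"}

properties = {
    "red-hood":       {("LittleRedRidingHood",)},
    "female":         {("LittleRedRidingHood",), ("Grandmother",)},
    "big-mouth":      {("Wolf",)},
    "live-in-forest": {("Grandmother",), ("Wolf",)},
}

relations = {
    "grand-child-of":     {("LittleRedRidingHood", "Grandmother")},
    "afternoon-snack-of": {("LittleRedRidingHood", "Wolf")},
}

# Membership tests mirror the set-theoretic definitions:
print(("Wolf",) in properties["big-mouth"])                              # True
print(("LittleRedRidingHood", "Wolf") in relations["afternoon-snack-of"])  # True
```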


Computation of the truth value of atomic formulae

The following video presents the step-by-step computation of the truth value of two atomic formulae. The example uses a model based on Shakespeare's play Macbeth. The two formulae are:

  • kill(macbeth,duncan)
  • kill(lady-macbeth,macbeth)
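The computation in the video follows a simple rule: an atomic formula is true iff the tuple of its arguments is in the predicate's extension. The sketch below illustrates this; the extension of kill is our assumption (the video defines the actual model), based on the fact that in the play Macbeth kills Duncan.

```python
# Hedged sketch of evaluating atomic formulae. The interpretation of "kill"
# is an assumption standing in for the model defined in the video.

interpretation = {"kill": {("macbeth", "duncan")}}

def eval_atomic(predicate, args, interp):
    """True iff the tuple of arguments is in the predicate's extension."""
    return tuple(args) in interp.get(predicate, set())

print(eval_atomic("kill", ["macbeth", "duncan"], interpretation))        # True
print(eval_atomic("kill", ["lady-macbeth", "macbeth"], interpretation))  # False
```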


Computation of the truth value of complex formulae

The following video presents the step-by-step computation of the truth value of two formulae with connectives. The example uses a model based on Shakespeare's play Macbeth. The two formulae are:

  • ¬ king(lady-macbeth)
  • king(duncan) ∨ king(lady-macbeth)

The next video shows how the truth value of a more complex formula can be computed. The example contains two connectives:

kill(malcom,lady-macbeth) ∨ ¬thane(macbeth)

The video shows two different methods: top down and bottom up.
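The bottom-up method can be sketched as a recursive evaluation: represent formulae as nested tuples and compute the truth values of the innermost subformulae first. The extensions of kill and thane below are our assumptions (the video's model defines the actual ones).

```python
# Sketch of bottom-up evaluation of complex formulae. Formulae are nested
# tuples; the extensions below are assumptions, not the video's model.

interp = {
    "kill":  {("macbeth", "duncan")},   # assumption: Malcolm kills no one here
    "thane": {("macbeth",)},            # assumption: Macbeth is a thane
}

def evaluate(formula):
    """Recursively compute the truth value of a formula in the model."""
    op = formula[0]
    if op == "not":
        return not evaluate(formula[1])
    if op == "or":
        return evaluate(formula[1]) or evaluate(formula[2])
    # otherwise an atomic formula: ("predicate", arg1, ..., argN)
    return formula[1:] in interp.get(op, set())

formula = ("or",
           ("kill", "malcom", "lady-macbeth"),
           ("not", ("thane", "macbeth")))
print(evaluate(formula))   # False under the assumed extensions
```

The top-down method works through the same clauses in the opposite direction: it asks what would make the disjunction true and then checks each disjunct against the model.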

Quantifiers

Video introducing determiners into our logical language. (The video is based on the scenario of Romeo and Juliet.)
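In a finite model, a quantified formula can be evaluated by checking the individuals of the universe one by one: an existential claim needs at least one witness, a universal claim must hold of every individual. The small Romeo and Juliet model below is our assumption, not the one from the video.

```python
# Illustrative sketch of quantifier evaluation over a finite universe.
# The universe and the extension of "capulet" are assumptions.

universe = {"romeo", "juliet", "tybalt"}
interp = {"capulet": {("juliet",), ("tybalt",)}}

def holds(pred, x):
    """True iff individual x is in the extension of the property pred."""
    return (x,) in interp.get(pred, set())

# ∃x capulet(x): true if at least one individual is a Capulet
print(any(holds("capulet", x) for x in universe))   # True

# ∀x capulet(x): true only if every individual is a Capulet
print(all(holds("capulet", x) for x in universe))   # False
```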

Meeting 01

(no meeting)