Unexpected @type scope behavior (term definitions persist throughout JSON tree) #174

Closed
dlongley opened this issue May 8, 2019 · 39 comments · Fixed by #195 or #200

@dlongley
Contributor

dlongley commented May 8, 2019

I was trying to use type-scoped contexts to define a @context and was surprised to discover that any type-scoped terms that get defined in the active context continue to be defined beyond the object with the matching @type. I think this is very unexpected behavior from an OOP modeling perspective. It is also very problematic for @protected terms: when terms are protected, you can't model objects of one type that contain objects of another type if the two types share a commonly used JSON key (which may or may not have the same term definition).

A playground example:

{
  "@context": {
    "@version": 1.1,
    "@vocab": "ex:",
    "@protected": true,
    "Library": {
      "@context": {
        "book": "library:book",
        "name": "library:name"
      }
    },
    "Person": {
      "@context": {
        "name": "person:name"
      }
    }
  },
  "@id": "the:library",
  "@type": "Library",
  "book": {
    "@id": "the:book",
    "about": {
      "@id": "the:person",
      "@type": "Person",
      "name": "Oliver Twist",
      "book": "unexpectedly defined as library:book!"
    }
  }
}

Produces these quads:

<the:book> <ex:about> <the:person> .
<the:library> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <ex:Library> .
<the:library> <library:book> <the:book> .
<the:person> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <ex:Person> .
<the:person> <library:book> "unexpectedly defined as library:book!" .
<the:person> <person:name> "Oliver Twist" .

http://tinyurl.com/y2x4szzb

If you use @protected here, you get an error (which I also find unexpected):

{
  "@context": {
    "@version": 1.1,
    "@vocab": "ex:",
    "@protected": true,
    "Library": {
      "@context": {
        "@protected": true,
        "book": "library:book",
        "name": "library:name"
      }
    },
    "Person": {
      "@context": {
        "@protected": true,
        "name": "person:name"
      }
    }
  },
  "@id": "the:library",
  "@type": "Library",
  "book": {
    "@id": "the:book",
    "about": {
      "@id": "the:person",
      "@type": "Person",
      "name": "Oliver Twist",
      "book": "unexpectedly defined as library:book!"
    }
  }
}

That error happens even if name is defined the same way for both types.

I suspect that type-scoped terms behave this way because it was easy to implement, but I think it is very surprising behavior that may not have been exposed yet due to limited examples.

It's possible that there's an easy fix for this. I think we should change this behavior so that we track whether a term definition in the active context was defined via a type-scoped context and whether or not it replaced a non-type-scoped term when it did so. Then, whenever traversing into one of the typed object's properties during processing, we revert all type-scoped terms to their previous definitions, which may mean setting them to null (clearing them) if they were previously undefined. Then processing can continue as normal.
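
For reference, here is a sketch (not verified processor output) of the quads the first playground example above would presumably produce once type-scoped terms stop leaking: only the predicate of the last book quad changes, falling back to the @vocab mapping ex:book instead of library:book.

<the:book> <ex:about> <the:person> .
<the:library> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <ex:Library> .
<the:library> <library:book> <the:book> .
<the:person> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <ex:Person> .
<the:person> <ex:book> "unexpectedly defined as library:book!" .
<the:person> <person:name> "Oliver Twist" .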

@dlongley
Contributor Author

dlongley commented May 8, 2019

Also, if this fix works, I think the JSON-LD syntax spec should clarify that the changes to the active context that bring in type-scoped terms only apply for terms used on the object with the matching type.

@dlongley
Contributor Author

dlongley commented May 8, 2019

I've implemented the fix in a PR to jsonld.js here: digitalbazaar/jsonld.js#312 and fixed tests and added two more in PR w3c/json-ld-api#89.

Note that if you define a @type-scoped context that has property terms with their own scoped contexts, those will still be properly applied to deeply nested nodes within a type. This fix only ensures that terms defined for objects with specific @type values won't leak to other nodes that don't have those types.
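
As a minimal sketch of that distinction (hypothetical terms, not taken from the PRs or tests): the property-scoped context attached to book keeps applying to nodes reached through book, while the Library type-scoped term definitions themselves stop applying once you leave the Library node.

{
  "@context": {
    "@version": 1.1,
    "@vocab": "ex:",
    "Library": {
      "@context": {
        "book": {
          "@id": "library:book",
          "@context": {"title": "library:title"}
        }
      }
    }
  },
  "@type": "Library",
  "book": {
    "title": "expands to library:title via the property-scoped context on book",
    "contains": {
      "title": "still library:title in deeper nodes reached through book"
    }
  }
}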

@gkellogg
Member

gkellogg commented May 8, 2019

Why is the expectation that type-scoped contexts are limited to the object containing the type different from the expectation that property-scoped contexts are limited to the object value of the property?

I haven't looked at your PR, but it would seem that the expansion algorithm needs to maintain two different contexts: the one it received (with possible updates from property scoping), and the one that comes from @type. When an embedded context is encountered, it needs to update both the type-scoped copy and the passed-in copy. This also needs to be reflected when handling nested properties.

It does dilute the message that property- and type-scoped contexts behave exactly as if they had appeared inline, as the type-scoped context would disappear when going deeper, while the property-scoped and any directly scoped contexts persist.

Also, what happens when a type-scoped context defines a term with a scoped context which is then used? As the algorithm is defined, the expansion algorithm won't see that scoped context, as it's not defined in the current context.

@dlongley
Contributor Author

dlongley commented May 8, 2019

@gkellogg,

Why is the expectation that type-scoped contexts are limited to the object containing the type different than for property-scoped contexts being limited to the object value of the property?

I expect the primary audience for @type-scoped properties to be people that are using OOP modeling. This means defining a type and the properties you expect to see on that type. The "scope" is the object with a matching @type. If you move beyond that scope (into another object of another @type), it's quite unusual for the terms to be defined. This becomes even more obvious as you move into some deeply nested structure that has a variety of other typed objects along the way.

The primary audience for property-term scoped properties is one that is defining properties for different sections of their JSON tree. If you traverse into branch X of the document, then terms A, B, and C will be defined. This is also intuitive for the audience. I think having to redefine them when you're on the same branch (though you've gone deeper into it) would be quite unexpected. This is different from the @type situation because you change the @type scope when you move deeper into a JSON branch (because @type itself doesn't persist), whereas the branch does persist; you're just further along the branch.

I haven't looked at your PR, but it would seem that the expansion algorithm needs to maintain two different contexts, that it received (with possible update from property scoping), and those that come from @type. When an embedded context is encountered, it needs to update both the type-scoped copy and the passed in copy. This also needs to be reflected when handling nested properties.

You don't need to maintain two different contexts, you create a new active context (a clone that removes the @type-scoped terms) when you recurse into the typed object (when you follow its properties to other objects).

Also, what happens when a type-scoped contexts defines a term with an scoped context which is then used? As the algorithm is defined, the expansion algorithm won't see that scoped context, as it's not defined in the current context.

I have a test for this and it is seen. In that case, a term scoped context is created prior to recursing into the object (there is no change to the existing algorithm). Since it is a property-term-scoped context, it functions as expected (defining terms anywhere along the tree branch).

@dlongley
Contributor Author

dlongley commented May 8, 2019

With the above changes, I was able to update the VC context to use type-scoped contexts with @protected terms:

https://raw.githubusercontent.com/dlongley/vc-data-model/flatten-context/contexts/credentials/v1

@gkellogg
Member

gkellogg commented May 8, 2019

Okay, that looks like a good approach. I'll work on my own implementation.

@iherman
Member

iherman commented May 18, 2019

This issue was discussed in a meeting.

  • RESOLVED: Type scoped contexts will be shallow and not be inherited via properties of instances of the type, and we will add a syntactic sugar for a wildcard match on properties on the type to define their context
3.1. Type-scoped contexts: #174
Rob Sanderson: dlongley the first one of timeliness for you is type scoped contexts
… would you like to summarize?
Dave Longley: I went to use type scoped contexts to create the Verifiable Credentials context
… but immediately hit issues
… these actually bleed beyond being scoped to a particular type
… I fixed our implementation…and gkellogg fixed his implementation
… this issue is about fixing the text to match the implementations
Rob Sanderson: so what you’ve described definitely sounds like a bug
… it shouldn’t bleed outside of that type
… so, because name is defined in Person, it’s scoped to Person
… but you can also have name in Library and have that be scoped to Library
Dave Longley: right, but if you combine them, then you get a clash of terms
… if you put protected on these things in the next example
… you would be told that there were issues…when in fact there aren’t
… this is more like a bug that is revealed by protected
… so if you had also used some other terms that you intended to be dropped, those would get picked up by Library terms
Rob Sanderson: right. without Person being in the hierarchy
Dave Longley: even if it were there
… if you had them nested, and had terms you wanted dropped, if that same term is defined earlier, it remains defined
… so the Library terms don’t stay within Library, they bleed out
Rob Sanderson: well…that sounds like what I’d expect, actually
Dave Longley: so, for property scoped contexts
… but for the type scoped contexts, it’s more like object oriented expectations
… so you don’t want unrelated contexts showing up in unexpected places
… property scoped terms are different
… those stick around
… the property scoping works as expected
… but the type scoping shouldn’t behave like a property scoped context
Rob Sanderson: so the situation we have in IIIF.io is…
… we use ActivityStreams properties
… which are very very broad…like items
… it’s the same JSON key and RDF property
… it’s just, “here’s the things in this list”
… but at various places in the tree, the items are of various types
… so when you get to an Annotation, you then need to use type scoping to deal with that change
… so we’d need it to re-re-redefine its terms to deal with this change
Dave Longley: you can put a property scoped context inside a typed scoped space
Rob Sanderson: what happens when those continue into an @container: @list scenario
… so, for example, you can have collections, which either contains collections or things
… so a tree, or leaf nodes
… so trees get one context, and leafs get another
… how would that not collide
… I assume you’d use type scoped?
Dave Longley: it’s OK to use type scopes
… your concern is that using type scopes don’t travel down the branch
Rob Sanderson: yes. specifically when there are 1.0 contexts
Dave Longley: when you use type scopes, you can choose which path is traveled down
… you can say in TypeA use this context, and then within these properties use that other context
Rob Sanderson: 1.1 context for IIIF - https://preview.iiif.io/api/image-prezi-rc2/api/presentation/3/context.json
Dave Longley: one of the goals is to deal with ActivityStreams scenarios specifically using protected for use with non-JSON-LD processors using ActivityPub
Rob Sanderson: in that example, there are typed contexts which should be overridden further down
… like below Annotation, there are further nodes
… what is the context there?
Dave Longley: if the only thing that’s bringing in the context is this type scoped context
… that context will get removed if you move on to something that is not scoped to the same thing
… like an ordered collection or something
… they’ll need to define their own
… or use the earlier one
Rob Sanderson: that’s not what 1.0 would say
Dave Longley: well, 1.0 didn’t have typed scoped contexts at all
Rob Sanderson: there doesn’t seem to be a solution for this particular case
Dave Longley: well, there is a solution, which is to say for those properties, you can
… if you want every single property in an AnnotationCollection, then you’d need to use that via typed scoped contexts throughout the collection
Rob Sanderson: but given that most of these contexts are just the properties
… you’re doing an awful lot of rewriting with that approach
Dave Longley: there is a solution
… which is unpleasant
… but without this bug fix, there’s no way to do the other approaches
… you literally cannot use type scoped contexts and protected terms together in the way you’d expect
Rob Sanderson: is there a way to flag which of the two?
… that seems like a lot of processing
… it would require context import metadata…
… like only this class + all that descend from it
Dave Longley: yeah, we do that with property scoped contexts
… but you want to do that without redefinition
Rob Sanderson: right. if the Annotation context got updated, we’d then have to synchronize this context with that one.
… so…maybe…if there were an Annotation 1.1 context
… could it then redefine things within itself?
Dave Longley: yes. that would work
Rob Sanderson: so for this particular case, that would be the right solution
Dave Longley: correct
Rob Sanderson: so, given that context files are not versioned in the same way that specifications are
… we could propose a 1.1 context
Dave Longley: this would also afford the ability to use the @protected feature
… which would help JSON processors
… who don’t want to do JSON-LD processing
… and want terms to remain identical in both ecosystems
David Newbury: does this only protect at a certain level in the hierarchy?
… or does it descend into the tree?
Dave Longley: any term definition does not continue down the tree–if you change type scope
David Newbury: given your definition, that makes sense
… but that’s not what I expected
… I’d thought that this covered the terms in the hierarchy under that type
Dave Longley: my opinion of that is when you’re thinking of it from either OOP or graph perspectives
… you move into a node, and that node has a class with certain properties
Rob Sanderson: Another type scoped context: https://linked.art/ns/v1/linked-art.json
Dave Longley: so to go into a node is to move into a new space
… and if it cascaded as you describe it would be incompatible with @protected
David Newbury: I think it's two different contextual models of how this works
Rob Sanderson: yep. we just need to verify the models against the use cases
… so here’s another type scoped context
… right at the top, there’s some vanilla ones
… but the 4th class is Period
… we want to rename some of these to simpler names
… there’s no importing of anything
… so these would not descend down
Dave Longley: yep. that should work just fine
Rob Sanderson: anyone have other 1.1 contexts?
Simon Steyskal: I was also kind of confused
… given that we have multiple people confused by this–or having differing expectations…maybe we should put this in the primer or something?
… if we decided on the specific way to deal with this, then we should write-up the foundation of how this works
… because it’s not like a class hierarchy
… and we want to avoid those expectations causing confusion
Rob Sanderson: one of the reasons why I had the opposite expectations from dlongley
… is that the way gkellogg has expressed in the past
… was to avoid contexts inline in the JSON
… if you had a top-level node with a context, then that context would extend down
… whereas if you had @type with a scoped context, it would be equivalent to putting that context in at that part of the tree
… which would come with the expectation of cascading–as in the case of an inline context at that point in the tree
David Newbury: so, say someone else is using our vocabulary
… what would it require us to do to rewrite the linked-art context
… to be sure that our terms stay scoped into our context
… is it the same as what’s being considered for ActivityStreams?
Rob Sanderson: only if they’re type scoped
Dave Longley: you either define properties that can appear anywhere
… or those that can be used within certain types
… or those that can be used within any type
… so, yes the way gkellogg had stated this as pulling context into a type
… and thinking about it as pulling a context in “in place” as a context object would
… that, however, doesn’t work well with protected, etc.
… so, the change would be that type scoped would now map singularly to that scope
David Newbury: so, assuming that someone uses a type scoped definition
… when I put this type in, it’s a vocabulary concept
… so where we put our contexts have names and dates
… we wouldn’t want to implement them at every level
… we’d have to protect the linked-art context against being used as type scope context
Rob Sanderson: so, like IIIF includes Annotations
… so when you descend from Concept to name via identifiedBy property
… you’d no longer have an active context
… the context doesn’t have a notion of inheritance
… so it’s not really Object Oriented
… if it did, then you’d be fine
… so we instead would have to enumerate every possible property of every possible class
Dave Longley: given that I have no knowledge of what you’re talking about…
… from the high level it sounds correct
… if you want to scope against types, use types on everything
… if you want your types to survive under all the properties, then yes…this will be verbose
… there could be another simpler syntax
… but I don’t think the solution is to not handle this sort of inheritance
… if that is indeed a need to solve that verbose syntax, then we should solve that on its own and not give up this sort of inheritance
David Newbury: this sounds like shallow scoping vs. deep scoping
Dave Longley: to me its a difference between property scoping vs. type scoping
… when you use property scoping, you’re cascading into the tree
… but with type scoping, you have no idea where it’s going next
… this gets worse with protected
… because if you don’t know where a type usage is going next, there’s no way to correctly protect its terms
… so maybe we need like an @any property to handle this scoping with protecting
David Newbury: so, if you do want shallow scoped types, could you reimport the base context back via nullifying?
Dave Longley: nullifying can’t do that
David Newbury: but if you knew what it was, could you bring it back in?
Dave Longley: that would be dependent on how you got that base context
… which sounds very difficult
David Newbury: so, if we have deep scoping, then shallow scoping seems hard, and vice versa
Dave Longley: there might be a cleaner way to say, in this scoped context, you can use these type scopes
… to me one seems easier to solve than the other
… but it certainly seems like they do differ
David Newbury: I think there is a way to do them both…and maybe syntax sugar would make things simpler
Dave Longley: I think where we're at now, shallow is broken
… so if we fix for shallow, we make deep harder
… and that’s where this syntactic sugar might help
Rob Sanderson: what about nullification and circular re-importing?
… I fear you’d end up recursively recursing and cursing about recursion
… so the @properties: use-this-context is way less verbose than defining every single property
Dave Longley: yeah, it’s effectively a way of defining a base context
Rob Sanderson: right. @base-context (vs. @base)
… it was very good to discuss this
… anything else? or are we at proposal time
Dave Longley: so, we’d need to change this to default to shallow, so VCWG can use protected
… so the VCWG testing can move forward
Proposed resolution: Type scoped contexts will be shallow and not be inherited via properties of instances of the type, and we will add a syntactic sugar for a wildcard match on properties on the type to define their context (Rob Sanderson)
Dave Longley: +1
Benjamin Young: +1
Rob Sanderson: +1
David Newbury: +1
Simon Steyskal: +1
Rob Sanderson: so this solution seems better than the other options…which all seem worse
Jeff Mixter: +1
David I. Lehn: +1
Benjamin Young: ..and for the linked-art case, we simply have in the context generating script, “for each class, put in magic-ness to use self for all the properties of this thing”
Resolution #2: Type scoped contexts will be shallow and not be inherited via properties of instances of the type, and we will add a syntactic sugar for a wildcard match on properties on the type to define their context

@iherman
Copy link
Member

iherman commented May 31, 2019

This issue was discussed in a meeting.

  • RESOLVED: Un-defer #108 with propagation as the use case
3.1. Type scoped context continued; property wildcard
Rob Sanderson: link - #174
Rob Sanderson: what is the difference between type-scoped contexts and property-scoped contexts? One view is that it is scoped to the properties of that class; others thought of it as a replacement for an inline context, which would then expand beyond that class.
… where we came to last week is that there are good use cases for both, but the only way to allow for both use cases is to have type scoped contexts be class-only, and to have a way to expand beyond them by setting a default context within.
… is that sufficiently detailed to explain where we are right now?
Gregg Kellogg: I didn’t quite understand until right now. I’m trying to think of the syntax
Dave Longley: my understanding is that what we're looking for is to take this other context and define it within this scoped context, and then use it for all properties within that scoped context
… We want to be able to reuse existing contexts within a type-scoped context, so we don’t have to be verbose typing out all of those contexts again.
… syntactically, we can currently do this by re-writing all contexts within each of those properties, but that’s verbose.
Rob Sanderson: Example use case: https://preview.iiif.io/api/image-prezi-rc2/api/presentation/3/context.json type scopes in http://www.w3.org/ns/anno.jsonld for Annotation and AnnotationPage
Ivan Herman: so, if I want to have all schema properties valid within that type-scoped property, and to inherit, and do it by including the schema context file, not each property inline.
Rob Sanderson: an example: we're using type scoping within annotations to pull in the annotation context, which is a 1.0 context, and since the decision is that the annotations referred to would no longer inherit, this would need to be modified with a new keyword to maintain this behavior instead of retyping each property for each context
Ivan Herman: so we want hasBody to remain an annotation?
Rob Sanderson: we want the resource that is pointed to by that property to be an annotation, even though that annotation context is only valid on that class
Gregg Kellogg: I understood that this could be for specific properties, but I thought of wildcard as applying to all properties
… for instance, if you’re traversing to FOAF, you might not want to continue to use schema.org properties
… syntax and wildcard: we could use full wildcarding or something like a URI prefix
… but then what happens when they have contexts defined? I presume they’re honored as well
… how deeply have we thought about the various cases
… and would it be a property of the property term definition, or a property of the class term definition that then defines those terms?
Rob Sanderson: we had not talked about globbing or real wildcarding: we’d talked about a shorthand for not retyping all properties within that context.
… you would then need to define all schema.org contexts for every class below that needs them to apply
… the question is at what level does the wildcard apply? Is it at the ontology level, or is it at the context level?
… we’d talked about it at the context level, which is consistent with how other things work
Gregg Kellogg: expanding treats properties as terms, not expanded URIs, and compacting we select terms by matching, not via URI. Enumerating properties by terms, not URIs, is more consistent with how we do things currently
Rob Sanderson: some solution that says, for all the terms within this context, treat them as property-scoped within this class
… like what dlongley put in the chat: for all properties, treat them as property-scoped contexts.
… which then wouldn’t need actual wildcarding, just matching
… which seems easier
David Newbury: I’m wondering if this doesn’t suggest that @type scoping itself could be clearer and provide the approach to inheritance that people are expecting here
Rob Sanderson: could we just have two keywords, one for each behavior?
Dave Longley: I don’t know if it’s exactly the same, because comparability differs here.
… when we pull them in, we treat them all as if they’re property-scoped terms, which is different than the behavior before.
Dave Longley: +1 to something along the lines of what gregg is saying
Gregg Kellogg: I think that if we have a property that can appear in a type-scoped context that says that all terms within that context inherit that context, or perhaps enumerated terms inherit, and in the absence, no terms inherit, and then it could not appear only on type-scoped contexts
Dave Longley: I think that we’re thinking that each one of these contexts would then consider the type scoping as if it were defined on all descending properties
Gregg Kellogg: and it would be recursive–this would then travel down the property chain
Dave Longley: yes
Gregg Kellogg: unless that property redefines its own scope
… that seems reasonable
Rob Sanderson: can we see a straw person example?
Gregg Kellogg: @inheritPropertyScopes: true
Gregg Kellogg: @inheritTypeScopes: [‘a’, ‘b’]
Gregg Kellogg: do those terms need to be defined within that scope, or do they just need to have been in scope at the time it’s interpreted?
Rob Sanderson: that would not work for our use case, since the properties of the annotation are not known higher-up the chain
Dave Longley: processing: do you see if it appears up higher to see….(lost the chain here)
Gregg Kellogg: I think your use case would be solved by using true
Rob Sanderson: correct.
Dave Longley: when defining a term within a type-scoped context, look for @inheritPropertyScopes
Dave Longley: and if that appears, add a property-scoped context to the term definition
Dave Longley: (unless one already appears, as that one would take precedence)
Gregg Kellogg: we should come up with a better name
Rob Sanderson: in our case, at the high level, our use case is…
Rob Sanderson: { 'Annotation': {"@id": "oa:Annotation", "@inheritPropertyScopes": true, "@context": "http:...anno.jsonld"}
Pierre-Antoine Champin: @propagates ?
David Newbury: `@descends` ?
Rob Sanderson: we can then just update the 1.1 context
Benjamin Young: This is pretty ugly, but I think we can make it prettier. Do we use that case anywhere, and you will really need to understand the plumbing to make this understandable.
Gregg Kellogg: @propagates +1
Benjamin Young: we’re really going to need a primer.
Dave Longley: @propagate: true|[terms] seems ok
Benjamin Young: the more we can reduce that cognitive pain…we need something other than reading the spec to explain how this works.
Rob Sanderson: there seems to be consensus around @propagate?
Proposed resolution: Create a new keyword, @propagate, for type scoped contexts which takes either a boolean (false is default) or an array of terms, which when present means that all or the listed terms propagate the context listed as the value of the keyword (Rob Sanderson)
Dave Longley: @propagate “propagates” the type-scoped context as a property-scoped context for all listed terms
Gregg Kellogg: we could consider context as an array, and the first item would be @propagate true. This is getting hacky…we’re pulling on a thread and we can’t stop pulling
… I’m less in favor of this than making it a property of the context itself.
… if it can’t work except this way…
… I think this changes the default…
… and if you want the next one to be false…
… how do you inherit the default again?
… these questions are why I’m not happy with these.
Rob Sanderson: This could be solved with metadata on the context, but we’ve deferred that conversation
Gregg Kellogg: how problematic is it to just refer to it in the context?
Rob Sanderson: it means that we can’t include 1.0 contexts, which is not great.
Gregg Kellogg: you can still refer to them…
Rob Sanderson: for type-scoped contexts, if you want to refer to a 1.0 context, if you want to type-scope them in, you'd need to rebuild those contexts when @propagate is a property of the context, instead of the referring context
Ivan Herman: Red flag: we were wondering about feature freeze, and we are discussing something here that is not thought through yet, and it’s a long discussion, and it’s practically June
… I am worried here. Protected took two months, and we’re approaching the same place.
Rob Sanderson: the issue is that Verifiable Credentials have assumed one way, and the spec works the other way, so there needs to be a decision one way or the other
… hopefully a solution that works for both.
… we can stick with the spec
Gregg Kellogg: we can do type scope as committed, and without dealing with propagation, or we can remove the type-scoped property…
Rob Sanderson: but that chooses one use case over the other
… we need to deal with the competing use cases
… or revert back to the previous spec
Dave Longley: it doesn’t make the previous use case impossible, just verbose.
… the other way around was literally impossible
Rob Sanderson: consider schema.org, you’d need to enumerate all terms in schema on each property. It’s possible, but implausible.
… a property on the 1.1 context with propagation, and define a 1.1 context, and @propagates : true
David Newbury: does this mean that the writer of this context
… decides whether it propagates up or down?
… wouldn’t that mean the annotations group would need to define two different versions of that context?
Rob Sanderson: yes. that is indeed the case
… which also seems…not ideal
Gregg Kellogg: I think the way to handle this is to set @propagate changes the default to subsequent properties
… we could include contexts judiciously…
Rob Sanderson: the ugly version of a list where there are processing flags and contexts within the context definition
… documentable, but not pretty
… and order dependent
David Newbury: do we have a sense of which of these inheritance models is more common?
… at this point it feels like we’ve built in the ability to turn this on or off
… or is that not correct?
Rob Sanderson: I don’t think that we know
… currently, all of the inheritance models propagate. In 1.0, everything does so.
… that implies that propagation is more common, but people coming from object-oriented might think otherwise
Pierre-Antoine Champin: I’m not convinced by this, but…I don’t think this has been considered.
… another keyword for non-propagating contexts?
… remove the flag, make it cleaner
Rob Sanderson: that does seem cleaner
Ruben Taelman: I like the idea, but that might make contexts even more complicated, since you'd now have two ways to find a context
… It is feasible, but complicated
Pierre-Antoine Champin: Just to be clear, I share that concern.
… two keywords for contexts is ugly
Dave Longley: it could be a keyword on the type definition instead
David Newbury: … and I wanted to point out that, considering Rob's example, we could have @context always propagate, and a separate keyword for dlongley's proposal
Gregg Kellogg: the other thing, considering contexts with metadata, where we had metadata, and that could solve this
… then we could set some of these properties…
Rob Sanderson: two routes: new keyword, context reference metadata
Benjamin Young: 1.0 propagates now, so the default is propagate true. Then what we need is the way to prevent that, and to say that this is exclusive
Rob Sanderson: I would be fine with that
Ivan Herman: here is the issue where this was discussed: #108 with a syntax possibility at: #108 (comment)
… there’s a syntax proposal there
Benjamin Young: I see it differently, type-scoped contexts didn’t exist in 1.0 and are a new concept … and scoping “type-scoped contexts” to types makes perfect sense.
Ivan Herman: nobody seemed happy with metadata at the time…if this is the only one we define, it allows others…I would not propose integrity now
Dave Longley: +1 to providing a future hook
Proposed resolution: Un-defer #108 with propagation as the use case (Rob Sanderson)
Rob Sanderson: +1
David Newbury: +1
Gregg Kellogg: +1
Tim Cole: +1
Dave Longley: +1
Ruben Taelman: +1
Harold Solbrig: +1
Ivan Herman: +1
Adam Soroka: +1
Pierre-Antoine Champin: +1
Benjamin Young: +1 (with concerns about scope creep)
David I. Lehn: +1
Resolution #2: Un-defer #108 with propagation as the use case
Rob Sanderson: we should then look at 108 over the week and come up with a proposal for contexts
Gregg Kellogg: it might be good if this were done through more detailed proposals in advance
Rob Sanderson: so, everyone who’s not on a trip, please contribute to the issue
… and it is the top of the hour

@Descends

Descends commented May 31, 2019 via email

@gkellogg
Member

@Descends Sorry, you were likely tagged because of an @descends in the meeting minutes, which should have been escaped. It is a possible keyword, which happens to be the same as your user name.

@Descends

Descends commented May 31, 2019 via email

@gkellogg
Member

API updated to fix this in w3c/json-ld-api#89.

gkellogg added a commit that referenced this issue Jun 19, 2019
…ects in which they're used. Add changes for type-scoped contexts.

Fixes #174.
gkellogg assigned gkellogg and unassigned pchampin Jun 19, 2019
gkellogg added a commit that referenced this issue Jun 20, 2019
…ects in which they're used. Add changes for type-scoped contexts.

Fixes #174.
@gkellogg
Member

#195 was reviewed by @pchampin and @gkellogg. w3c/json-ld-api#89 by @dlongley and @gkellogg. Closing.

azaroth42 reopened this Jun 21, 2019
@iherman
Member

iherman commented Jun 21, 2019

This issue was discussed in a meeting.

  • ACTION: write up proposed syntax and functionality for @src/@propagate (Rob Sanderson)
3.1. Consider context by reference with metadata #108
Benjamin Young: #108
Ivan Herman: also: #174
Benjamin Young: This is about a more advanced context object that includes referencing other contexts with metadata, for a whole host of issues. The most recent use case is around setting propagation.
… Rob you were the last to propose some things.
Rob Sanderson: At this point I think we need this particular pattern. Of the proposed colors for the bikesheds, @src seems to convey the appropriate semantics. It’s not necessarily a link/href, @context and @id would make for a lot of overloading that would maybe cause confusion.
@import isn’t too bad but pchampin indicated why it may not be ideal.
… It seems to me like a reasonable way forwards, assuming, it’s implementable and unambiguous.
Gregg Kellogg: I guess my concern about @src is … one is that we don't typically use abbreviated keywords in JSON-LD, @source might solve that. The other thing is that my familiarity is similar to href in HTML, where it doesn't provide for an inline option, if we wanted to allow for those there, which would sort of make sense. @import seems a little more unambiguous.
Pierre-Antoine Champin: Regarding what Gregg just said, I think you have a point, indeed. I wanted to ask about the use cases, I realized after making those proposals, we might not cover one of those. We reactivated this issue about the idea that parameters could be added to this context, to allow this context to propagate/not propagate.
Rob Sanderson: Reference: The propagate case - #108 (comment)
Pierre-Antoine Champin: For this use case we might want to do this with a URL reference or an embedded @context. Metadata is always about a referenced context, not one that is directly embedded. Does that cover all our use cases?
Benjamin Young: Good flag to raise.
Ivan Herman: It’s very close to what I wanted to ask. I’m not completely sure what we’ll use it for apart from the fact that it looks nice.
… There are things like sealed and SRI that came up but we’re not talking about that anymore. What are the use cases we want to use it for?
… Btw, ‘src’ is in use for the image URL in HTML, very close to href.
Rob Sanderson: I wanted to ask the same question, do we really want to/need embedded contexts here, or is external sufficient? I don't know why you would use an embedded one.
… If it’s only for reference, @src is ok, but what’s the use case for embedded.
Gregg Kellogg: I think we might want to constrain ourselves to a keyword that references an external resource about which we might assert some meta data. Rather than keeping it open about importing several things – which of them are we asserting things about, as well as an embedded case. There’s no use case for that, only some notion of uniformity to allow that.
… Now it starts looking overly generalized. If we need a way for a context to reference another one with the semantics that that context is imported into the referencing context that would also allow some room for asserting information about the referenced context, that is a narrow solution which addresses that use case.
Rob Sanderson: +1
Benjamin Young: Roughly like what Rob just posted in chat.
Gregg Kellogg: In which case @source is ok, want some consistency.
Rob Sanderson: Suggested syntax example: {"@context": [{"@src": "http://.../context.json", "@propagate": false}, ...] }
Ivan Herman: I must admit that I didn't even consider having this embedded. Rob, would you want to comment … a question I have is, are we sure this is the only property? We don't have any other metadata properties to define in 1.1 so far?
Rob Sanderson: I don't think so, this is the only one we have so far, @propagate.
Ivan Herman: I’d like to be sure this is the right solution.
Pierre-Antoine Champin: Just to be clear, I was not trying to generalize or over-generalize, I was just pointing out that, as Ivan and Rob pointed out … @propagate is why we unearthed this issue. When I think about it, it makes sense to use it on an embedded context. It started with scoped contexts, and most of the time those are embedded.
… Is @source really a solution to the problem we were trying to solve?
… My personal answer would be “no it’s not” and maybe we reopened the wrong issue to solve that problem. This mechanism as we envisioned it is more about referenced contexts, not embedded ones.
Rob Sanderson: I think it is still the right thing to reopen. If it’s embedded we don’t really need this pattern at all.
… You could restructure your context to do things differently.
… When you want to reuse an external context, then you need to say whether or not the terms of that context propagate or not.
… If it’s embedded you just set it and you’re done.
… When you pull in another context, like a 1.0 from Annotations, it assumes something that isn’t intended and you need to change it.
… I think the case is an external context and it should not have the default propagation value.
Gregg Kellogg: If it’s within a type then type contexts don’t propagate.
Rob Sanderson: Right and this is to change this behavior so it does.
… Yes, it’s to fix the impedance mismatch between 1.0 and 1.1.
Ivan Herman: I am a little bit lost. What I would propose is that somebody comes up with text, possibly a PR that defines this syntax so that it’s clear what it is. Defines its usage with propagate and what that means. I’m a little bit lost. Having something specific written down would help.
Gregg Kellogg: So I think my confusion is that I recall the discussion about this … as wanting the ability to reference an external 1.0 context where we’d have to have @version specified within there. You can’t update the referenced context to do that.
… If that is one of the use cases … the other use case is to override the propagation behavior of type-scoped contexts. Not sure how it does that cleanly. Not sure how this relates to the type-scoped propagatability without something more explicit.
Rob Sanderson: This is the issue that we discussed a couple of weeks ago now. Where, it’s the combination of the 1.0 and the type-scoped context where it really matters. Because 1.0 contexts are defined without the notion of type-scoped contexts or propagation then they’d never be written in such a way that it’s prevented because it’s not possible in 1.0.
… When type-scoped context propagation gets prevented in 1.1 we need a way to override that for 1.0 and potentially for 1.1.
… That would be a useful side effect I think to be able to do that.
… The referenced context might be defined without any notion of type scoping at all.
… If you want to include it in a way that is compatible with the rest of your constructions which would be propagating or not – you’d want to make sure it was interpreted consistently.
Gregg Kellogg: I think there’s a bunch of use cases that need to be considered about what the effect is. Does this include the use of a @propagate keyword or not? In one example – an embedded context that references another one and that includes @propagate: false, is that keyword in play and if not, what are the behaviors?
… If you reference a 1.0 context does that change the behavior?
… I think we need test cases for what the expected behavior is.
Rob Sanderson: I’m happy to write up in the issue in 108 rather than in the propagating one … a proposed syntax and the proposed functionality.
Gregg Kellogg: I think part of that functionality is … if I have a context that defines things and it references things as a source, what is the order of processing. Presumably the point is to process @version bits first but can it override term definitions and what’s the effect on language, base, and vocab.
… Is the result considered an atomic context, such that if it did adhere to some type-scoping or partial type-scoping behavior, does part of it go away, or some of it? Those are the things I need to understand.
Rob Sanderson: Dave … the propagation point was from VC … what was the expectation?
Dave Longley: for external would behave in the same way as external ones would today
… the context would only apply to the type
… and it would follow property scope behavior
… so there should be consistency with how things happen today
Gregg Kellogg: Good.
… If properties are defined within a type-scoped context, they propagate only if used.
Dave Longley: Yes.
Action #1: write up proposed syntax and functionality for @src/@propagate (Rob Sanderson)
Rob Sanderson: Unless the propagate flag is set to true.
Gregg Kellogg: There seem to be two different concerns, one is embedded contexts and the other is propagation.
Rob Sanderson: Yes, we’re complicating it. But I don’t think there’s another solution.
… We need something like this… the other option is to always propagate but then that’s what wont work for the VC folks.
Gregg Kellogg: The other option is to have a type-scoped context that sets propagate to true and then it’s not removed when we go out of the node object. If we have referenced contexts then it’s as if that context were inserted through some process into the referencing one.
… Well, what is the effect of property scoped contexts on embedded contexts?
… What is the effect of @propagate on property-scoped contexts or referenced contexts.
Pierre-Antoine Champin: If I understand correctly, that’s the kind of thing Rob is planning to do.
Rob Sanderson: Yes, exactly.
Pierre-Antoine Champin: I want to use the original JSON-LD 1.0 annotation context as a type-scoped context, but since it assumes propagation, I want a way to override the type-scoped behavior which is not to propagate.
Rob Sanderson: If there is some other way to do that, that’s perfect, totally fine to do that.
… I don’t understand how they are completely orthogonal, then we don’t need @source.
Gregg Kellogg: I think you need @source because you need to be able to pull in the definitions from an external context so that you can assert 1.1 types of things about it.
… I follow that.
… I can see that you might use @propagate true on one that doesn't reference an external source, and you use @source because you might want to assert things about that context like SRI.
… For your use case you need both of these bits but their behavior is … we could create test cases that explore the various different uses and test cases for external referencing, and we should have a test case that combines the two. Largely their impact is orthogonal.
Rob Sanderson: I think we’re in violent agreement.
Pierre-Antoine Champin: Here's an idea. The problem seems to come from the fact that you're trying to use the Web Annotation context in a place where it was not designed to be used. It's a 1.0 context. There are no scoped contexts there, only local ones.
… In a way it makes sense that it doesn't quite fit in this position. Wouldn't a solution be to have a dedicated version of the Web Annotation context that would be appropriate to be embedded as a type-scoped context?
… Maybe the solution is not to change the spec but to change the context that you use in this use case?
Rob Sanderson: But to go back to the definition of @propagate can we say on an @type, @propagate true?
Gregg Kellogg: Yes.
Rob Sanderson: If we anticipate that the major schemas that are in use via context referencing… annotations would be one, schema.org etc. … if they are going to go to 1.1 and they can set propagate or not that would be one other way to do it.
… It could be defined locally somewhere until they do. But yeah.
… It seems a bit of a stretch to say that if you want to use this 1.1 feature then because of this weird rule that type-scoped contexts don’t behave like property-scoped contexts that you can’t use any of the 1.0 contexts.
… The flip side would be that @propagate true is the default and then 1.1 contexts that want to turn it off can set @propagate: false.
Ivan Herman: How many contexts are we talking about that are really widely known and would have to be updated in this sense?
… What are talking about? We are hearing about two or three possible contexts right now, which is just peanuts.
Rob Sanderson: I’m not sure that we know, I would say schema.org, annotations, maybe ldp.
Ivan Herman: schema.org might not be easy to change, but the others are peanuts.
David Newbury: Is it everything that has included those that also have to be updated at this point?
Rob Sanderson: Assuming that there’s a different 1.1 context I think that’s ok, you’d reference that.
Gregg Kellogg: I think it's a dangerous road to assume that we know the impact on all the contexts that are out there and that the solution is to just update those contexts. Particularly if it requires that they adhere to 1.1 and the toolchains don't get updated immediately after we release the spec.
… Maybe the safest thing is to change the semantics to allow the propagation semantics to default to true but allow for false.
… It allows that propagate to be used in other contexts as well. I think there’s a use for referencing to be able to do things like that, but you might want to use a 1.0 context and not have it propagate. Then you’d use an envelope with @propagate but then no weird stuff.
Benjamin Young: Rolling out your contexts and managing multiple versions is an ambient concern. I don’t mean to derail your conversation. Our smaller more tightly knit communities aren’t going to face this as badly. But any of the ones that are actually doing deployments of other people’s vocabularies are going to be up a creek.
… I’m not sure we yet have any vehicle to help them survive. This is taking them to another level, incompatibility concerns.
Rob Sanderson: This would be an argument in favor of having the default to be to propagate rather than to not propagate?
Benjamin Young: It may not really even matter because of the way we’ve used versioning it doesn’t really matter.
… As soon as that gets stuck into anything you will have to shift to supporting two different ecosystems.
… We’ll have that for an unknowable amount of time.
Rob Sanderson: In terms of the VC side of things, is requiring the context to turn off @propagate a hardship?
Dave Longley: so, VC spec goes into PR on Tuesday
… everyone has written their tests against the context that does not use @propagate false
… so that would be the main concern
… it’s a major timing issue
… if we miss PR, the VC spec would fail
… the other features from 1.1 don’t compose
… so…it’d be strange to have things not work and then have to go find the @propagate term to make things work
… it seems to me that once you pull in a 1.0 context
… that’s been interpreted in a 1.0 scenario
… folks will have to be ready for the meaning changes
… if they’re processed both as 1.0 and 1.1
… I understand the desire to make them play nice
… but I’m not sure about what we’d give up to keep that happening
Gregg Kellogg: I'd say maybe the way forward is to add a @propagate keyword which changes the behavior of the context it's in to not survive the node object it's used within, but we don't change the default behavior for type-scoped contexts. We can add @propagate: true to allow it to survive, or @propagate: false on a property-scoped or embedded context to allow it to be removed. It gives us the ability to not mess up the expectations of VCs.
David Newbury: A lot of the @propagate: true/false default is whether you’re coming from a programming background or a JSON-LD background.
Rob Sanderson: What about if 1.0 contexts were treated that all had an implicit @propagate: true on them.
… When a 1.0 is imported, all of the classes in that context are treated as if they had @propagate: true defined on them because that was the expectation.
Pierre-Antoine Champin: are we taking { "@context": { "Foo": {"@context": { "@propagate": true, ... } } } or { "@context": { "Foo": {"@context": { ... }, "@propagate": true } }
Pierre-Antoine Champin: I’m not sure Rob’s suggestion. The difference is subtle – is the @propagate flag supposed to occur in the context or next to the term definition.
Gregg Kellogg: Inside the context.
Pierre-Antoine Champin: I don’t understand Rob’s position then.
Rob Sanderson: The primary mismatch is that between contexts defined in 1.0 days, there wasn’t any scoping, once defined it’s always true. That remains true for property-scopes but not for type-scopes. In 1.1 we want to be able to override that default. We want to be able to have it be explicitly set so a particular class does propagate.
… The issue then is … a 1.0 context where it’s not a valid keyword, how can we have propagation be true. Given that the expectation in 1.0 was that everything propagated, that when a 1.0 context is imported, we should assume that there was a flag that propagate was set to true for that context. We don’t have to put it into the referring context – if that was just the way that it always worked. If you want to have a 1.1 context that imports
Dave Longley: other contexts with propagate false then that’s fine you don’t have to set anything.
Rob Sanderson: It would matter if you want a 1.0 to come in and not have it propagate.
… But that seems even more marginal than the inverse.
… I don’t think we need @source at all if we do that. We can just define @propagate with the notion that a 1.0 context acts as if it is true.
David Newbury: This would need a very big explanation note somewhere because I don’t think anyone pays attention to @version and having things operate differently seems very confusing.
Benjamin Young: And the fact that the same context could change its version under the hood changing how it propagates.
Ivan Herman: +1 workergnome
Gregg Kellogg: I’m concerned about that too and it’s possible to use 1.1 features without saying @version in the context.
… Trying to infer things after the fact that we do things differently I think is fraught. I think solution is to be explicit in the wrapper and to set propagation in the referencing context.
Ivan Herman: I am acting now as administrator because the minutes will be confusing, I have the impression we’re discussing 174 but started with 108. I would add the comments on both of them, and I’m not sure where we are.
… Administratively I think 174 is just being reopened now.
Benjamin Young: I think where we are – this will be the topic that we discuss next week and I’ll send out the same agenda.

@azaroth42
Contributor

Proposal:

Allow the value of @context to be a dictionary that includes exactly two (defined) member properties, @src and @propagates.

  • The value of @src is a string that is the URI of an external context to be processed, as if it were encountered as a bare string as the value of @context.
  • The value of @propagates is a boolean. If set to true, then all of the classes in the referenced context should be considered as if they had this flag set on them.

Allow a new keyword @propagates within a context root node and within a class definition.

When @propagates is encountered at the root node of a context document, then all classes that are defined within the context are treated as if they had the keyword assigned to the supplied value.
[in the same way as @protected works]

When @propagates is encountered within a class definition, and it is set to true, then this counteracts the rule described in 4.1.7 as

A context scoped on @type is only in effect for the node object on which the type is used; the previous in-scope contexts are placed back into effect when traversing into another node object.

And instead means that when that class is encountered in a type scoped environment, the current context still propagates, as it would have if @context were set in the instance data.

Context Examples:

{
  "Annotation": {
    "@id": "wa:Annotation",
    "@context": {
      "@src": "http://www.w3.org/ns/anno.jsonld",
      "@propagates": true
    },
    "label": {"@id": "rdfs:label", "@container": ["@language", "@set"]}
  }
}

The Annotation context should be imported in a scoped way within instances of Annotations. The resources referenced in the JSON tree from that annotation should continue to inherit the definitions of the context, instead of the changes being scoped solely to the Annotation instance. This functionality allows 1.1 contexts importing 1.0 contexts to require that the propagation model of 1.0 is respected.

{
  "Annotation": {
    "@id": "wa:Annotation",
    "@propagates": true
  }
}

If this class is encountered as part of a type scoped context, then the definitions continue to propagate to the resources referenced in the JSON tree below it. This allows 1.1 contexts to continue to use the 1.0 propagation model, as if the @context reference were inline within the instance data, rather than as imported within the context definition. Defining it per class allows some classes to behave in 1.1 propagation mode and some in 1.0 propagation mode at the same time.
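
To make the intended behavior concrete, here is a hypothetical instance (the context URL is invented, and body/type/value are term names from anno.jsonld), assuming the first context example above is published at that URL. Under the new type-scoping rule, the anno.jsonld terms would stop applying inside the nested body object; with @propagates set to true they would continue to apply:

{
  "@context": "http://example.org/my-1.1-context.jsonld",
  "@type": "Annotation",
  "body": {
    "type": "TextualBody",
    "value": "body, type and value keep resolving via anno.jsonld here because @propagates is true"
  }
}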

@iherman
Member

iherman commented Jun 22, 2019

@azaroth42, thanks

Two things:

  • it would help me at least to see real data using these constructions in terms of the triples that are generated. Could you add one that shows the difference between using and not using this flag?
  • (beware! Bike-shedding attack!) If my understanding is correct, the term @propagates is a bit of a misnomer. What is (or is not) propagated are the "upper level" in-scope context terms, not the terms in the type-scoped context file that this flag is used on. The right term is something like @allow_propagation, but that is a mouthful... (Of course, my understanding may be wrong, in which case this comment is moot.)

@gkellogg
Member

Proposal:

Allow the value of @context to be a dictionary that includes exactly two (defined) member properties, @src and @propagates.

It will also need to include "@version": 1.1 to not be misinterpreted by a 1.0 processor.

  • The value of @src is a string that is the URI of an external context to be processed, as if it were encountered as a bare string as the value of @context.
  • The value of @propagates is a boolean. If set to true, then all of the classes in the referenced context should be considered as if they had this flag set on them.

Allow a new keyword @propagates within a context root node and within a class definition.

"Class definition"? Do you mean as the embedded context in a term used as a value of @type?

When @propagates is encountered at the root node of a context document, then all classes that are defined within the context are treated as if they had the keyword assigned to the supplied value.
[in the same way as @protected works]

So, it's not recursive? Seems we would need to go into a state to specifically check for this. Also, that seems like it's placing behavior for @src using @propagates, which would seem to me to change the behavior of the context when exiting a node definition, vs. uplifting term definitions (and other context things) to the context that includes the reference to @src.

When @propagates is encountered within a class definition, and it is set to true, then this counteracts the rule described in 4.1.7 as

A context scoped on @type is only in effect for the node object on which the type is used; the previous in-scope contexts are placed back into effect when traversing into another node object.

And instead means that when that class is encountered in a type scoped environment, the current context still propagates, as it would have if @context were set in the instance data.

+1, but it probably also has a converse meaning when set to false in a context (scoped or otherwise), to be consistent.

...

@gkellogg
Member

I think @azaroth42's suggestion might be a bit narrow, and we might want to consider the following:

  1. If @src appears within a context object, the referencing context must contain @version: 1.1.
  2. The value of @src must be a string interpreted as a URL.
  3. The behavior of @src is treated as if the referenced context were merged with the referencing context, with all term definitions from the referencing/including context taking precedence over those in the referenced context.
  4. The presence of @propagates overrides the default propagation of the context outside of the containing node object. By default, propagates is true for type-scoped contexts, and false otherwise.
  5. The specific type-scoped context rules for propagation are updated to be based on the propagates property of the specific context.

This separates the notion of @src and @propagates, and creates a consistent rule for how to merge @src into a referencing context (potentially allowing for recursive @src in remote contexts, although this is a consequence of the implementation rather than a specific objective).
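
As a minimal (hypothetical) sketch of item 4, using the @propagates spelling from above and made-up IRIs, an inline context in instance data could opt out of propagation explicitly so that its terms apply only to the node object that introduces it:

{
  "@context": {"@version": 1.1, "@vocab": "http://example.org/outer#"},
  "parent": {
    "@context": {"@propagates": false, "note": "http://example.org/inner#note"},
    "note": "uses the inner mapping here",
    "child": {
      "note": "back to the outer @vocab mapping, because the inner context does not propagate"
    }
  }
}

Item 5 would then make the existing type-scoped rules just a special case of the same propagates flag.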

@dlongley
Contributor Author

@gkellogg,

By default, propagates is true for type-scoped contexts, and false otherwise.

Did you mean the reverse of this?

@gkellogg
Member

@gkellogg,

By default, propagates is true for type-scoped contexts, and false otherwise.

Did you mean the reverse of this?

Yes, indeed.

@gkellogg
Member

Also, as I said in the meeting, I think that @src is inconsistent with our keyword naming, and would prefer @source.

@dlongley
Contributor Author

dlongley commented Jun 24, 2019

If this is going to be true:

The behavior of @src is treated as if the referenced context were merged with the referencing context, with all term definitions from the referencing/including context taking precedence over those in the referenced context.

Then it seems like @import does make sense as the keyword name.

@gkellogg
Member

Perhaps, but it depends on which has the least impact on algorithms, I think. Doing the strict @source-@propagate (along with @version) seems like a special case which will require a totally separate branch in the algorithm, while @source/@import seems like potentially a 1-line change.

I'm implementing now, and will have more to say later.

@pchampin
Contributor

Sorry, but I find all this quite complicated...
Here are two alternate proposals:

Proposal A

  • introduce a new @propagate keyword, which is allowed in any context node (root or scoped), and expects a boolean
  • a scoped context with @propagate set to true will propagate when descending
  • a scoped context with @propagate set to true will only be active in its scope node
  • in a type scoped context, @propagate defaults to false, unless the context is referenced from a URI and the remote context does not have any @version member (i.e. a 1.0 external context)
  • in all other situations, @propagate defaults to true

Proposal B

Same as proposal A, but remove the exception about type scoped contexts.

I know this would make things harder for VC, but it makes things easier to implement and to explain...
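
To make Proposal A concrete, a hypothetical sketch (made-up IRIs): a type-scoped context that opts back into 1.0-style propagation.

{
  "@context": {
    "@version": 1.1,
    "@vocab": "http://example.org/vocab#",
    "Annotation": {
      "@id": "http://example.org/vocab#Annotation",
      "@context": {
        "@propagate": true,
        "label": "http://www.w3.org/2000/01/rdf-schema#label"
      }
    }
  }
}

With "@propagate": true, the scoped "label" mapping would stay in effect in node objects nested below an Annotation; with the Proposal A default of false for type-scoped contexts, it would revert to the @vocab mapping when descending.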

@dlongley
Contributor Author

dlongley commented Jun 28, 2019

-1 to Proposal B, which would cause JSON-LD 1.1's new features not to compose by default and to behave unexpectedly for the first-order constituency of JSON developers/users.

I think Proposal A is what @gkellogg is experimenting with in his implementation (or is close to it).

Note that there are additional fixes we needed to apply to type-scoped context processing to make them behave as expected and round-trip properly. There are real differences in how they are expected to function as opposed to other contexts -- which is fine; they are a good and very useful feature that gives us better alignment with idiomatic JSON. But we shouldn't forget those differences exist and that we must account for them in order to make them behave as expected. Those differences are baked into how people already think about JSON, so our processing rules must reflect that.

For example, when @type is used within a type-scoped node object, its values are compacted according to the previous context, not according to the type-scoped context.

To see why, consider the case where a type-scoped context is cleared:

{
  "@context": {
    "@version": 1.1,
    "collection": "ex:collection",
    "MyType": {
      "@id": "ex:MyType",
      "@context": [null, {
        "foo": "ex:foo"
      }]
    }
  },
  "collection": [{
    "@id": "ex:some_id",
    "@type": "MyType",
    "foo": "bar"
  }]
}

Under this scenario, "MyType" would, quite unexpectedly, lose its meaning and not round trip if type-scoped contexts weren't given special treatment.

@iherman
Member

iherman commented Jun 28, 2019

This issue was discussed in a meeting.

  • No actions or resolutions
4.1. Continuing discussions from last week around “propagates”
Benjamin Young: #174 (comment)
Gregg Kellogg: There’s also a PR w3c/json-ld-api#112
Gregg Kellogg: based on Rob’s proposal, but instead of “@src”, uses “@source”…
… @propagates defaults to true except for type-scoped contexts, and can be overridden in either case
Benjamin Young: focus on @propagates for now
Dave Longley: #174
Dave Longley: propagate makes sense to me, but there are other considerations in type-scoped contexts…
… I didn’t check whether gkellogg’s implementation addresses these.
… previous contexts can now be any context, including type scoped, where changes can occur underneath
… have to make sure that @type gets evaluated using previous contexts
… correct keyword should be @import instead of @source
… feature makes a lot of sense to bring ld 1.0 contexts into the 1.1 fold without having to rewrite
… may not make a lot of sense to import 1.1 context, so maybe we should focus on @import 1.0 context
Gregg Kellogg: May need more tests. Checks in compaction and expansion … if type scoped context is overridden to
… @source vs. @import - separate discussion. Should discuss SRI types as well
Dave Longley: It would be unexpected to evaluate @type against type scoped context – it would break round-tripping…
… expectation is that typed value will always be evaluated against previous context.
Gregg Kellogg: can dave represent concern in issue or PR?
Dave Longley: #174 (comment)
Gregg Kellogg: if you try to round-trip example above, it would behave as expected. If, however, we were to process @type using
… type scoped context, it would destroy its own type, which would be unexpected.
Ivan Herman: example is drastic, but even if you have a type definition in the scoped context, how does it relate to the type in the enclosing context?
Gregg Kellogg: Prior to PR, worked the way that was expected. The way to update w/ @propagates, would be to add @propagates to second embedded context….
… but question is what is controlling propagation. We need to flesh this out to understand what adding @propagate true to second context
… need to preserve processing chain independent of propagation
Benjamin Young: … appears to be consensus developing around PR
Ivan Herman: I am worried about syntax specification wrt. @propagates that is understandable to user. Would like to see PR that makes this clear spec-wise
… before I would accept API PR, I would like to seen syntax PR w/ examples
Dave Longley: @import speaks to what we can say in the spec, wrt. using @propagate for pulling LD 1.0 and protecting it
Ivan Herman: we need to see the whole story
Gregg Kellogg: JSON-LD 1.0 evolved by thinking of feature, implement and then describe syntax. Approach of syntax and then implementation is difficult. Would prefer to meet in the middle…
… would like to use sample implementation and examples to see whether this is the direction we want to go, followed by syntax spec.
Dave Longley: i’ve also thought about JSON-LD as … “here’s a feature JSON devs want/use to describe their JSON … how do we implement something to express the semantics in there properly?”
Gregg Kellogg: implementation allows us to decide what we prefer before syntax document. Let’s not put this on hold
Dave Longley: +1 that both sides are important … we need to be able to describe the syntax simply enough and be able to do things in the implementation to demonstrate it’s even possible
Ivan Herman: You can get situations where awareness of implementation provides clarity, but if you aren’t familiar with the details it may not make a good story to the end user
… would like a clear story defined in document before we do the whole thing.
Dave Longley: +1
Gregg Kellogg: if you look at a grammar such as turtle or sparql, if you don’t take parsing issues into account, you’ve done a disservice. Advocate both ends
Dave Longley: +1
Ivan Herman: Need PR for syntax
Gregg Kellogg: need to agree on @imports vs. @source semantics before we do this in a syntax document. API helps us consider that
Dave Longley: If we do @import, can we add @source in the future? Would you just put both tags?
Gregg Kellogg: Could be done either way - integrity (SRI) becomes a modifier to source URL. In the presence of @sri, that value is extracted and passed to algorithm for evaluating results…
… would not import an array of things, so maybe @source makes this clear.
Benjamin Young: Is this more than bike shedding? Two different modes as represented by @source vs. @import. We should focus on semantics, not names.
… two terms represent two semantic categories w/ different behaviors.
Dave Longley: my view on the semantic difference: do you “update” a context you bring in (@import) or are you just making meta data assertions on it (@source) … not everyone will agree, maybe @source can do both.
Benjamin Young: importing a 1.0 context w/ a small 1.1 wrapper sounds “dreamy”…
Gregg Kellogg: agree w/ dlongley’s summary – the difference between pulling a context in vs. referencing it. @import semantics that allow potential modification makes more sense to me
Dave Longley: using array is “process these contexts in this order”, while @imports allows re-use and modification of existing contexts
Benjamin Young: {"@context": {"@import": "http://...anno.json", "name": "https://schema.org/name"}} <– not a thing? guessing we should clarify the new limitations on @context in general…
Benjamin Young: this substantively changes what is in @context
Dave Longley: ivan brought up issue w/ @protected where people wanted to override schema.org context elements. If terms had been protected, override would fail…
… @import would allow changes before it gets defined.
Harold Solbrig: @bigbluehat: how does @import jibe with verifiable credentials, etc.
Dave Longley: yes - this should not run afoul of the rules, as it would allow tweaking. What you can do is add on to the array and pull in existing contexts and make them compatible with core contexts defined in specs
Ivan Herman: We need this story. I would like to see it written down and spelled out.
Dave Longley: Can do this, but can’t commit to timing
Dave Longley: “update your context before it is processed … as if the term definitions were always that way”
Benjamin Young: schema.org may change on us in the future, but maybe text –> iri change would make a good example. Does not mean that google will understand what you’ve done…
Ivan Herman: schema.org may not be a good idea. foaf?
Gregg Kellogg: I can create an example of modifying a term. @protected may require more work – another reason that @imports works better vs. @source
… @source and (possibly) @propagates w/ nothing else (except version) allowed?
… if you pull and modify a context, you are @importing it but @source wouldn’t allow mods.
… but question is whether SRI could apply to imports or …
Ivan Herman: My understanding is that SRI refers to the context I identify w/ a URL, whether used in import or source isn’t a big problem
… rob’s original proposal seemed to be simpler – we don’t know whether he would support this or not.
Gregg Kellogg: will work on syntax document and changes to PR
Dave Longley: I’m thinking this shouldn’t be used for embedded contexts, in either @source or @imports situation
Benjamin Young: {"@context": {"@import": "http://...anno.json", "name": "https://schema.org/name"}} <– not a thing?
Benjamin Young: The above should not be allowed?
… Leave github issues as they are…

@gkellogg
Member

gkellogg commented Jul 6, 2019

One issue I'm running into is the treatment of @protected when combined with @source. One use case would certainly be to source a context and cause all of its term definitions to be protected, while also allowing other term definitions in the wrapping context to override those terms; but that results in an error, since such redefinitions are only allowed when a term is being defined from a property-scoped context (i.e., using the override protected flag). We could enable this option if the context includes @source, but that could inadvertently allow terms that were defined in previous protected contexts to be overridden. There's really no easy way to limit this to just those terms which were defined in the sourced context.

This may be just an "oh, well ...", or perhaps we need to restrict the enclosing context from defining any term definitions, which was @azaroth42's original proposal. But I could see, for example, using schema.org, protecting the term definitions, and changing something like schema:identifier to be "identifier": {"@id": "http://schema.org/identifier", "@type": "@id"} rather than the default, which has no @type.

Ideally, this would allow something like the following:

{
  "@context": {
    "@version": 1.1,
    "@protected": true,
    "@source": "https://schema.org/",
    "identifier": {"@id": "https://schema.org/identifier", "@type": "@id"}
  }
}

@dlongley
Contributor Author

dlongley commented Jul 7, 2019

@gkellogg,

One use case would certainly be to source a context and cause all of its term definitions to be protected, while also allowing other term definitions in the wrapping context to override those terms; but that results in an error, since such redefinitions are only allowed when a term is being defined from a property-scoped context (i.e., using the override protected flag).

My view of how @source/@import should work is that the terms are not defined until the wrapping context is processed. This means that any term definition expressed in the wrapping context wipes out the corresponding term definition from @source/@import before it is defined, entirely avoiding term-processing issues like the one above. I'm thinking of @source/@import plus a wrapping context working more like the object spread operator in JavaScript or its Object.assign method.

@dlongley
Contributor Author

dlongley commented Jul 7, 2019

So, to process @source/@import, first you fetch its URL value as a document via a document loader, then you parse it to get an unprocessed local context (a Map, really). Then you merge every entry in the wrapping context into that Map, replacing as needed. Finally, you do context processing on the result.
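
A hypothetical sketch of that merge (made-up URL and terms, using the @source spelling): given a sourced context

{
  "@context": {
    "name": "https://schema.org/name",
    "homepage": "https://schema.org/url"
  }
}

and a wrapping context

{
  "@context": {
    "@version": 1.1,
    "@source": "http://example.com/base.jsonld",
    "homepage": {"@id": "https://schema.org/url", "@type": "@id"}
  }
}

the local context actually handed to context processing would be

{
  "@version": 1.1,
  "name": "https://schema.org/name",
  "homepage": {"@id": "https://schema.org/url", "@type": "@id"}
}

so the wrapping entry for "homepage" replaces the sourced one before any term definitions are created.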

@gkellogg
Member

gkellogg commented Jul 7, 2019

That would make it work, but seems like a big change to processing algorithms. Right now, it’s all about processing a local context on top of an active context. Deferring processing is challenged by the potential shape of a referenced context (an array, more remote contexts, etc.). They also need to be based on the active context.

We could restrict the referenced context to be in the form of a Map, but that’s a slippery slope.

Another way would be to pass something to the algorithm to tag all terms created from the sourced/imported context so we could detect that they can be overridden within the local context containing the source/import. Whatever we do, there’s a fair impact on the context processing algorithm.

@dlongley
Contributor Author

dlongley commented Jul 7, 2019

@gkellogg,

That would make it work, but seems like a big change to processing algorithms. Right now, it’s all about processing a local context on top of an active context.

The way I think about it is that context processing itself doesn't change much other than adding an additional step that handles @source/@import first, to "construct" the local context before it is processed. To me, this is not unlike how we must first use a document loader to retrieve a local context that is referenced via a URL. So it operates at a different layer than context processing "proper". Before you can process a local context, you:

  1. Dereference it if it's referred to by a URL.
  2. Dereference its @source/@import if present and merge the wrapping context into it.

So context processing itself would be "deferred" as you state next.

Deferring processing is challenged by the potential shape of a referenced context (an array, more remote contexts, etc.). They also need to be based on the active context.

We could restrict the referenced context to be in the form of a Map, but that’s a slippery slope.

If the value of @context in the retrieved document is an array, we could apply the wrapping context to the last context in that array. I think there would be very limited use in trying to do any more than that.
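
For instance (purely hypothetical URLs), suppose the retrieved document looked like this:

{
  "@context": [
    "http://example.com/other.jsonld",
    {"name": "https://schema.org/name"}
  ]
}

Then the wrapping context's entries would be merged into the final object in that array (the one defining "name"), while "http://example.com/other.jsonld" would still be dereferenced and processed in order first.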

Another way would be to pass something to the algorithm to tag all terms created from the sourced/imported context so we could detect that they can be overridden within the local context containing the source/import. Whatever we do, there’s a fair impact on the context processing algorithm.

I think deferring processing could be less messy, similar to document loading, and, if it works, could also potentially match the language we use to describe how the feature works. When you @import a context, it's like editing it inline to create a new local context (per whatever changes you make in the wrapping context) before it gets processed.

@dlongley
Contributor Author

dlongley commented Jul 7, 2019

Also, I think deferred processing better matches what @azaroth42 and others would like to do. They want to avoid having to copy and paste an entire context and make a few edits to it so it can be processed with those edits. @import would give them a feature to do it -- and it would work, internally, precisely as if they had done it in the more tedious way.

@gkellogg
Member

gkellogg commented Jul 7, 2019

Deferring this way does change the semantics of processing; consider the following:

Remote Context:

{
  "@context": {
    "@vocab": "http://remote.example.com/",
    "foo": {"@type": "@id"}
  }
}

Local context:

{
  "@context": {
    "@version": 1.1,
    "@source": "Remote",
    "@vocab": "http://local.example.com/"
  }
}

"foo" was would have been "http://remote.example.com/foo" if processing the remote context, but is "http://local.example.com/foo" if processed is deferred and the map resulting from processing @source is used to fold in the outer-most local context. There are a number of similar things that would affect the semantics.

Furthermore, if the remote context includes a URL itself (e.g. {"@context": ["ReallyRemote", {..}]}), you need separate logic to look for remote context overflow, and if you don't process the contexts in order, then each context could be interpreted differently vs. the deferred mechanism.

If we do that, we should probably caveat, if not mandate, that the remote context must be a simple map-like local context structure, and caution that the scope of @vocab/@language/@protected, along with term definitions that are used in other term definitions, could have a different result. Mandating that such contexts do not result in such confusion would require a number of new tests, one for each combination.

@azaroth42
Contributor

azaroth42 commented Jul 7, 2019

If there is such a restriction to only allow direct mapping contexts, it would invalidate many contexts for inclusion in this way... rather defeating much of the point.

foo is clearly meant to be http://remote.example.com/foo, rather than whatever the local @vocab is set to.

So I agree with @gkellogg on the deferred processing vs regular processing.

I also (as one might expect) am 👍 to Proposal B. This is an expert feature, not something that most people will use in their daily json-ld lives. If the context writing is slightly harder, that's a relatively small price to pay.

@gkellogg
Member

gkellogg commented Jul 7, 2019

@azaroth42 I think you need to clarify your support of Proposal B. That would say that type-scoped contexts propagate by default, which would certainly be a big problem for Verifiable Claims and quite arguably not what people expect from type scoping.

What's in the PR works fairly well, I think, and is essentially Proposal A (although not the third statement: "a scoped context with @propagate set to true will only be active in its scope node". I think this was intended to be when @propagate is set to false).

@azaroth42
Contributor

Yes - I'm not going to lie down in the road for it, but I think that the argument that object-oriented developers would expect it is weak ... they would also expect inheritance and a closed world, neither of which we have. It's also not what anyone used to writing JSON-LD would expect from 1.0, which is going to be the majority of context authors as opposed to users of the resulting data.

Again, whichever way works such that we can fulfill the use cases is fine by me. If that's A ... great. If that's B ... great.

@dlongley
Contributor Author

dlongley commented Jul 8, 2019

@gkellogg,

"foo" was would have been "http://remote.example.com/foo" if processing the remote context, but is "http://local.example.com/foo" if processed is deferred and the map resulting from processing @source is used to fold in the outer-most local context. There are a number of similar things that would affect the semantics.

This is actually exactly what I would expect given the feature. This is the only way to "inline" edit and reuse an existing context. If you wanted @vocab to take effect after context processing, we already have a method for that and you'd do this instead:

{
  "@context": [{
    "@version": 1.1,
    "@source": "Remote"
  }, {
    "@vocab": "http://local.example.com/"
  }]
}

Adding @import provides a new feature (inline selective editing of existing contexts) that didn't previously exist. This approach seems to be a useful feature, one that would solve the original requirements, and it would be more easily understood via general principles vs. a feature that is "just for" @propagate, etc.

If we do that we should probably caveat, if not mandate, that the remote context must be a simple map-like local context structure...

While I'd be ok with that restriction, I do think it would be interesting to explore how challenging it would be to "carry through" a flag that would apply the wrapping context to the "last dereferenced context" in any series of context arrays that might be dereferenced -- to establish the final local context prior to processing. That approach would seem to match the goal of the feature.

@gkellogg
Member

gkellogg commented Jul 8, 2019

Okay, that argument makes sense. I can update the PR to do the merge as you suggest, which solves the protected problem. I do believe we should restrict the shape of the referenced context to be a Map/Dictionary, as opposed to an array or string. This covers pretty much every real-world use case and avoids unnecessary complication.

@gkellogg
Member

I've updated w3c/json-ld-api#112 with what I think we want for behavior, with the value of @source being a string that references a remote JSON-LD file with a @context whose value is an object, which must not itself have @source. This is reverse-merged into the referencing context, which allows things in the sourced context to be "edited" by the referencing context (including term definitions, @vocab, @protected, and so forth).

It would be straightforward to undo the implied lack of propagation of type-scoped contexts, but I think we should separate that and consider it on a call.

Please give it a look and 👍 or 👎. Based on that, I can further describe it in the syntax document with a separate PR.

gkellogg added a commit that referenced this issue Jul 11, 2019
gkellogg added a commit that referenced this issue Jul 12, 2019