
Updates to add @propagate and @import #112

Merged
merged 10 commits into master from propagate-source on Jul 12, 2019

Conversation

gkellogg
Member

@gkellogg gkellogg commented Jun 24, 2019

  • Text and tests for @propagate and @source

For w3c/json-ld-syntax#174.


Preview | Diff

@gkellogg gkellogg changed the title from "[WIP] Updates to add @propagate and @source" to "Updates to add @propagate and @source" on Jun 26, 2019
@gkellogg gkellogg requested review from azaroth42 and pchampin June 26, 2019 22:40
@gkellogg
Member Author

Added support for @source and expansion tests. (Could do compaction tests, but I think there is no change in that execution path.)

Note that this version allows @source in addition to other elements of a context. It could be updated to restrict other elements within @context with some extra steps, but I think there's value in being able to reference a context and potentially modify some of the terms as well. Processing considers @source immediately after @version (which was moved up before @base).
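
For concreteness, here is a minimal sketch of the shape being discussed; the referenced URL and the extra term are hypothetical, not taken from this PR:

{
  "@context": {
    "@version": 1.1,
    "@source": "http://example.org/context-v1.jsonld",
    "@base": "http://example.org/",
    "name": "http://example.org/customName"
  }
}

Under the processing order described above, the referenced context would presumably be dereferenced and applied right after @version, and the remaining members (@base and "name") would then be processed on top of it, so a local "name" definition would override whatever "name" the referenced context defines. This is the "reference a context and potentially modify some of the terms" case.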

@gkellogg gkellogg requested a review from dlongley June 26, 2019 22:43
Contributor

@pchampin pchampin left a comment


The proposal "works", but I am still a bit concerned about the overhead that it adds to the standard as a whole. Granted, it does not make the algorithms much more complex. But the ramifications of @source do worry me (as expressed in w3c/json-ld-syntax#174 (comment)).

For example, why couldn't @source accept an array of strings?

Also, the original idea of introducing @source was to allow, in the future, for adding some metadata in context references, like:

{ "@context": { "@source": "...", "@sri": "123456abcdef" }, ... }

but what would the @sri property mean then if @source was co-existing with term definitions? E.g.:

{ "@context": { "@source": "...", "@sri": "123456abcdef", "foo": "http://example.org/foo" }, ... }

My initial idea was that {"@source": "some:iri"} was somehow equivalent to "some:iri" (in the context of a @context member), that would allow for additional metadata about that IRI. Nothing more.

In the current proposal, @source is a general purpose import mechanism, and I would then agree with @dlongley's proposal to call it @import. Except that I'm not sure we need it in the first place.

@iherman
Member

iherman commented Jun 28, 2019

Per @pchampin

> My initial idea was that {"@source": "some:iri"} was somehow equivalent to "some:iri" (in the context of a @context member), that would allow for additional metadata about that IRI. Nothing more.

That was also my view...

@gkellogg
Member Author

> The proposal "works", but I am still a bit concerned about the overhead that it adds to the standard as a whole. Granted, it does not make the algorithms much more complex. But the ramifications of @source do worry me (as expressed in w3c/json-ld-syntax#174 (comment)).
>
> For example, why couldn't @source accept an array of strings?
>
> Also, the original idea of introducing @source was to allow, in the future, for adding some metadata in context references, like:
>
> { "@context": { "@source": "...", "@sri": "123456abcdef" }, ... }
>
> but what would the @sri property mean then if @source was co-existing with term definitions? E.g.:

I don’t think this creates a problem; the @sri applies to the intermediate result of processing @source.

> { "@context": { "@source": "...", "@sri": "123456abcdef", "foo": "http://example.org/foo" }, ... }
>
> My initial idea was that {"@source": "some:iri"} was somehow equivalent to "some:iri" (in the context of a @context member), that would allow for additional metadata about that IRI. Nothing more.
>
> In the current proposal, @source is a general purpose import mechanism, and I would then agree with @dlongley's proposal to call it @import. Except that I'm not sure we need it in the first place.

I thought this was a good starting point to avoid complicating the algorithm. We can always decide to add more restrictions if @source is present, such as restricting the local context to contain no members other than @version and @propagate.

@dlongley
Contributor

So if @source allows for modifying the incoming context, I think it should be called @import. It seems like this better matches Rob's use case for "constructing a context from another one with modifications".

Also, I suspect that this feature is really restricted to pulling in an existing JSON-LD 1.0 context for modification and use with JSON-LD 1.1. I don't see how it would be that useful to import a JSON-LD 1.1 context that might have scoped-contexts scattered throughout it -- whereby modifications like setting @propagate wouldn't deeply apply (or it would be too difficult to target them without redefining entire terms).

To me, it seems like this feature is really just about reusability, for "importing an existing JSON-LD 1.0 context and applying some JSON-LD 1.1 changes". Changes that might be useful include @propagate and @protected ... and any other keyword we have that globally modifies term definitions in a context. Perhaps we should consider it largely for that purpose, call it @import, and leave the @sri and @source stuff for another day.
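
As a hedged illustration of that reusability case (the context URL is a placeholder, and this sketch is not text from the PR), wrapping a 1.0 context and protecting everything it defines might look like:

{
  "@context": {
    "@version": 1.1,
    "@import": "http://example.org/legacy-1.0-context.jsonld",
    "@protected": true
  }
}

Consumers would get the familiar 1.0 term definitions plus the 1.1 protection semantics, without anyone rewriting the original context document.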

@pchampin
Contributor

@gkellogg wrote:

> I don’t think this creates a problem; the @sri applies to the intermediate result of processing @source

That would be quite misleading if the context containing @sri contains its own term definitions...

@dlongley wrote:

> To me, it seems like this feature is really just about reusability, for "importing an existing JSON-LD 1.0 context and applying some JSON-LD 1.1 changes". (...) Perhaps we should consider it largely for that purpose, call it @import, and leave the @sri and @source stuff for another day.

Even if the purpose is to "upgrade" a 1.0 context, I don't see the point of adding additional term definitions in it. Seeing {"@source": "some:iri"} as an extensible version of "some:iri" meets this requirement.

But I don't even see a requirement for "upgrading" or modifying a 1.0 context. @azaroth42's concern was that, with the way type-scoped contexts now work, 1.0 contexts would give unexpected results (compared to their authors' expectation that they always propagate). By deciding that @propagate defaults to true in any context with no "@version" member, I think we solve this. No need to be able to alter any context "from the outside".

@iherman
Member

iherman commented Jun 28, 2019

This issue was discussed in a meeting.

  • No actions or resolutions
4.1. Continuing discussions from last week around “propagates”
Benjamin Young: w3c/json-ld-syntax#174 (comment)
Gregg Kellogg: There’s also a PR #112
Gregg Kellogg: based on Rob’s proposal, but instead of “@src”, uses “@source”…
… @propagate defaults to true except for type scoped contexts, and can be overridden in either case
Benjamin Young: focus on @propagate for now
Dave Longley: w3c/json-ld-syntax#174
Dave Longley: propagate makes sense to me, but there are other considerations in type scoped contexts…
… I didn’t check whether gkellogg’s implementation addresses these.
… previous contexts can now be any context, including type scoped, where changes can occur underneath
… have to make sure that @type gets evaluated using previous contexts
… correct keyword should be @import instead of @source
… feature makes a lot of sense to bring JSON-LD 1.0 contexts into the 1.1 fold without having to rewrite
… may not make a lot of sense to import a 1.1 context, so maybe we should focus on @import for 1.0 contexts
Gregg Kellogg: May need more tests. Checks in compaction and expansion … if type scoped context is overridden to
… @source vs. @import - separate discussion. Should discuss SRI types as well
Dave Longley: It would be unexpected to evaluate @type against type scoped context – it would break round-tripping…
… expectation is that typed value will always be evaluated against previous context.
Gregg Kellogg: can Dave represent the concern in an issue or PR?
Dave Longley: w3c/json-ld-syntax#174 (comment)
Gregg Kellogg: if you try to round-trip example above, it would behave as expected. If, however, we were to process @type using
… type scoped context, it would destroy its own type, which would be unexpected.
Ivan Herman: example is drastic, but even if you have a type definition in the scoped context, how does it relate to the type in the enclosing context?
Gregg Kellogg: Prior to PR, worked the way that was expected. The way to update w/ @propagate would be to add @propagate to second embedded context….
… but question is what is controlling propagation. We need to flesh this out to understand what adding @propagate: true to the second context means
… need to preserve processing chain independent of propagation
Benjamin Young: … appears to be consensus developing around PR
Ivan Herman: I am worried about syntax specification wrt. @propagate that is understandable to user. Would like to see PR that makes this clear spec-wise
… before I would accept the API PR, I would like to see a syntax PR w/ examples
Dave Longley: @import speaks to what we can say in the spec, wrt. using @propagate for pulling in JSON-LD 1.0 contexts and protecting them
Ivan Herman: we need to see the whole story
Gregg Kellogg: JSON-LD 1.0 evolved by thinking of feature, implement and then describe syntax. Approach of syntax and then implementation is difficult. Would prefer to meet in the middle…
… would like to use sample implementation and examples to see whether this is the direction we want to go, followed by syntax spec.
Dave Longley: i’ve also thought about JSON-LD as … “here’s a feature JSON devs want/use to describe their JSON … how do we implement something to express the semantics in there properly?”
Gregg Kellogg: implementation allows us to decide what we prefer before syntax document. Let’s not put this on hold
Dave Longley: +1 that both sides are important … we need to be able to describe the syntax simply enough and be able to do things in the implementation to demonstrate it’s even possible
Ivan Herman: You can get situations where awareness of implementation provides clarity, but if you aren’t familiar with the details it may not make a good story to the end user
… would like a clear story defined in document before we do the whole thing.
Dave Longley: +1
Gregg Kellogg: if you look at a grammar such as turtle or sparql, if you don’t take parsing issues into account, you’ve done a disservice. Advocate both ends
Dave Longley: +1
Ivan Herman: Need PR for syntax
Gregg Kellogg: need to agree on @import vs. @source semantics before we do this in a syntax document. API helps us consider that
Dave Longley: If we do @import, can we add @source in the future? Would you just put both tags?
Gregg Kellogg: Could be done either way - integrity (SRI) becomes a modifier to the source URL. In the presence of @sri, that value is extracted and passed to the algorithm for evaluating results…
… would not import an array of things, so maybe @source makes this clear.
Benjamin Young: Is this more than bike shedding? Two different modes as represented by @source vs. @import. We should focus on semantics, not names.
… two terms represent two semantic categories w/ different behaviors.
Dave Longley: my view on the semantic difference: do you “update” a context you bring in (@import) or are you just making metadata assertions on it (@source) … not everyone will agree, maybe @source can do both.
Benjamin Young: importing a 1.0 context w/ a small 1.1 wrapper sounds “dreamy”…
Gregg Kellogg: agree w/ dlongley’s summary – the difference between pulling a context in vs. referencing it. @import semantics that allow potential modification make more sense to me
Dave Longley: using an array is “process these contexts in this order”, while @import allows re-use and modification of existing contexts
Benjamin Young: {"@context": {"@import": "http://...anno.json", "name": "https://schema.org/name"}} <– not a thing? guessing we should clarify the new limitations on @context in general…
Benjamin Young: this substantively changes what is in @context
Dave Longley: ivan brought up issue w/ @protected where people wanted to override schema.org context elements. If terms had been protected, override would fail…
… @import would allow changes before it gets defined.
Harold Solbrig: @bigbluehat: how does @import jive with verifiable credentials, etc.
Dave Longley: yes - this should not run afoul of the rules, as it would allow tweaking. What you can do is add on to the array and pull in existing contexts and make them compatible with core contexts defined in specs
Ivan Herman: We need this story. I would like to see it written down and spelled out.
Dave Longley: Can do this, but can’t commit to timing
Dave Longley: “update your context before it is processed … as if the term definitions were always that way”
Benjamin Young: schema.org may change on us in the future, but maybe text –> iri change would make a good example. Does not mean that google will understand what you’ve done…
Ivan Herman: schema.org may not be a good idea. foaf?
Gregg Kellogg: I can create an example of modifying a term. @protected may require more work – another reason that @import works better vs. @source
… @source and (possibly) @propagate w/ nothing else (except version) allowed?
… if you pull and modify a context, you are @importing it but @source wouldn’t allow mods.
… but question is whether SRI could apply to imports or …
Ivan Herman: My understanding is that SRI refers to the context I identify w/ a URL, whether used in import or source isn’t a big problem
… rob’s original proposal seemed to be simpler – we don’t know whether he would support this or not.
Gregg Kellogg: will work on syntax document and changes to PR
Dave Longley: I’m thinking this shouldn’t be used for embedded contexts, in either the @source or @import situation
Benjamin Young: {"@context": {"@import": "http://...anno.json", "name": "https://schema.org/name"}} <– not a thing?
Benjamin Young: The above should not be allowed?
… Leave github issues as they are…
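
To summarize the two readings discussed in the minutes above, a rough, hypothetical sketch (the URLs are placeholders, and @sri is only a possible future keyword mentioned in the thread, not something this PR defines):

@source as a pure reference plus metadata, with no local modifications:
{ "@context": { "@source": "http://example.org/ctx.jsonld", "@sri": "123456abcdef" } }

@import as pulling a context in and modifying it locally:
{ "@context": { "@import": "http://example.org/ctx.jsonld", "@protected": true, "name": "https://schema.org/name" } }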

@gkellogg
Member Author

gkellogg commented Jul 5, 2019

@dlongley the text still uses the previous context for expanding and compacting values of @type, regardless of propagation.

Could you suggest a test or two using @propagate combinations on property-scoped and type-scoped contexts that would ensure we're handling this properly?
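
For illustration, a rough sketch of the kind of test being requested, assuming JSON-LD 1.1 scoped-context behavior; all IRIs and term names here are made up. A type-scoped context sets "@propagate": true, and the test checks that its term stays in scope inside a nested node object:

{
  "@context": {
    "@version": 1.1,
    "Foo": {
      "@id": "http://example.org/Foo",
      "@context": { "@propagate": true, "bar": "http://example.org/bar" }
    },
    "nested": "http://example.org/nested"
  },
  "@type": "Foo",
  "nested": { "bar": "in scope only if the type-scoped context propagates" }
}

With "@propagate": true, expansion should keep "bar" defined inside the node under "nested"; with the default (type-scoped contexts do not propagate), "bar" would fall back to the outer context, where it is undefined. A mirror-image test with "@propagate": false on a property-scoped context (which propagates by default) would cover the other combination.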

@gkellogg gkellogg force-pushed the propagate-source branch from 11d9098 to 60a5c6a on July 6, 2019 20:44
gkellogg and others added 5 commits July 10, 2019 11:54
(cherry picked from commit 5c543e6)
Co-Authored-By: Dave Longley <[email protected]>
* Change _from type_: true to _propagate_: false and fold in related changes in context processing.
* Add _protected_ option to context processing and use it based on `@protected` when parsing a remotely sourced context.
* Add some tests for protected sourced contexts.
…result of dereferencing source.

Add expansion tests to verify that the containing context can effectively update the sourced context.
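
A rough, hypothetical sketch of what one such expansion test might check (the file name and IRIs are made up): the wrapping context imports a remote context and redefines one of its terms, and expansion uses the local definition.

{
  "@context": {
    "@version": 1.1,
    "@import": "context-defining-term.jsonld",
    "term": { "@id": "http://example.org/replaced", "@type": "@id" }
  },
  "term": "http://example.org/value"
}

Even if the imported context maps "term" to some other IRI, the expected expansion would be [{"http://example.org/replaced": [{"@id": "http://example.org/value"}]}], showing that the containing context's definition wins.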
@gkellogg gkellogg requested a review from pchampin July 10, 2019 21:27
@rubensworks
Member

The new expansion tests also look relevant for the toRdf tests; should they be added in this PR as well?

@gkellogg
Member Author

@rubensworks Nominally, all expansion tests are useful for toRdf, but we haven't continued to add them all. The spec says that toRdf depends on expansion, so there is something implicit in this, but we have definitely done this before; if there are tests you'd like to see in toRdf, I'm fine with that.

@rubensworks
Member

@gkellogg Some processors may not strictly follow the specified algorithm (such as my own parser), and therefore may not depend on, or support, explicit expansion. So in this case explicit toRdf tests would definitely be good to have.
If that's ok with you, I'd be happy to go over all the current expansion tests and port those for toRdf when applicable.

@gkellogg
Member Author

@rubensworks that would be great, maybe wait until the @import/@propagate PR is complete.

@gkellogg gkellogg changed the title from "Updates to add @propagate and @source" to "Updates to add @propagate and @import" on Jul 12, 2019
@gkellogg gkellogg merged commit 816151c into master Jul 12, 2019
@gkellogg gkellogg deleted the propagate-source branch July 12, 2019 21:50