Stable names for lambda lifted method and fresh names #6300
lrytz merged 5 commits into scala:2.13.x
Conversation
7b6d867 to 9527c93 (force-push)

/synch

/rebuild
This causes a binary incompatibility with partest, which is built (unwisely IMO) with the full Scala optimizer enabled. I'll release a new version of partest that only enables local optimizations.
Otherwise we can inline public, but unstable, bytecode artifacts from the standard library, as was seen in: scala/scala#6300 (comment)
273b5fd to bee824a (force-push)

Rebased on top of a new version of partest.
bee824a to ee06025 (force-push)

Rebased on the yet-to-be-merged #6391 to speed up the rebuilds this needs.
The test failures suggest that the test suite is running under an unexpected classpath. I added the following probe to see what's on it:

```diff
diff --git a/test/scalacheck/treemap.scala b/test/scalacheck/treemap.scala
index 6978ca3145..b6bcbf13a2 100644
--- a/test/scalacheck/treemap.scala
+++ b/test/scalacheck/treemap.scala
@@ -6,7 +6,13 @@ import Arbitrary._
 import util._
 import Buildable._
+import scala.reflect.internal.util.ScalaClassLoader.URLClassLoader
+
 object TreeMapTest extends Properties("TreeMap") {
+  sys.error(
+    classOf[scala.collection.TraversableOnce[_]].getProtectionDomain.getCodeSource.toString + "\n" + getClass.getClassLoader + "\n" + getClass.getClassLoader.asInstanceOf[java.net.URLClassLoader].getURLs.toList.mkString("\n"))
+
+
   def genTreeMap[A: Arbitrary: Ordering, B: Arbitrary]: Gen[TreeMap[A, B]] =
     for {
       keys <- listOf(arbitrary[A])
```

This feels like a mole that I've tried to whack before.
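The same probe can be reproduced outside the test suite. A minimal standalone sketch (`ClasspathProbe` is a name invented here; the `URLClassLoader` cast only holds for the application classloader on JDK 8, so a fallback is included):

```scala
// Standalone classpath probe, analogous to the sys.error trick above.
// Assumption: on JDK 8 the application classloader is a URLClassLoader;
// on JDK 9+ it is not, so we fall back to the java.class.path property.
object ClasspathProbe {
  def entries: List[String] = getClass.getClassLoader match {
    case u: java.net.URLClassLoader => u.getURLs.toList.map(_.toString)
    case _ =>
      sys.props("java.class.path").split(java.io.File.pathSeparator).toList
  }

  def main(args: Array[String]): Unit = entries.foreach(println)
}
```

Printing the resolved classpath like this makes it easy to spot a stray STARR jar shadowing the freshly built library.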
Forking fixes the classpath issue, but no longer reports failures:

Reported our desire to leave STARR off the non-forked test classpath to @eed3si9n who is currently refactoring that part of SBT. I also identified the SBT/Scalacheck problem in forked mode; I'll try to fix that to unblock this PR.

I opened sbt/sbt#4009. Hopefully this captures what's going on.
ee06025 to e55bbd7 (force-push)
Cherry-picked #6440 (submitted to 2.12.x) into this PR to unblock. Marking as WIP because of the commits from in-flight PRs here; the actual code in this PR is ready for review.
e55bbd7 to 38a1e21 (force-push)
The tripwire I set in 00d1a5a has triggered for some tests (mostly around toolbox compilation, I think). So there are a few more problematic uses of fresh names that need attention. |
5f4178d to d8ff23e (force-push)
Fresh names are created using a FreshNameCreator, which appends
an increasing number to the given prefix.
```
scala> val fresh = new scala.reflect.internal.util.FreshNameCreator()
fresh: scala.reflect.internal.util.FreshNameCreator = scala.reflect.internal.util.FreshNameCreator@42b84286
scala> List("foo$", "bar$", "foo$").map(fresh.newName(_))
res1: List[String] = List(foo$1, bar$1, foo$2)
```
Each compilation unit has its own fresh name creator, which is used
in the regular compiler. Macros and quasiquotes make use of a global
creator (at least, as of scala#3401).
Both of these are too broadly scoped to help achieve deterministic
fresh names: if sources are recompiled in a different order or separately
recompiled, the fresh name counters can be different. Methods in a given
compilation unit are not necessarily typechecked in a linear fashion;
they might be typechecked ahead of time to provide an inferred type
to a caller.
This commit:
- Changes all known fresh name creations within the typer phase (in which
  out-of-order typechecking is a factor) to use a finer-grained fresh
  name creator. How fine-grained? A fresh name generated at some position
  `p` shares the fresh name creator scoped at the closest method or
  class that encloses the outermost enclosing tree at the same
  position. This definition is designed to give a shared fresh name
  creator for all fresh names generated in `macro1(macro2())`, even if
  the fresh names are requested from within a Typer in the macro enclosed
  by a synthetic method.
- Changes macro fresh names to use the same fresh naming scheme as the regular
  typechecker. An opt-out compiler option allows the old behaviour, but I'm
  interested to find real-world cases where the new scheme actually causes
  a problem.
In addition, a small change is made to lambda lift to lift local methods in the
order that they are encountered during traversal, rather than sorting them
based on `Symbol.isLess` (which includes `Symbol.id`, an order-of-typechecking
dependent value).
I've used this to flush out the corner cases fixed in the previous commit.
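The order dependence motivating this commit is easy to demonstrate with a toy per-prefix counter in the style of `FreshNameCreator` (a simplified sketch written for this illustration, not the compiler's actual class):

```scala
import scala.collection.mutable

// Toy per-prefix counter in the style of FreshNameCreator (simplified sketch).
final class ToyFresh {
  private val counters = mutable.Map.empty[String, Int]
  def newName(prefix: String): String = {
    val n = counters.getOrElse(prefix, 0) + 1
    counters(prefix) = n
    prefix + n
  }
}

// With a creator shared across two "compilations" of the same code, the
// resulting names depend on what was compiled first:
def compile(fresh: ToyFresh): String = fresh.newName("x$")

val firstRun  = { val f = new ToyFresh; compile(f) }                 // "x$1"
val secondRun = { val f = new ToyFresh; compile(f); compile(f) }     // "x$2" — same code, different name
```

Scoping the creator to the enclosing method or class removes this dependence on the order in which unrelated definitions are typechecked.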
7ab0f1e to c5cc71f (force-push)
Yes, I already did the changes for
6d0f950 to 6e40f4f (force-push)
Conflicts: test/junit/scala/SerializationStabilityTest.scala
6e40f4f to 20858b3 (force-push)
The build failure is just the macro annotation binary incompat on Travis for the second-to-last commit. It can be safely ignored.
adriaanm left a comment:

(Hmm, looks like I got interrupted before submitting my review :-/)
```scala
def macroExpand(typer: Typer, expandee: Tree, mode: Mode, pt: Type): Tree = {
  // By default, use the current typer's fresh name creator in macros. The compiler option
  // allows people to opt in to the old behaviour of Scala 2.12, which used a global fresh creator.
  if (!settings.YmacroFresh.value) currentFreshNameCreator = typer.fresh
```
could we use -Xsource:2.12 instead?
```scala
def nextEnclosing(p: Context => Boolean): Context =
  if (p(this)) this else outer.nextEnclosing(p)

final def outermostContextAtCurrentPos: Context = {
```
needs a comment, can be taken from PR description (what does it compute, and why). Avoid having this drift, co-opted for other incompatible purposes.
```diff
@@ -0,0 +1,228 @@
+package scala.tools.nsc
```
💯 awesome way to test stability, but it could really use a comment
I’ll submit a follow-up to address those comments.

Thanks!
I don’t think Xsource makes sense here. The option is to give an escape
hatch in case of a bug. But this isn’t a change to the language (like
overloading or inference), but a change to a detail of code generation that
unfortunately might be observable when compiler plugins and macros are
in the mix.
*this option
This looks great! We compiled Akka 2.5.22 with Scala 2.13.0-M5 (which contains this improvement). Here I see a difference between our published artifact and my (independent) local build that suggests there might be more sources of nondeterminism in this area. The diffoscope report can be found at https://arnout.engelen.eu/rb/akka/2.13.0-M5/2.5.22/reproducible-builds-diffoscope-output-akka-actor_2.13.0-M5-2.5.22.jar/index.html - for example see
I tracked the change down and it was due to a compiler plugin; scalac seems fine! Sorry for the noise and great work here 😄

Great! Thanks for investigating.
Context bounds are desugared into term parameters `evidence$N` and before this commit, the `N` was chosen to be unique in the current compilation unit. This isn't great because it means that adding a new definition with a context bound in the middle of a file would change the desugaring of subsequent definitions in the same file. Even worse, when using incremental compilation we could end up with the same context bound desugared with a different value of `N` on different compilation runs, because the order in which a compilation unit is traversed during Typer is not fixed but depends on how the units that are jointly compiled depend on each other (as demonstrated by the `stable-ctx-bounds` test).

This issue affects all fresh names generated during Typer, but it is especially problematic for context bounds because they're part of the API, and renaming a method parameter forces the recompilation of all files calling that method.

To fix this, we now only require context bound parameters to have unique names among all the parameters of the method. This matches how we already desugar `def foo(using A, B)` into `def foo(using x$1: A, x$2: B)` regardless of the context.

Note that fresh names used in other situations are still problematic for deterministic compilation. Most of the time they're not part of the API checked by Zinc, but they can still lead to overcompilation if they appear in an `inline def`, since the entire body of the `inline def` constitutes its API. In the future, we should follow Scala 2's lead and only require names to be fresh at the method level: scala/scala#6300 (the Scala 2 logic is slightly more complex to handle macros, but I don't think that applies to Scala 3 macros), see scala#7661.

Fixes scala#18080.
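The desugaring in question can be seen in plain Scala. A sketch (Scala 2 `implicit` syntax shown; the parameter name `ev` below is chosen by hand for illustration, where the compiler would synthesize `evidence$1`):

```scala
// Context-bound form: the compiler synthesizes an implicit evidence
// parameter named evidence$N.
def maxOf[A: Ordering](xs: List[A]): A = xs.max

// Roughly the desugared form, written out by hand (the compiler would name
// the parameter evidence$1; `ev` is used here only for illustration):
def maxOfDesugared[A](xs: List[A])(implicit ev: Ordering[A]): A = xs.max(ev)
```

Because the evidence parameter is part of the method's signature, its name is visible to Zinc's API hashing, which is why a unit-wide counter for `N` caused spurious recompilation.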
References scala/scala-dev#405