Test failure: test_runtime_backend_errors_handled[\nfrom databricks.labs.lsql.backends import RuntimeBackend\nfrom databricks.sdk.errors import NotFound\nbackend = RuntimeBackend()\ntry:\n backend.execute("SELECT * FROM default.__RANDOM__")\n return "FAILED"\nexcept NotFound as e:\n return "PASSED"\n] #325

Description

❌ test_runtime_backend_errors_handled[\nfrom databricks.labs.lsql.backends import RuntimeBackend\nfrom databricks.sdk.errors import NotFound\nbackend = RuntimeBackend()\ntry:\n backend.execute("SELECT * FROM default.__RANDOM__")\n return "FAILED"\nexcept NotFound as e:\n return "PASSED"\n]: assert '{"ts": "2024...]}}\n"PASSED"' == 'PASSED' (21.938s)
assert '{"ts": "2024...]}}\n"PASSED"' == 'PASSED'
  
  + {"ts": "2024-11-15 13:55:06,659", "level": "ERROR", "logger": "SQLQueryContextLogger", "msg": "[TABLE_OR_VIEW_NOT_FOUND] The table or view `TEST_SCHEMA`.`__RANDOM__` cannot be found. Verify the spelling and correctness of the schema and catalog.\nIf you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.\nTo tolerate the error on drop use DROP VIEW IF EXISTS or DROP TABLE IF EXISTS. SQLSTATE: 42P01", "context": {"error_class": "TABLE_OR_VIEW_NOT_FOUND"}, "exception": {"class": "Py4JJavaError", "msg": "An error occurred while calling o389.sql.\n: org.apache.spark.sql.catalyst.ExtendedAnalysisException: [TABLE_OR_VIEW_NOT_FOUND] The table or view `TEST_SCHEMA`.`__RANDOM__` cannot be found. Verify the spelling and correctness of the schema and catalog.\nIf you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.\nTo tolerate the error on drop use DROP VIEW IF EXISTS or DROP TABLE IF EXISTS. SQLSTATE: 42P01; line 1 pos 14;\n'Project [*]\n+- 'UnresolvedRelation [TEST_SCHEMA, __RANDOM__], [], false\n\n\tat org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.tableNotFound(package.scala:90)\n\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$2(CheckAnalysis.scala:258)\n\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$2$adapted(CheckAnalysis.scala:231)\n\tat org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:287)\n\tat org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:286)\n\tat org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:286)\n\tat scala.collection.Iterator.foreach(Iterator.scala:943)\n\tat scala.collection.Iterator.foreach$(Iterator.scala:943)\n\tat scala.collection.AbstractIterator.foreach(Iterator.scala:1431)\n\tat scala.collection.IterableLike.foreach(IterableLike.scala:74)\n\tat scala.collection.IterableLike.foreach$(IterableLike.scala:73)\n\tat scala.collection.AbstractIterable.foreach(Iterable.scala:56)\n\tat org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:286)\n\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0(CheckAnalysis.scala:231)\n\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0$(CheckAnalysis.scala:213)\n\tat org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis0(Analyzer.scala:388)\n\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$1(CheckAnalysis.scala:198)\n\tat scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)\n\tat com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)\n\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis(CheckAnalysis.scala:185)\n\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis$(CheckAnalysis.scala:185)\n\tat org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:388)\n\tat org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$2(Analyzer.scala:443)\n\tat scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)\n\tat org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:193)\n\tat org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:443)\n\tat 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:443)\n\tat org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:440)\n\tat org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:264)\n\tat com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)\n\tat org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:472)\n\tat org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$5(QueryExecution.scala:562)\n\tat org.apache.spark.sql.execution.SQLExecution$.withExecutionPhase(SQLExecution.scala:144)\n\tat org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$4(QueryExecution.scala:562)\n\tat org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:1125)\n\tat org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:561)\n\tat com.databricks.util.LexicalThreadLocal$Handle.runWith(LexicalThreadLocal.scala:63)\n\tat org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:557)\n\tat org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1273)\n\tat org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:557)\n\tat org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:258)\n\tat org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:257)\n\tat org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:239)\n\tat org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:131)\n\tat org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1273)\n\tat org.apache.spark.sql.SparkSession.$anonfun$withActiveAndFrameProfiler$1(SparkSession.scala:1280)\n\tat com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)\n\tat org.apache.spark.sql.SparkSession.withActiveAndFrameProfiler(SparkSession.scala:1280)\n\tat org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:123)\n\tat org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:969)\n\tat org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1273)\n\tat org.apache.spark.sql.SparkSession.sql(SparkSession.scala:933)\n\tat org.apache.spark.sql.SparkSession.sql(SparkSession.scala:992)\n\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)\n\tat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.base/java.lang.reflect.Method.invoke(Method.java:568)\n\tat py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)\n\tat py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:397)\n\tat py4j.Gateway.invoke(Gateway.java:306)\n\tat py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)\n\tat py4j.commands.CallCommand.execute(CallCommand.java:79)\n\tat py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:199)\n\tat py4j.ClientServerConnection.run(ClientServerConnection.java:119)\n\tat java.base/java.lang.Thread.run(Thread.java:840)\n", "stacktrace": ["Traceback (most recent call last):", "  File \"/databricks/spark/python/pyspark/errors/exceptions/captured.py\", line 263, in deco", "    return f(*a, **kw)", "           ^^^^^^^^^^^", "  File 
\"/databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/protocol.py\", line 326, in get_return_value", "    raise Py4JJavaError(", "py4j.protocol.Py4JJavaError: An error occurred while calling o389.sql.", ": org.apache.spark.sql.catalyst.ExtendedAnalysisException: [TABLE_OR_VIEW_NOT_FOUND] The table or view `TEST_SCHEMA`.`__RANDOM__` cannot be found. Verify the spelling and correctness of the schema and catalog.", "If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.", "To tolerate the error on drop use DROP VIEW IF EXISTS or DROP TABLE IF EXISTS. SQLSTATE: 42P01; line 1 pos 14;", "'Project [*]", "+- 'UnresolvedRelation [TEST_SCHEMA, __RANDOM__], [], false", "", "\tat org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.tableNotFound(package.scala:90)", "\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$2(CheckAnalysis.scala:258)", "\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$2$adapted(CheckAnalysis.scala:231)", "\tat org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:287)", "\tat org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:286)", "\tat org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:286)", "\tat scala.collection.Iterator.foreach(Iterator.scala:943)", "\tat scala.collection.Iterator.foreach$(Iterator.scala:943)", "\tat scala.collection.AbstractIterator.foreach(Iterator.scala:1431)", "\tat scala.collection.IterableLike.foreach(IterableLike.scala:74)", "\tat scala.collection.IterableLike.foreach$(IterableLike.scala:73)", "\tat scala.collection.AbstractIterable.foreach(Iterable.scala:56)", "\tat org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:286)", "\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0(CheckAnalysis.scala:231)", "\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0$(CheckAnalysis.scala:213)", "\tat org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis0(Analyzer.scala:388)", "\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$1(CheckAnalysis.scala:198)", "\tat scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)", "\tat com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)", "\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis(CheckAnalysis.scala:185)", "\tat org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis$(CheckAnalysis.scala:185)", "\tat org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:388)", "\tat org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$2(Analyzer.scala:443)", "\tat scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)", "\tat org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:193)", "\tat org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:443)", "\tat org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:443)", "\tat org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:440)", "\tat org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:264)", "\tat com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)", "\tat 
org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:472)", "\tat org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$5(QueryExecution.scala:562)", "\tat org.apache.spark.sql.execution.SQLExecution$.withExecutionPhase(SQLExecution.scala:144)", "\tat org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$4(QueryExecution.scala:562)", "\tat org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:1125)", "\tat org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:561)", "\tat com.databricks.util.LexicalThreadLocal$Handle.runWith(LexicalThreadLocal.scala:63)", "\tat org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:557)", "\tat org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1273)", "\tat org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:557)", "\tat org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:258)", "\tat org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:257)", "\tat org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:239)", "\tat org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:131)", "\tat org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1273)", "\tat org.apache.spark.sql.SparkSession.$anonfun$withActiveAndFrameProfiler$1(SparkSession.scala:1280)", "\tat com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)", "\tat org.apache.spark.sql.SparkSession.withActiveAndFrameProfiler(SparkSession.scala:1280)", "\tat org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:123)", "\tat org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:969)", "\tat org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1273)", "\tat org.apache.spark.sql.SparkSession.sql(SparkSession.scala:933)", "\tat org.apache.spark.sql.SparkSession.sql(SparkSession.scala:992)", "\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)", "\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)", "\tat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)", "\tat java.base/java.lang.reflect.Method.invoke(Method.java:568)", "\tat py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)", "\tat py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:397)", "\tat py4j.Gateway.invoke(Gateway.java:306)", "\tat py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)", "\tat py4j.commands.CallCommand.execute(CallCommand.java:79)", "\tat py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:199)", "\tat py4j.ClientServerConnection.run(ClientServerConnection.java:119)", "\tat java.base/java.lang.Thread.run(Thread.java:840)"]}}
  - PASSED
  + "PASSED"
  ? +      +
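
The test expected the bare string `PASSED`, but the remote command's output carried a `SQLQueryContextLogger` ERROR record followed by the JSON-encoded return value `"PASSED"`, so the string comparison failed. For readability, this is the snippet embedded in the parametrized test ID above with the `\n` escapes expanded; the bare top-level `return` statements are only legal because the blueprint command runner converts them before sending the code to the cluster (see the "converted return statement" warning further down):

```python
# Test snippet from the parametrized test ID, unescaped for readability.
# Not standalone-runnable: the top-level `return`s are rewritten by the
# blueprint command runner before remote execution.
from databricks.labs.lsql.backends import RuntimeBackend
from databricks.sdk.errors import NotFound

backend = RuntimeBackend()
try:
    backend.execute("SELECT * FROM default.__RANDOM__")
    return "FAILED"  # querying a missing table must not succeed
except NotFound as e:
    return "PASSED"  # RuntimeBackend should translate the Spark error to NotFound
```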
[gw2] linux -- Python 3.10.15 /home/runner/work/lsql/lsql/.venv/bin/python
13:54 DEBUG [databricks.sdk] Loaded from environment
13:54 DEBUG [databricks.sdk] Ignoring pat auth, because metadata-service is preferred
13:54 DEBUG [databricks.sdk] Ignoring basic auth, because metadata-service is preferred
13:54 DEBUG [databricks.sdk] Attempting to configure auth: metadata-service
13:54 INFO [databricks.sdk] Using Databricks Metadata Service authentication
13:54 DEBUG [databricks.sdk] GET /api/2.0/preview/scim/v2/Me
< 200 OK
< {
<   "active": true,
<   "displayName": "labs-runtime-identity",
<   "emails": [
<     {
<       "primary": true,
<       "type": "work",
<       "value": "**REDACTED**"
<     }
<   ],
<   "entitlements": [
<     {
<       "value": "**REDACTED**"
<     },
<     "... (1 additional elements)"
<   ],
<   "externalId": "d0f9bd2c-5651-45fd-b648-12a3fc6375c4",
<   "groups": [
<     {
<       "$ref": "Groups/300667344111082",
<       "display": "labs.scope.runtime",
<       "type": "direct",
<       "value": "**REDACTED**"
<     }
<   ],
<   "id": "4643477475987733",
<   "name": {
<     "givenName": "labs-runtime-identity"
<   },
<   "schemas": [
<     "urn:ietf:params:scim:schemas:core:2.0:User",
<     "... (1 additional elements)"
<   ],
<   "userName": "4106dc97-a963-48f0-a079-a578238959a6"
< }
13:54 DEBUG [databricks.labs.blueprint.wheels] Building wheel for /tmp/tmpe9z13g67/working-copy in /tmp/tmpe9z13g67
13:54 DEBUG [databricks.labs.blueprint.installation] Uploading: /Users/4106dc97-a963-48f0-a079-a578238959a6/.6CQg/wheels/databricks_labs_lsql-0.13.1+320241115135446-py3-none-any.whl
13:54 DEBUG [databricks.sdk] POST /api/2.0/workspace/import
> [raw stream]
< 404 Not Found
< {
<   "error_code": "RESOURCE_DOES_NOT_EXIST",
<   "message": "The parent folder (/Users/4106dc97-a963-48f0-a079-a578238959a6/.6CQg/wheels) does not exist."
< }
13:54 DEBUG [databricks.labs.blueprint.installation] Creating missing folders: /Users/4106dc97-a963-48f0-a079-a578238959a6/.6CQg/wheels
13:54 DEBUG [databricks.sdk] POST /api/2.0/workspace/mkdirs
> {
>   "path": "/Users/4106dc97-a963-48f0-a079-a578238959a6/.6CQg/wheels"
> }
< 200 OK
< {}
13:54 DEBUG [databricks.sdk] POST /api/2.0/workspace/import
> [raw stream]
< 200 OK
< {
<   "object_id": 804190547935364
< }
13:54 DEBUG [databricks.labs.blueprint.installation] Converting Version into JSON format
13:54 DEBUG [databricks.labs.blueprint.installation] Uploading: /Users/4106dc97-a963-48f0-a079-a578238959a6/.6CQg/version.json
13:54 DEBUG [databricks.sdk] POST /api/2.0/workspace/import
> [raw stream]
< 200 OK
< {
<   "object_id": 804190547935368
< }
13:54 DEBUG [databricks.sdk] GET /api/2.1/clusters/get?cluster_id=DATABRICKS_CLUSTER_ID
< 200 OK
< {
<   "autotermination_minutes": 60,
<   "CLOUD_ENV_attributes": {
<     "availability": "SPOT_WITH_FALLBACK_AZURE",
<     "first_on_demand": 2147483647,
<     "spot_bid_max_price": -1.0
<   },
<   "cluster_cores": 8.0,
<   "cluster_id": "DATABRICKS_CLUSTER_ID",
<   "cluster_memory_mb": 32768,
<   "cluster_name": "Scoped MSI Cluster: runtime (Single Node, Single User)",
<   "cluster_source": "UI",
<   "creator_user_name": "[email protected]",
<   "custom_tags": {
<     "ResourceClass": "SingleNode"
<   },
<   "data_security_mode": "SINGLE_USER",
<   "TEST_SCHEMA_tags": {
<     "Budget": "opex.sales.labs",
<     "ClusterId": "DATABRICKS_CLUSTER_ID",
<     "ClusterName": "Scoped MSI Cluster: runtime (Single Node, Single User)",
<     "Creator": "[email protected]",
<     "DatabricksInstanceGroupId": "-8854613105865987054",
<     "DatabricksInstancePoolCreatorId": "4183391249163402",
<     "DatabricksInstancePoolId": "TEST_INSTANCE_POOL_ID",
<     "Owner": "[email protected]",
<     "Vendor": "Databricks"
<   },
<   "disk_spec": {},
<   "driver": {
<     "host_private_ip": "10.179.8.14",
<     "instance_id": "f335b24df03e466b8efb19a708cf7d9c",
<     "node_attributes": {
<       "is_spot": false
<     },
<     "node_id": "b993377df44e4a408921281be8db0393",
<     "private_ip": "10.179.10.14",
<     "public_dns": "",
<     "start_timestamp": 1731678282507
<   },
<   "driver_healthy": true,
<   "driver_instance_pool_id": "TEST_INSTANCE_POOL_ID",
<   "driver_instance_source": {
<     "instance_pool_id": "TEST_INSTANCE_POOL_ID"
<   },
<   "driver_node_type_id": "Standard_D8as_v4",
<   "effective_spark_version": "16.0.x-scala2.12",
<   "enable_elastic_disk": true,
<   "enable_local_disk_encryption": false,
<   "init_scripts_safe_mode": false,
<   "instance_pool_id": "TEST_INSTANCE_POOL_ID",
<   "instance_source": {
<     "instance_pool_id": "TEST_INSTANCE_POOL_ID"
<   },
<   "jdbc_port": 10000,
<   "last_activity_time": 1731678347945,
<   "last_restarted_time": 1731678323374,
<   "last_state_loss_time": 1731678323349,
<   "node_type_id": "Standard_D8as_v4",
<   "num_workers": 0,
<   "pinned_by_user_name": "4183391249163402",
<   "single_user_name": "4106dc97-a963-48f0-a079-a578238959a6",
<   "spark_conf": {
<     "spark.databricks.cluster.profile": "singleNode",
<     "spark.master": "local[*]"
<   },
<   "spark_context_id": 7133597207159756379,
<   "spark_version": "16.0.x-scala2.12",
<   "spec": {
<     "autotermination_minutes": 60,
<     "cluster_name": "Scoped MSI Cluster: runtime (Single Node, Single User)",
<     "custom_tags": {
<       "ResourceClass": "SingleNode"
<     },
<     "data_security_mode": "SINGLE_USER",
<     "instance_pool_id": "TEST_INSTANCE_POOL_ID",
<     "num_workers": 0,
<     "single_user_name": "4106dc97-a963-48f0-a079-a578238959a6",
<     "spark_conf": {
<       "spark.databricks.cluster.profile": "singleNode",
<       "spark.master": "local[*]"
<     },
<     "spark_version": "16.0.x-scala2.12"
<   },
<   "start_time": 1731598210709,
<   "state": "RUNNING",
<   "state_message": ""
< }
13:54 DEBUG [databricks.sdk] POST /api/1.2/contexts/create
> {
>   "clusterId": "DATABRICKS_CLUSTER_ID",
>   "language": "python"
> }
< 200 OK
< {
<   "id": "688786901056936228"
< }
13:54 DEBUG [databricks.sdk] GET /api/1.2/contexts/status?clusterId=DATABRICKS_CLUSTER_ID&contextId=688786901056936228
< 200 OK
< {
<   "id": "688786901056936228",
<   "status": "Pending"
< }
13:54 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, context_id=688786901056936228: (ContextStatus.PENDING) current status: ContextStatus.PENDING (sleeping ~1s)
13:54 DEBUG [databricks.sdk] GET /api/1.2/contexts/status?clusterId=DATABRICKS_CLUSTER_ID&contextId=688786901056936228
< 200 OK
< {
<   "id": "688786901056936228",
<   "status": "Pending"
< }
13:54 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, context_id=688786901056936228: (ContextStatus.PENDING) current status: ContextStatus.PENDING (sleeping ~2s)
13:54 DEBUG [databricks.sdk] GET /api/1.2/contexts/status?clusterId=DATABRICKS_CLUSTER_ID&contextId=688786901056936228
< 200 OK
< {
<   "id": "688786901056936228",
<   "status": "Running"
< }
13:54 DEBUG [databricks.sdk] POST /api/1.2/commands/execute
> {
>   "clusterId": "DATABRICKS_CLUSTER_ID",
>   "command": "get_ipython().run_line_magic('pip', 'install /Workspace/Users/4106dc97-a963-48f0-a079-a578238959... (110 more bytes)",
>   "contextId": "688786901056936228",
>   "language": "python"
> }
< 200 OK
< {
<   "id": "3714a43b8ef74fb2acd90aff247140d5"
< }
13:54 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=3714a43b8ef74fb2acd90aff247140d5&contextId=688786901056936228
< 200 OK
< {
<   "id": "3714a43b8ef74fb2acd90aff247140d5",
<   "results": null,
<   "status": "Running"
< }
13:54 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=3714a43b8ef74fb2acd90aff247140d5, context_id=688786901056936228: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~1s)
13:54 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=3714a43b8ef74fb2acd90aff247140d5&contextId=688786901056936228
< 200 OK
< {
<   "id": "3714a43b8ef74fb2acd90aff247140d5",
<   "results": null,
<   "status": "Running"
< }
13:54 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=3714a43b8ef74fb2acd90aff247140d5, context_id=688786901056936228: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~2s)
13:55 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=3714a43b8ef74fb2acd90aff247140d5&contextId=688786901056936228
< 200 OK
< {
<   "id": "3714a43b8ef74fb2acd90aff247140d5",
<   "results": null,
<   "status": "Running"
< }
13:55 DEBUG [databricks.sdk] cluster_id=DATABRICKS_CLUSTER_ID, command_id=3714a43b8ef74fb2acd90aff247140d5, context_id=688786901056936228: (CommandStatus.RUNNING) current status: CommandStatus.RUNNING (sleeping ~3s)
13:55 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=3714a43b8ef74fb2acd90aff247140d5&contextId=688786901056936228
< 200 OK
< {
<   "id": "3714a43b8ef74fb2acd90aff247140d5",
<   "results": {
<     "data": "Processing /Workspace/Users/4106dc97-a963-48f0-a079-a578238959a6/.6CQg/wheels/databricks_labs_ls... (3270 more bytes)",
<     "resultType": "text"
<   },
<   "status": "Finished"
< }
13:55 DEBUG [databricks.sdk] POST /api/1.2/commands/execute
> {
>   "clusterId": "DATABRICKS_CLUSTER_ID",
>   "command": "import json\nfrom databricks.labs.lsql.backends import RuntimeBackend\nfrom databricks.sdk.errors ... (189 more bytes)",
>   "contextId": "688786901056936228",
>   "language": "python"
> }
< 200 OK
< {
<   "id": "acd4f82cd89343a08ee6054571fa4401"
< }
13:55 DEBUG [databricks.sdk] GET /api/1.2/commands/status?clusterId=DATABRICKS_CLUSTER_ID&commandId=acd4f82cd89343a08ee6054571fa4401&contextId=688786901056936228
< 200 OK
< {
<   "id": "acd4f82cd89343a08ee6054571fa4401",
<   "results": {
<     "data": "{\"ts\": \"2024-11-15 13:55:06,659\", \"level\": \"ERROR\", \"logger\": \"SQLQueryContextLogger\", \"msg\": \"[... (13306 more bytes)",
<     "resultType": "text"
<   },
<   "status": "Finished"
< }
13:55 WARNING [databricks.sdk] cannot parse converted return statement. Just returning text
Traceback (most recent call last):
  File "/home/runner/work/lsql/lsql/.venv/lib/python3.10/site-packages/databricks/labs/blueprint/commands.py", line 123, in run
    return json.loads(results.data)
  File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/json/decoder.py", line 340, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 13394)
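
Root cause, as far as the log shows: blueprint's command runner calls `json.loads()` on the command output (frame above), but here that output is the ERROR log record followed by the JSON-encoded return value on a second line. The decoder parses the first JSON object and then raises `Extra data`; the SDK falls back to returning the raw text (the WARNING above), and the assertion compares that whole blob against `PASSED`. A minimal sketch of the failure mode, with a hypothetical last-line mitigation that is not the library's current behavior:

```python
import json

# Shape of this command's stdout: an ERROR log record, then the
# JSON-encoded return value written by the converted `return` statement.
data = '{"ts": "2024-11-15 13:55:06,659", "level": "ERROR", "msg": "..."}\n"PASSED"'

try:
    json.loads(data)
except json.JSONDecodeError as e:
    print(e)  # -> Extra data: line 2 column 1 (...)

# Hypothetical mitigation: decode only the last non-empty line, where the
# converted `return` writes its value.
last_line = data.rstrip().splitlines()[-1]
assert json.loads(last_line) == "PASSED"
```

Either suppressing the `SQLQueryContextLogger` output on the cluster or making the result parser tolerant of leading log lines would plausibly make this test deterministic again.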

Running from nightly #1
