
"The Python kernel is unresponsive" when fitting a reasonable sized sparse matrix into NearestNeighbors #31059


Open
fabienarnaud opened this issue Mar 24, 2025 · 6 comments

Comments


fabienarnaud commented Mar 24, 2025

Describe the bug

Hi all,

I have a Python job that has been running daily for the past few years; it uses NearestNeighbors to find best matches.
All of a sudden, in both our TEST and PRD environments, the code has been crashing in the NearestNeighbors call with the message "The Python kernel is unresponsive". This started on Friday, 21 March 2025.

What puzzles me is that we haven't modified our code, the data hasn't changed (at least in our TEST environment), and we haven't changed the version of scikit-learn.
The exact command that throws the error is:

nbrs = NearestNeighbors(n_neighbors = 1, metric = 'cosine').fit(X)

where X is a compressed sparse row (CSR) matrix of shape 38506 x 53709.

We run the code on Databricks (runtime 15.4LTS, where scikit-learn is on 1.3.0).
I also tried with scikit-learn 1.4.2 (preinstalled in Databricks runtime 16.2) but had the same issue.

The error suggests a memory issue, but I'm struggling to understand why this would happen now, when the context is exactly the same as before. Furthermore, the same code on the same Databricks cluster runs successfully in just a few seconds on another data set that is at least 6x bigger.

I'm not a data scientist, so I'm quite confused as to why this would no longer run. Since our environment didn't change, I was wondering if anything has changed with respect to scikit-learn v1.3.0 for any odd reason, or if you have heard anything similar recently from other users?

Steps/Code to Reproduce

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# df_table is a pandas DataFrame with 7 columns and 38506 rows;
# 'b.concat_match_col' is the column used for matching. The code that
# builds the dataframe is too long to include here.
df_table = df_table.toPandas().add_prefix("b.")
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(1, 4))
X = vectorizer.fit_transform(df_table["b.concat_match_col"].values.astype("U"))
nbrs = NearestNeighbors(n_neighbors=1, metric="cosine").fit(X)
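Since the real dataframe cannot be shared, here is a self-contained sketch of the same pipeline on synthetic strings; the column name and row count are illustrative, not the real data:

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Synthetic stand-in for the real 38506-row dataframe.
df_table = pd.DataFrame(
    {"b.concat_match_col": [f"company {i} street {i % 97}" for i in range(1000)]}
)
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(1, 4))
X = vectorizer.fit_transform(df_table["b.concat_match_col"].values.astype("U"))
nbrs = NearestNeighbors(n_neighbors=1, metric="cosine").fit(X)
distances, indices = nbrs.kneighbors(X)  # best match for each row
print(X.shape, distances.shape)
```

This version runs in well under a second locally, which is consistent with the fit itself being lightweight.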

Expected Results

No error should be thrown

Actual Results

The NearestNeighbors call now fails with the following:

[screenshot: "The Python kernel is unresponsive" error dialog]

Versions

System:
    python: 3.11.0rc1 (main, Aug 12 2022, 10:02:14) [GCC 11.2.0]
executable: /local_disk0/.ephemeral_nfs/envs/pythonEnv-de91d56e-aaea-4084-9eca-24ddcb3a19d4/bin/python
   machine: Linux-5.15.0-1078-azure-x86_64-with-glibc2.35

Python dependencies:
      sklearn: 1.3.0
          pip: 23.2.1
   setuptools: 68.0.0
        numpy: 1.23.5
        scipy: 1.11.1
       Cython: 0.29.32
       pandas: 1.5.3
   matplotlib: 3.7.2
       joblib: 1.2.0
threadpoolctl: 2.2.0

Built with OpenMP: True
Exception ignored on calling ctypes callback function: <function _ThreadpoolInfo._find_modules_with_dl_iterate_phdr.<locals>.match_module_callback at 0x7f407fbc1b20>
Traceback (most recent call last):
  File "/databricks/python/lib/python3.11/site-packages/threadpoolctl.py", line 400, in match_module_callback
    self._make_module_from_path(filepath)
  File "/databricks/python/lib/python3.11/site-packages/threadpoolctl.py", line 515, in _make_module_from_path
    module = module_class(filepath, prefix, user_api, internal_api)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/databricks/python/lib/python3.11/site-packages/threadpoolctl.py", line 606, in __init__
    self.version = self.get_version()
                   ^^^^^^^^^^^^^^^^^^
  File "/databricks/python/lib/python3.11/site-packages/threadpoolctl.py", line 646, in get_version
    config = get_config().split()
             ^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'split'

threadpoolctl info:
       filepath: /databricks/python3/lib/python3.11/site-packages/scipy.libs/libopenblasp-r0-23e5df77.3.21.dev.so
         prefix: libopenblas
       user_api: blas
   internal_api: openblas
        version: 0.3.21.dev
    num_threads: 8
threading_layer: pthreads
   architecture: Zen

       filepath: /databricks/python3/lib/python3.11/site-packages/scikit_learn.libs/libgomp-a34b3233.so.1.0.0
         prefix: libgomp
       user_api: openmp
   internal_api: openmp
        version: None
    num_threads: 8
@fabienarnaud fabienarnaud added Bug Needs Triage Issue requires triage labels Mar 24, 2025

sir-rip commented Mar 24, 2025

Hey @fabienarnaud,

It looks like the issue might be related to OpenMP threading conflicts or an unstable OpenBLAS version.
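If a threading conflict is the suspect, capping the thread pools around the fit is a cheap experiment. `threadpool_limits` is the documented threadpoolctl API; the limit of 1 and the matrix shape below are arbitrary test values, not a recommendation:

```python
from threadpoolctl import threadpool_limits
from scipy.sparse import random as sparse_random
from sklearn.neighbors import NearestNeighbors

# Small synthetic CSR matrix just to exercise the code path.
X = sparse_random(500, 300, density=0.01, format="csr")

# Serialize OpenBLAS/OpenMP while fitting to rule out thread oversubscription.
with threadpool_limits(limits=1):
    nbrs = NearestNeighbors(n_neighbors=1, metric="cosine").fit(X)
print(nbrs.n_samples_fit_)
```

Setting `OMP_NUM_THREADS=1` in the environment before the process starts would test the same hypothesis without code changes.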

Member

betatim commented Mar 24, 2025

Thanks for reporting this. I think "Kernel not responsive" means that you are running your code in a Jupyter notebook or something similar. It isn't an error message from scikit-learn.

I recommend you take this up with Databricks support. Once a version of scikit-learn is released, it does not change retroactively.

@fabienarnaud
Author

Thank you both for your quick response! I will report this back to Databricks.

@glemaitre glemaitre added Needs Info and removed Needs Triage Issue requires triage labels Mar 25, 2025
@glemaitre
Member

I know that we introduced some improvements in pairwise distances, but that was in 1.2.
It would indeed be useful to get the traceback so that we have something actionable here. I'm not closing this issue, but I'm adding the "Needs Info" tag. @fabienarnaud, feel free to report any additional information that could be useful.

@fabienarnaud
Author

fabienarnaud commented Mar 25, 2025

Hi @glemaitre,

I tried to collect info with the traceback module, but since the Databricks notebook kernel crashes, my code never gets to print the traces from the exception handler.
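One way to still get a Python-level traceback when the kernel dies is the stdlib `faulthandler` module: it can write to a file on hard crashes (SIGSEGV, SIGABRT) and can also dump all thread stacks if the cell is still stuck after a timeout. A sketch, with an illustrative log path:

```python
import faulthandler

# Illustrative path; anywhere writable on the driver node works.
log = open("/tmp/kernel_crash.log", "w")
faulthandler.enable(file=log)                    # dump stacks on a hard crash
faulthandler.dump_traceback_later(60, file=log)  # also dump if still stuck after 60 s

# ... place the NearestNeighbors fit here ...

faulthandler.cancel_dump_traceback_later()
faulthandler.disable()
log.close()
print(log.closed)
```

After a crash, the log file on the driver should show where each thread was stuck, which would help narrow this down.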

I'm pasting below the Log4j output of the Databricks cluster, in case it can help.

25/03/25 14:28:24 WARN FairSchedulableBuilder: A job was submitted with scheduler pool 1742912591363, which has not been configured. This can happen when the file that pools are read from isn't set, or when that file doesn't contain 1742912591363. Created 1742912591363 with default configuration (schedulingMode: FIFO, minShare: 0, weight: 1
25/03/25 14:28:24 INFO FairSchedulableBuilder: Added task set TaskSet_9.0 tasks to pool 1742912591363
25/03/25 14:28:24 INFO TaskSetManager: Starting task 0.0 in stage 9.0 (TID 6) (10.139.64.14, executor 0, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:24 INFO DataSourceFactory$: DataSource Jdbc URL: jdbc:mariadb://consolidated-westeurope-prod-metastore-addl-3.mysql.database.azure.com:3306/organization871544486122877?useSSL=true&sslMode=VERIFY_CA&disableSslHostnameVerification=true&trustServerCertificate=false&serverSslCert=/databricks/common/mysql-ssl-ca-cert.crt
25/03/25 14:28:24 INFO HikariDataSource: metastore-monitor - Starting...
25/03/25 14:28:24 INFO HikariDataSource: metastore-monitor - Start completed.
25/03/25 14:28:24 INFO TaskSetManager: Finished task 0.0 in stage 9.0 (TID 6) in 80 ms on 10.139.64.14 (executor 0) (1/1)
25/03/25 14:28:24 INFO TaskSchedulerImpl: Removed TaskSet 9.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:24 INFO DAGScheduler: ShuffleMapStage 9 (mapPartitionsInternal at PhotonExec.scala:541) finished in 0.098 s
25/03/25 14:28:24 INFO DAGScheduler: looking for newly runnable stages
25/03/25 14:28:24 INFO DAGScheduler: running: Set()
25/03/25 14:28:24 INFO DAGScheduler: waiting: Set()
25/03/25 14:28:24 INFO DAGScheduler: failed: Set()
25/03/25 14:28:24 INFO SparkContext: Starting job: $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63
25/03/25 14:28:24 INFO DAGScheduler: Got job 7 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) with 1 output partitions
25/03/25 14:28:24 INFO DAGScheduler: Final stage: ResultStage 11 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63)
25/03/25 14:28:24 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 10)
25/03/25 14:28:24 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:24 INFO DAGScheduler: Submitting ResultStage 11 (MapPartitionsRDD[64] at $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63), which has no missing parents
25/03/25 14:28:24 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 11 (MapPartitionsRDD[64] at $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:24 INFO TaskSchedulerImpl: Adding task set 11.0 with 1 tasks resource profile 0
25/03/25 14:28:24 INFO TaskSetManager: TaskSet 11.0 using PreferredLocationsV1
25/03/25 14:28:24 INFO FairSchedulableBuilder: Added task set TaskSet_11.0 tasks to pool 1742912591363
25/03/25 14:28:24 INFO TaskSetManager: Starting task 0.0 in stage 11.0 (TID 7) (10.139.64.14, executor 0, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:24 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 3 to 10.139.64.14:45590
25/03/25 14:28:24 INFO TaskSetManager: Finished task 0.0 in stage 11.0 (TID 7) in 33 ms on 10.139.64.14 (executor 0) (1/1)
25/03/25 14:28:24 INFO TaskSchedulerImpl: Removed TaskSet 11.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:24 INFO DAGScheduler: ResultStage 11 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) finished in 0.059 s
25/03/25 14:28:24 INFO DAGScheduler: Job 7 is finished. Cancelling potential speculative or zombie tasks for this job
25/03/25 14:28:24 INFO TaskSchedulerImpl: Cancelling stage 11
25/03/25 14:28:24 INFO TaskSchedulerImpl: Killing all running tasks in stage 11: Stage finished
25/03/25 14:28:24 INFO DAGScheduler: Job 7 finished: $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63, took 0.061878 s
25/03/25 14:28:25 INFO ClusterLoadMonitor: Removed query with execution ID:5. Current active queries:0
25/03/25 14:28:25 INFO SQLExecution:  0 QueryExecution(s) are running
25/03/25 14:28:25 INFO QueryProfileListener: Query profile sent to logger, seq number: 5, app id: app-20250325142329-0000
25/03/25 14:28:25 INFO HikariDataSource: metastore-monitor - Shutdown initiated...
25/03/25 14:28:25 INFO HikariDataSource: metastore-monitor - Shutdown completed.
25/03/25 14:28:25 INFO MetastoreMonitor: Metastore healthcheck successful (connection duration = 555 milliseconds)
25/03/25 14:28:25 INFO ClusterLoadMonitor: Added query with execution ID:6. Current active queries:1
25/03/25 14:28:25 INFO AdaptiveParallelism: Updating parallelism using instant cluster load. Old parallelism: 8, Total cores: 8, Current load: 1, Current Avg load: 1, New parallelism: 8
25/03/25 14:28:25 INFO QueryAnalyzedPlanSizeLogger$: Total number of expressions in the analyzed plan: 18, computed in 0 ms.
25/03/25 14:28:25 INFO CodeGenerator: Code generated in 37.663959 ms
25/03/25 14:28:25 INFO CodeGenerator: Code generated in 17.923099 ms
25/03/25 14:28:25 INFO SparkContext: Starting job: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:95
25/03/25 14:28:25 INFO DAGScheduler: Got job 8 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:95) with 1 output partitions
25/03/25 14:28:25 INFO DAGScheduler: Final stage: ResultStage 12 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:95)
25/03/25 14:28:25 INFO DAGScheduler: Parents of final stage: List()
25/03/25 14:28:25 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:25 INFO DAGScheduler: Submitting ResultStage 12 (MapPartitionsRDD[71] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:95), which has no missing parents
25/03/25 14:28:25 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 12 (MapPartitionsRDD[71] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:95) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:25 INFO TaskSchedulerImpl: Adding task set 12.0 with 1 tasks resource profile 0
25/03/25 14:28:25 INFO TaskSetManager: TaskSet 12.0 using PreferredLocationsV1
25/03/25 14:28:25 INFO FairSchedulableBuilder: Added task set TaskSet_12.0 tasks to pool 1742912591363
25/03/25 14:28:25 INFO TaskSetManager: Starting task 0.0 in stage 12.0 (TID 8) (10.139.64.14, executor 0, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:26 INFO TaskSetManager: Finished task 0.0 in stage 12.0 (TID 8) in 446 ms on 10.139.64.14 (executor 0) (1/1)
25/03/25 14:28:26 INFO TaskSchedulerImpl: Removed TaskSet 12.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:26 INFO DAGScheduler: ResultStage 12 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:95) finished in 0.469 s
25/03/25 14:28:26 INFO DAGScheduler: Job 8 is finished. Cancelling potential speculative or zombie tasks for this job
25/03/25 14:28:26 INFO TaskSchedulerImpl: Cancelling stage 12
25/03/25 14:28:26 INFO TaskSchedulerImpl: Killing all running tasks in stage 12: Stage finished
25/03/25 14:28:26 INFO DAGScheduler: Job 8 finished: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:95, took 0.470806 s
25/03/25 14:28:26 INFO ClusterLoadMonitor: Removed query with execution ID:6. Current active queries:0
25/03/25 14:28:26 INFO SQLExecution:  0 QueryExecution(s) are running
25/03/25 14:28:26 INFO QueryProfileListener: Query profile sent to logger, seq number: 6, app id: app-20250325142329-0000
25/03/25 14:28:26 INFO ClusterLoadMonitor: Added query with execution ID:7. Current active queries:1
25/03/25 14:28:26 INFO AdaptiveParallelism: Updating parallelism using instant cluster load. Old parallelism: 8, Total cores: 8, Current load: 1, Current Avg load: 1, New parallelism: 8
25/03/25 14:28:26 INFO QueryAnalyzedPlanSizeLogger$: Total number of expressions in the analyzed plan: 1, computed in 0 ms.
25/03/25 14:28:26 INFO CodeGenerator: Code generated in 16.76943 ms
25/03/25 14:28:26 INFO CodeGenerator: Code generated in 10.985203 ms
25/03/25 14:28:26 INFO SparkContext: Starting job: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:108
25/03/25 14:28:26 INFO DAGScheduler: Got job 9 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:108) with 1 output partitions
25/03/25 14:28:26 INFO DAGScheduler: Final stage: ResultStage 13 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:108)
25/03/25 14:28:26 INFO DAGScheduler: Parents of final stage: List()
25/03/25 14:28:26 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:26 INFO DAGScheduler: Submitting ResultStage 13 (MapPartitionsRDD[77] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:108), which has no missing parents
25/03/25 14:28:26 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 13 (MapPartitionsRDD[77] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:108) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:26 INFO TaskSchedulerImpl: Adding task set 13.0 with 1 tasks resource profile 0
25/03/25 14:28:26 INFO TaskSetManager: TaskSet 13.0 using PreferredLocationsV1
25/03/25 14:28:26 INFO FairSchedulableBuilder: Added task set TaskSet_13.0 tasks to pool 1742912591363
25/03/25 14:28:26 INFO TaskSetManager: Starting task 0.0 in stage 13.0 (TID 9) (10.139.64.15, executor 1, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:26 INFO ClusterLoadAvgHelper: Current cluster load: 1, Old Ema: 1.0, New Ema: 1.0 
25/03/25 14:28:27 INFO TaskSetManager: Finished task 0.0 in stage 13.0 (TID 9) in 655 ms on 10.139.64.15 (executor 1) (1/1)
25/03/25 14:28:27 INFO TaskSchedulerImpl: Removed TaskSet 13.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:27 INFO DAGScheduler: ResultStage 13 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:108) finished in 0.664 s
25/03/25 14:28:27 INFO DAGScheduler: Job 9 is finished. Cancelling potential speculative or zombie tasks for this job
25/03/25 14:28:27 INFO TaskSchedulerImpl: Cancelling stage 13
25/03/25 14:28:27 INFO TaskSchedulerImpl: Killing all running tasks in stage 13: Stage finished
25/03/25 14:28:27 INFO DAGScheduler: Job 9 finished: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:108, took 0.666057 s
25/03/25 14:28:27 INFO ClusterLoadMonitor: Removed query with execution ID:7. Current active queries:0
25/03/25 14:28:27 INFO SQLExecution:  0 QueryExecution(s) are running
25/03/25 14:28:27 INFO QueryProfileListener: Query profile sent to logger, seq number: 7, app id: app-20250325142329-0000
25/03/25 14:28:27 INFO ClusterLoadMonitor: Added query with execution ID:8. Current active queries:1
25/03/25 14:28:27 INFO AdaptiveParallelism: Updating parallelism using instant cluster load. Old parallelism: 8, Total cores: 8, Current load: 1, Current Avg load: 1, New parallelism: 8
25/03/25 14:28:27 INFO QueryAnalyzedPlanSizeLogger$: Total number of expressions in the analyzed plan: 1, computed in 0 ms.
25/03/25 14:28:27 INFO SparkContext: Starting job: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:110
25/03/25 14:28:27 INFO DAGScheduler: Got job 10 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:110) with 1 output partitions
25/03/25 14:28:27 INFO DAGScheduler: Final stage: ResultStage 14 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:110)
25/03/25 14:28:27 INFO DAGScheduler: Parents of final stage: List()
25/03/25 14:28:27 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:27 INFO DAGScheduler: Submitting ResultStage 14 (MapPartitionsRDD[83] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:110), which has no missing parents
25/03/25 14:28:27 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 14 (MapPartitionsRDD[83] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:110) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:27 INFO TaskSchedulerImpl: Adding task set 14.0 with 1 tasks resource profile 0
25/03/25 14:28:27 INFO TaskSetManager: TaskSet 14.0 using PreferredLocationsV1
25/03/25 14:28:27 INFO FairSchedulableBuilder: Added task set TaskSet_14.0 tasks to pool 1742912591363
25/03/25 14:28:27 INFO TaskSetManager: Starting task 0.0 in stage 14.0 (TID 10) (10.139.64.15, executor 1, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:27 INFO TaskSetManager: Finished task 0.0 in stage 14.0 (TID 10) in 78 ms on 10.139.64.15 (executor 1) (1/1)
25/03/25 14:28:27 INFO TaskSchedulerImpl: Removed TaskSet 14.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:27 INFO DAGScheduler: ResultStage 14 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:110) finished in 0.093 s
25/03/25 14:28:27 INFO DAGScheduler: Job 10 is finished. Cancelling potential speculative or zombie tasks for this job
25/03/25 14:28:27 INFO TaskSchedulerImpl: Cancelling stage 14
25/03/25 14:28:27 INFO TaskSchedulerImpl: Killing all running tasks in stage 14: Stage finished
25/03/25 14:28:27 INFO DAGScheduler: Job 10 finished: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:110, took 0.102283 s
25/03/25 14:28:27 INFO ClusterLoadMonitor: Removed query with execution ID:8. Current active queries:0
25/03/25 14:28:27 INFO SQLExecution:  0 QueryExecution(s) are running
25/03/25 14:28:27 INFO QueryProfileListener: Query profile sent to logger, seq number: 8, app id: app-20250325142329-0000
25/03/25 14:28:27 INFO AbstractParser: EXPERIMENTAL: Query cached 6 DFA states in the parser. Total cached DFA states: 84.Driver memory: 8041005056.
25/03/25 14:28:27 INFO ClusterLoadMonitor: Added query with execution ID:9. Current active queries:1
25/03/25 14:28:27 INFO AdaptiveParallelism: Updating parallelism using instant cluster load. Old parallelism: 8, Total cores: 8, Current load: 1, Current Avg load: 1, New parallelism: 8
25/03/25 14:28:27 INFO QueryAnalyzedPlanSizeLogger$: Total number of expressions in the analyzed plan: 39, computed in 0 ms.
25/03/25 14:28:28 INFO CodeGenerator: Code generated in 12.649628 ms
25/03/25 14:28:28 INFO DAGScheduler: Registering RDD 94 (mapPartitionsInternal at PhotonExec.scala:541) as input to shuffle 4
25/03/25 14:28:28 INFO DAGScheduler: Got map stage job 11 (submitMapStage at PhotonExec.scala:553) with 1 output partitions
25/03/25 14:28:28 INFO DAGScheduler: Final stage: ShuffleMapStage 15 (mapPartitionsInternal at PhotonExec.scala:541)
25/03/25 14:28:28 INFO DAGScheduler: Parents of final stage: List()
25/03/25 14:28:28 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:28 INFO DAGScheduler: Submitting ShuffleMapStage 15 (MapPartitionsRDD[94] at mapPartitionsInternal at PhotonExec.scala:541), which has no missing parents
25/03/25 14:28:28 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 15 (MapPartitionsRDD[94] at mapPartitionsInternal at PhotonExec.scala:541) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:28 INFO TaskSchedulerImpl: Adding task set 15.0 with 1 tasks resource profile 0
25/03/25 14:28:28 INFO TaskSetManager: TaskSet 15.0 using PreferredLocationsV1
25/03/25 14:28:28 INFO FairSchedulableBuilder: Added task set TaskSet_15.0 tasks to pool 1742912591363
25/03/25 14:28:28 INFO TaskSetManager: Starting task 0.0 in stage 15.0 (TID 11) (10.139.64.14, executor 0, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:28 INFO TaskSetManager: Finished task 0.0 in stage 15.0 (TID 11) in 486 ms on 10.139.64.14 (executor 0) (1/1)
25/03/25 14:28:28 INFO TaskSchedulerImpl: Removed TaskSet 15.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:28 INFO DAGScheduler: ShuffleMapStage 15 (mapPartitionsInternal at PhotonExec.scala:541) finished in 0.515 s
25/03/25 14:28:28 INFO DAGScheduler: looking for newly runnable stages
25/03/25 14:28:28 INFO DAGScheduler: running: Set()
25/03/25 14:28:28 INFO DAGScheduler: waiting: Set()
25/03/25 14:28:28 INFO DAGScheduler: failed: Set()
25/03/25 14:28:28 INFO SparkContext: Starting job: $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63
25/03/25 14:28:28 INFO DAGScheduler: Got job 12 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) with 1 output partitions
25/03/25 14:28:28 INFO DAGScheduler: Final stage: ResultStage 17 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63)
25/03/25 14:28:28 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 16)
25/03/25 14:28:28 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:28 INFO DAGScheduler: Submitting ResultStage 17 (MapPartitionsRDD[100] at $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63), which has no missing parents
25/03/25 14:28:28 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 17 (MapPartitionsRDD[100] at $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:28 INFO TaskSchedulerImpl: Adding task set 17.0 with 1 tasks resource profile 0
25/03/25 14:28:28 INFO TaskSetManager: TaskSet 17.0 using PreferredLocationsV1
25/03/25 14:28:28 INFO FairSchedulableBuilder: Added task set TaskSet_17.0 tasks to pool 1742912591363
25/03/25 14:28:28 INFO TaskSetManager: Starting task 0.0 in stage 17.0 (TID 12) (10.139.64.14, executor 0, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:28 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 4 to 10.139.64.14:45590
25/03/25 14:28:28 INFO TaskSetManager: Finished task 0.0 in stage 17.0 (TID 12) in 24 ms on 10.139.64.14 (executor 0) (1/1)
25/03/25 14:28:28 INFO TaskSchedulerImpl: Removed TaskSet 17.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:28 INFO DAGScheduler: ResultStage 17 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) finished in 0.035 s
25/03/25 14:28:28 INFO DAGScheduler: Job 12 is finished. Cancelling potential speculative or zombie tasks for this job
25/03/25 14:28:28 INFO TaskSchedulerImpl: Cancelling stage 17
25/03/25 14:28:28 INFO TaskSchedulerImpl: Killing all running tasks in stage 17: Stage finished
25/03/25 14:28:28 INFO DAGScheduler: Job 12 finished: $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63, took 0.042058 s
25/03/25 14:28:28 INFO ClusterLoadMonitor: Removed query with execution ID:9. Current active queries:0
25/03/25 14:28:28 INFO SQLExecution:  0 QueryExecution(s) are running
25/03/25 14:28:28 INFO QueryProfileListener: Query profile sent to logger, seq number: 9, app id: app-20250325142329-0000
25/03/25 14:28:29 INFO ClusterLoadMonitor: Added query with execution ID:10. Current active queries:1
25/03/25 14:28:29 INFO AdaptiveParallelism: Updating parallelism using instant cluster load. Old parallelism: 8, Total cores: 8, Current load: 1, Current Avg load: 1, New parallelism: 8
25/03/25 14:28:29 INFO QueryAnalyzedPlanSizeLogger$: Total number of expressions in the analyzed plan: 1, computed in 0 ms.
25/03/25 14:28:29 INFO CodeGenerator: Code generated in 33.51901 ms
25/03/25 14:28:29 INFO CodeGenerator: Code generated in 27.049491 ms
25/03/25 14:28:29 INFO SparkContext: Starting job: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:144
25/03/25 14:28:29 INFO DAGScheduler: Got job 13 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:144) with 1 output partitions
25/03/25 14:28:29 INFO DAGScheduler: Final stage: ResultStage 18 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:144)
25/03/25 14:28:29 INFO DAGScheduler: Parents of final stage: List()
25/03/25 14:28:29 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:29 INFO DAGScheduler: Submitting ResultStage 18 (MapPartitionsRDD[106] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:144), which has no missing parents
25/03/25 14:28:29 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 18 (MapPartitionsRDD[106] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:144) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:29 INFO TaskSchedulerImpl: Adding task set 18.0 with 1 tasks resource profile 0
25/03/25 14:28:29 INFO TaskSetManager: TaskSet 18.0 using PreferredLocationsV1
25/03/25 14:28:29 INFO FairSchedulableBuilder: Added task set TaskSet_18.0 tasks to pool 1742912591363
25/03/25 14:28:29 INFO TaskSetManager: Starting task 0.0 in stage 18.0 (TID 13) (10.139.64.14, executor 0, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:29 INFO ClusterLoadAvgHelper: Current cluster load: 1, Old Ema: 1.0, New Ema: 1.0 
25/03/25 14:28:29 INFO TaskSetManager: Finished task 0.0 in stage 18.0 (TID 13) in 162 ms on 10.139.64.14 (executor 0) (1/1)
25/03/25 14:28:29 INFO TaskSchedulerImpl: Removed TaskSet 18.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:29 INFO DAGScheduler: ResultStage 18 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:144) finished in 0.183 s
25/03/25 14:28:29 INFO DAGScheduler: Job 13 is finished. Cancelling potential speculative or zombie tasks for this job
25/03/25 14:28:29 INFO TaskSchedulerImpl: Cancelling stage 18
25/03/25 14:28:29 INFO TaskSchedulerImpl: Killing all running tasks in stage 18: Stage finished
25/03/25 14:28:29 INFO DAGScheduler: Job 13 finished: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:144, took 0.188547 s
25/03/25 14:28:29 INFO ClusterLoadMonitor: Removed query with execution ID:10. Current active queries:0
25/03/25 14:28:29 INFO SQLExecution:  0 QueryExecution(s) are running
25/03/25 14:28:29 INFO QueryProfileListener: Query profile sent to logger, seq number: 10, app id: app-20250325142329-0000
25/03/25 14:28:29 INFO ClusterLoadMonitor: Added query with execution ID:11. Current active queries:1
25/03/25 14:28:29 INFO AdaptiveParallelism: Updating parallelism using instant cluster load. Old parallelism: 8, Total cores: 8, Current load: 1, Current Avg load: 1, New parallelism: 8
25/03/25 14:28:29 INFO QueryAnalyzedPlanSizeLogger$: Total number of expressions in the analyzed plan: 1, computed in 0 ms.
25/03/25 14:28:29 INFO CodeGenerator: Code generated in 12.907648 ms
25/03/25 14:28:29 INFO CodeGenerator: Code generated in 8.005244 ms
25/03/25 14:28:29 INFO SparkContext: Starting job: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:165
25/03/25 14:28:29 INFO DAGScheduler: Got job 14 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:165) with 1 output partitions
25/03/25 14:28:29 INFO DAGScheduler: Final stage: ResultStage 19 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:165)
25/03/25 14:28:29 INFO DAGScheduler: Parents of final stage: List()
25/03/25 14:28:29 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:29 INFO DAGScheduler: Submitting ResultStage 19 (MapPartitionsRDD[112] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:165), which has no missing parents
25/03/25 14:28:29 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 19 (MapPartitionsRDD[112] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:165) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:29 INFO TaskSchedulerImpl: Adding task set 19.0 with 1 tasks resource profile 0
25/03/25 14:28:29 INFO TaskSetManager: TaskSet 19.0 using PreferredLocationsV1
25/03/25 14:28:29 WARN FairSchedulableBuilder: A job was submitted with scheduler pool 1742912591363, which has not been configured. This can happen when the file that pools are read from isn't set, or when that file doesn't contain 1742912591363. Created 1742912591363 with default configuration (schedulingMode: FIFO, minShare: 0, weight: 1
25/03/25 14:28:29 INFO FairSchedulableBuilder: Added task set TaskSet_19.0 tasks to pool 1742912591363
25/03/25 14:28:29 INFO TaskSetManager: Starting task 0.0 in stage 19.0 (TID 14) (10.139.64.14, executor 0, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:30 INFO TaskSetManager: Finished task 0.0 in stage 19.0 (TID 14) in 126 ms on 10.139.64.14 (executor 0) (1/1)
25/03/25 14:28:30 INFO TaskSchedulerImpl: Removed TaskSet 19.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:30 INFO DAGScheduler: ResultStage 19 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:165) finished in 0.134 s
25/03/25 14:28:30 INFO DAGScheduler: Job 14 is finished. Cancelling potential speculative or zombie tasks for this job
25/03/25 14:28:30 INFO TaskSchedulerImpl: Cancelling stage 19
25/03/25 14:28:30 INFO TaskSchedulerImpl: Killing all running tasks in stage 19: Stage finished
25/03/25 14:28:30 INFO DAGScheduler: Job 14 finished: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:165, took 0.149082 s
25/03/25 14:28:30 INFO ClusterLoadMonitor: Removed query with execution ID:11. Current active queries:0
25/03/25 14:28:30 INFO SQLExecution:  0 QueryExecution(s) are running
25/03/25 14:28:30 INFO QueryProfileListener: Query profile sent to logger, seq number: 11, app id: app-20250325142329-0000
25/03/25 14:28:30 INFO ClusterLoadMonitor: Added query with execution ID:12. Current active queries:1
25/03/25 14:28:30 INFO AdaptiveParallelism: Updating parallelism using instant cluster load. Old parallelism: 8, Total cores: 8, Current load: 1, Current Avg load: 1, New parallelism: 8
25/03/25 14:28:30 INFO QueryAnalyzedPlanSizeLogger$: Total number of expressions in the analyzed plan: 39, computed in 0 ms.
25/03/25 14:28:30 INFO AbstractParser: EXPERIMENTAL: Query cached 0 DFA states in the parser. Total cached DFA states: 84.Driver memory: 8041005056.
25/03/25 14:28:30 INFO CodeGenerator: Code generated in 9.581585 ms
25/03/25 14:28:30 INFO DAGScheduler: Registering RDD 123 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:170) as input to shuffle 5
25/03/25 14:28:30 INFO DAGScheduler: Got map stage job 15 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:170) with 1 output partitions
25/03/25 14:28:30 INFO DAGScheduler: Final stage: ShuffleMapStage 20 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:170)
25/03/25 14:28:30 INFO DAGScheduler: Parents of final stage: List()
25/03/25 14:28:30 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:30 INFO DAGScheduler: Submitting ShuffleMapStage 20 (MapPartitionsRDD[123] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:170), which has no missing parents
25/03/25 14:28:30 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 20 (MapPartitionsRDD[123] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:170) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:30 INFO TaskSchedulerImpl: Adding task set 20.0 with 1 tasks resource profile 0
25/03/25 14:28:30 INFO TaskSetManager: TaskSet 20.0 using PreferredLocationsV1
25/03/25 14:28:30 INFO FairSchedulableBuilder: Added task set TaskSet_20.0 tasks to pool 1742912591363
25/03/25 14:28:30 INFO TaskSetManager: Starting task 0.0 in stage 20.0 (TID 15) (10.139.64.14, executor 0, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:30 INFO TaskSetManager: Finished task 0.0 in stage 20.0 (TID 15) in 332 ms on 10.139.64.14 (executor 0) (1/1)
25/03/25 14:28:30 INFO TaskSchedulerImpl: Removed TaskSet 20.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:30 INFO DAGScheduler: ShuffleMapStage 20 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:170) finished in 0.354 s
25/03/25 14:28:30 INFO DAGScheduler: looking for newly runnable stages
25/03/25 14:28:30 INFO DAGScheduler: running: Set()
25/03/25 14:28:30 INFO DAGScheduler: waiting: Set()
25/03/25 14:28:30 INFO DAGScheduler: failed: Set()
25/03/25 14:28:30 INFO CodeGenerator: Code generated in 13.958915 ms
25/03/25 14:28:30 INFO SparkContext: Starting job: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:170
25/03/25 14:28:30 INFO DAGScheduler: Got job 16 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:170) with 1 output partitions
25/03/25 14:28:30 INFO DAGScheduler: Final stage: ResultStage 22 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:170)
25/03/25 14:28:30 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 21)
25/03/25 14:28:30 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:30 INFO DAGScheduler: Submitting ResultStage 22 (MapPartitionsRDD[129] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:170), which has no missing parents
25/03/25 14:28:30 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 22 (MapPartitionsRDD[129] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:170) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:30 INFO TaskSchedulerImpl: Adding task set 22.0 with 1 tasks resource profile 0
25/03/25 14:28:30 INFO TaskSetManager: TaskSet 22.0 using PreferredLocationsV1
25/03/25 14:28:30 INFO FairSchedulableBuilder: Added task set TaskSet_22.0 tasks to pool 1742912591363
25/03/25 14:28:30 INFO TaskSetManager: Starting task 0.0 in stage 22.0 (TID 16) (10.139.64.15, executor 1, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:30 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 5 to 10.139.64.15:48762
25/03/25 14:28:30 INFO TaskSetManager: Finished task 0.0 in stage 22.0 (TID 16) in 193 ms on 10.139.64.15 (executor 1) (1/1)
25/03/25 14:28:30 INFO TaskSchedulerImpl: Removed TaskSet 22.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:30 INFO DAGScheduler: ResultStage 22 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:170) finished in 0.210 s
25/03/25 14:28:30 INFO DAGScheduler: Job 16 is finished. Cancelling potential speculative or zombie tasks for this job
25/03/25 14:28:30 INFO TaskSchedulerImpl: Cancelling stage 22
25/03/25 14:28:30 INFO TaskSchedulerImpl: Killing all running tasks in stage 22: Stage finished
25/03/25 14:28:30 INFO DAGScheduler: Job 16 finished: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:170, took 0.213008 s
25/03/25 14:28:30 INFO ClusterLoadMonitor: Removed query with execution ID:12. Current active queries:0
25/03/25 14:28:30 INFO SQLExecution:  0 QueryExecution(s) are running
25/03/25 14:28:31 INFO QueryProfileListener: Query profile sent to logger, seq number: 12, app id: app-20250325142329-0000
25/03/25 14:28:31 WARN Column: Constructing trivially true equals predicate, '1 = 1'. Perhaps you need to use aliases.
25/03/25 14:28:31 WARN Column: Constructing trivially true equals predicate, '3 = 3'. Perhaps you need to use aliases.
25/03/25 14:28:31 WARN Column: Constructing trivially true equals predicate, '1 = 1'. Perhaps you need to use aliases.
25/03/25 14:28:31 WARN Column: Constructing trivially true equals predicate, '3 = 3'. Perhaps you need to use aliases.
25/03/25 14:28:31 INFO ClusterLoadMonitor: Added query with execution ID:13. Current active queries:1
25/03/25 14:28:31 INFO AdaptiveParallelism: Updating parallelism using instant cluster load. Old parallelism: 8, Total cores: 8, Current load: 1, Current Avg load: 1, New parallelism: 8
25/03/25 14:28:31 INFO QueryAnalyzedPlanSizeLogger$: Total number of expressions in the analyzed plan: 20, computed in 0 ms.
25/03/25 14:28:31 INFO CodeGenerator: Code generated in 9.945135 ms
25/03/25 14:28:31 INFO DAGScheduler: Registering RDD 138 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:183) as input to shuffle 6
25/03/25 14:28:31 INFO DAGScheduler: Got map stage job 17 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:183) with 1 output partitions
25/03/25 14:28:31 INFO DAGScheduler: Final stage: ShuffleMapStage 23 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:183)
25/03/25 14:28:31 INFO DAGScheduler: Parents of final stage: List()
25/03/25 14:28:31 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:31 INFO DAGScheduler: Submitting ShuffleMapStage 23 (MapPartitionsRDD[138] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:183), which has no missing parents
25/03/25 14:28:31 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 23 (MapPartitionsRDD[138] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:183) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:31 INFO TaskSchedulerImpl: Adding task set 23.0 with 1 tasks resource profile 0
25/03/25 14:28:31 INFO TaskSetManager: TaskSet 23.0 using PreferredLocationsV1
25/03/25 14:28:31 INFO FairSchedulableBuilder: Added task set TaskSet_23.0 tasks to pool 1742912591363
25/03/25 14:28:31 INFO TaskSetManager: Starting task 0.0 in stage 23.0 (TID 17) (10.139.64.14, executor 0, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:31 INFO TaskSetManager: Finished task 0.0 in stage 23.0 (TID 17) in 180 ms on 10.139.64.14 (executor 0) (1/1)
25/03/25 14:28:31 INFO TaskSchedulerImpl: Removed TaskSet 23.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:31 INFO DAGScheduler: ShuffleMapStage 23 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:183) finished in 0.183 s
25/03/25 14:28:31 INFO DAGScheduler: looking for newly runnable stages
25/03/25 14:28:31 INFO DAGScheduler: running: Set()
25/03/25 14:28:31 INFO DAGScheduler: waiting: Set()
25/03/25 14:28:31 INFO DAGScheduler: failed: Set()
25/03/25 14:28:31 INFO ShufflePartitionsUtil: For shuffle(6, advisory target size: 67108864, actual target size 1048576, minimum partition size: 1048576
25/03/25 14:28:31 INFO CodeGenerator: Code generated in 11.391122 ms
25/03/25 14:28:31 INFO SparkContext: Starting job: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:183
25/03/25 14:28:31 INFO DAGScheduler: Got job 18 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:183) with 1 output partitions
25/03/25 14:28:31 INFO DAGScheduler: Final stage: ResultStage 25 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:183)
25/03/25 14:28:31 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 24)
25/03/25 14:28:31 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:31 INFO DAGScheduler: Submitting ResultStage 25 (MapPartitionsRDD[144] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:183), which has no missing parents
25/03/25 14:28:31 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 25 (MapPartitionsRDD[144] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:183) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:31 INFO TaskSchedulerImpl: Adding task set 25.0 with 1 tasks resource profile 0
25/03/25 14:28:31 INFO TaskSetManager: TaskSet 25.0 using PreferredLocationsV1
25/03/25 14:28:31 INFO FairSchedulableBuilder: Added task set TaskSet_25.0 tasks to pool 1742912591363
25/03/25 14:28:31 INFO TaskSetManager: Starting task 0.0 in stage 25.0 (TID 18) (10.139.64.15, executor 1, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:31 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 6 to 10.139.64.15:48762
25/03/25 14:28:31 INFO TaskSetManager: Finished task 0.0 in stage 25.0 (TID 18) in 66 ms on 10.139.64.15 (executor 1) (1/1)
25/03/25 14:28:31 INFO TaskSchedulerImpl: Removed TaskSet 25.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:31 INFO DAGScheduler: ResultStage 25 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:183) finished in 0.072 s
25/03/25 14:28:31 INFO DAGScheduler: Job 18 is finished. Cancelling potential speculative or zombie tasks for this job
25/03/25 14:28:31 INFO TaskSchedulerImpl: Cancelling stage 25
25/03/25 14:28:31 INFO TaskSchedulerImpl: Killing all running tasks in stage 25: Stage finished
25/03/25 14:28:31 INFO DAGScheduler: Job 18 finished: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:183, took 0.077871 s
25/03/25 14:28:31 INFO ClusterLoadMonitor: Removed query with execution ID:13. Current active queries:0
25/03/25 14:28:31 INFO SQLExecution:  0 QueryExecution(s) are running
25/03/25 14:28:31 INFO QueryProfileListener: Query profile sent to logger, seq number: 13, app id: app-20250325142329-0000
25/03/25 14:28:31 INFO ClusterLoadMonitor: Added query with execution ID:14. Current active queries:1
25/03/25 14:28:31 INFO AdaptiveParallelism: Updating parallelism using instant cluster load. Old parallelism: 8, Total cores: 8, Current load: 1, Current Avg load: 1, New parallelism: 8
25/03/25 14:28:31 INFO QueryAnalyzedPlanSizeLogger$: Total number of expressions in the analyzed plan: 31, computed in 0 ms.
25/03/25 14:28:31 INFO AbstractParser: EXPERIMENTAL: Query cached 0 DFA states in the parser. Total cached DFA states: 84.Driver memory: 8041005056.
25/03/25 14:28:31 INFO CodeGenerator: Code generated in 11.240139 ms
25/03/25 14:28:31 INFO DAGScheduler: Registering RDD 154 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:184) as input to shuffle 7
25/03/25 14:28:31 INFO DAGScheduler: Got map stage job 19 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:184) with 1 output partitions
25/03/25 14:28:31 INFO DAGScheduler: Final stage: ShuffleMapStage 26 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:184)
25/03/25 14:28:31 INFO DAGScheduler: Parents of final stage: List()
25/03/25 14:28:31 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:31 INFO DAGScheduler: Submitting ShuffleMapStage 26 (MapPartitionsRDD[154] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:184), which has no missing parents
25/03/25 14:28:31 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 26 (MapPartitionsRDD[154] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:184) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:31 INFO TaskSchedulerImpl: Adding task set 26.0 with 1 tasks resource profile 0
25/03/25 14:28:31 INFO TaskSetManager: TaskSet 26.0 using PreferredLocationsV1
25/03/25 14:28:31 INFO FairSchedulableBuilder: Added task set TaskSet_26.0 tasks to pool 1742912591363
25/03/25 14:28:31 INFO TaskSetManager: Starting task 0.0 in stage 26.0 (TID 19) (10.139.64.14, executor 0, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:31 INFO TaskSetManager: Finished task 0.0 in stage 26.0 (TID 19) in 104 ms on 10.139.64.14 (executor 0) (1/1)
25/03/25 14:28:31 INFO TaskSchedulerImpl: Removed TaskSet 26.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:31 INFO DAGScheduler: ShuffleMapStage 26 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:184) finished in 0.114 s
25/03/25 14:28:31 INFO DAGScheduler: looking for newly runnable stages
25/03/25 14:28:31 INFO DAGScheduler: running: Set()
25/03/25 14:28:31 INFO DAGScheduler: waiting: Set()
25/03/25 14:28:31 INFO DAGScheduler: failed: Set()
25/03/25 14:28:31 INFO ShufflePartitionsUtil: For shuffle(7, advisory target size: 67108864, actual target size 1048576, minimum partition size: 1048576
25/03/25 14:28:31 INFO CodeGenerator: Code generated in 18.982562 ms
25/03/25 14:28:31 INFO SparkContext: Starting job: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:184
25/03/25 14:28:31 INFO DAGScheduler: Got job 20 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:184) with 1 output partitions
25/03/25 14:28:31 INFO DAGScheduler: Final stage: ResultStage 28 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:184)
25/03/25 14:28:31 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 27)
25/03/25 14:28:31 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:31 INFO DAGScheduler: Submitting ResultStage 28 (MapPartitionsRDD[160] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:184), which has no missing parents
25/03/25 14:28:31 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 28 (MapPartitionsRDD[160] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:184) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:31 INFO TaskSchedulerImpl: Adding task set 28.0 with 1 tasks resource profile 0
25/03/25 14:28:31 INFO TaskSetManager: TaskSet 28.0 using PreferredLocationsV1
25/03/25 14:28:31 INFO FairSchedulableBuilder: Added task set TaskSet_28.0 tasks to pool 1742912591363
25/03/25 14:28:31 INFO TaskSetManager: Starting task 0.0 in stage 28.0 (TID 20) (10.139.64.15, executor 1, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:31 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 7 to 10.139.64.15:48762
25/03/25 14:28:32 INFO TaskSetManager: Finished task 0.0 in stage 28.0 (TID 20) in 83 ms on 10.139.64.15 (executor 1) (1/1)
25/03/25 14:28:32 INFO TaskSchedulerImpl: Removed TaskSet 28.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:32 INFO DAGScheduler: ResultStage 28 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:184) finished in 0.087 s
25/03/25 14:28:32 INFO DAGScheduler: Job 20 is finished. Cancelling potential speculative or zombie tasks for this job
25/03/25 14:28:32 INFO TaskSchedulerImpl: Cancelling stage 28
25/03/25 14:28:32 INFO TaskSchedulerImpl: Killing all running tasks in stage 28: Stage finished
25/03/25 14:28:32 INFO DAGScheduler: Job 20 finished: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:184, took 0.090040 s
25/03/25 14:28:32 INFO ClusterLoadMonitor: Removed query with execution ID:14. Current active queries:0
25/03/25 14:28:32 INFO SQLExecution:  0 QueryExecution(s) are running
25/03/25 14:28:32 INFO ClusterLoadMonitor: Added query with execution ID:15. Current active queries:1
25/03/25 14:28:32 INFO AdaptiveParallelism: Updating parallelism using instant cluster load. Old parallelism: 8, Total cores: 8, Current load: 1, Current Avg load: 1, New parallelism: 8
25/03/25 14:28:32 INFO QueryAnalyzedPlanSizeLogger$: Total number of expressions in the analyzed plan: 31, computed in 0 ms.
25/03/25 14:28:32 INFO QueryProfileListener: Query profile sent to logger, seq number: 14, app id: app-20250325142329-0000
25/03/25 14:28:32 INFO AbstractParser: EXPERIMENTAL: Query cached 0 DFA states in the parser. Total cached DFA states: 84.Driver memory: 8041005056.
25/03/25 14:28:32 INFO DAGScheduler: Registering RDD 170 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:185) as input to shuffle 8
25/03/25 14:28:32 INFO DAGScheduler: Got map stage job 21 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:185) with 1 output partitions
25/03/25 14:28:32 INFO DAGScheduler: Final stage: ShuffleMapStage 29 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:185)
25/03/25 14:28:32 INFO DAGScheduler: Parents of final stage: List()
25/03/25 14:28:32 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:32 INFO DAGScheduler: Submitting ShuffleMapStage 29 (MapPartitionsRDD[170] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:185), which has no missing parents
25/03/25 14:28:32 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 29 (MapPartitionsRDD[170] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:185) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:32 INFO TaskSchedulerImpl: Adding task set 29.0 with 1 tasks resource profile 0
25/03/25 14:28:32 INFO TaskSetManager: TaskSet 29.0 using PreferredLocationsV1
25/03/25 14:28:32 INFO FairSchedulableBuilder: Added task set TaskSet_29.0 tasks to pool 1742912591363
25/03/25 14:28:32 INFO TaskSetManager: Starting task 0.0 in stage 29.0 (TID 21) (10.139.64.14, executor 0, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:32 INFO TaskSetManager: Finished task 0.0 in stage 29.0 (TID 21) in 81 ms on 10.139.64.14 (executor 0) (1/1)
25/03/25 14:28:32 INFO TaskSchedulerImpl: Removed TaskSet 29.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:32 INFO DAGScheduler: ShuffleMapStage 29 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:185) finished in 0.088 s
25/03/25 14:28:32 INFO DAGScheduler: looking for newly runnable stages
25/03/25 14:28:32 INFO DAGScheduler: running: Set()
25/03/25 14:28:32 INFO DAGScheduler: waiting: Set()
25/03/25 14:28:32 INFO DAGScheduler: failed: Set()
25/03/25 14:28:32 INFO ShufflePartitionsUtil: For shuffle(8, advisory target size: 67108864, actual target size 1048576, minimum partition size: 1048576
25/03/25 14:28:32 INFO SparkContext: Starting job: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:185
25/03/25 14:28:32 INFO DAGScheduler: Got job 22 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:185) with 1 output partitions
25/03/25 14:28:32 INFO DAGScheduler: Final stage: ResultStage 31 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:185)
25/03/25 14:28:32 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 30)
25/03/25 14:28:32 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:32 INFO DAGScheduler: Submitting ResultStage 31 (MapPartitionsRDD[176] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:185), which has no missing parents
25/03/25 14:28:32 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 31 (MapPartitionsRDD[176] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:185) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:32 INFO TaskSchedulerImpl: Adding task set 31.0 with 1 tasks resource profile 0
25/03/25 14:28:32 INFO TaskSetManager: TaskSet 31.0 using PreferredLocationsV1
25/03/25 14:28:32 INFO FairSchedulableBuilder: Added task set TaskSet_31.0 tasks to pool 1742912591363
25/03/25 14:28:32 INFO TaskSetManager: Starting task 0.0 in stage 31.0 (TID 22) (10.139.64.14, executor 0, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:32 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 8 to 10.139.64.14:45590
25/03/25 14:28:32 INFO TaskSetManager: Finished task 0.0 in stage 31.0 (TID 22) in 30 ms on 10.139.64.14 (executor 0) (1/1)
25/03/25 14:28:32 INFO TaskSchedulerImpl: Removed TaskSet 31.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:32 INFO DAGScheduler: ResultStage 31 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:185) finished in 0.034 s
25/03/25 14:28:32 INFO DAGScheduler: Job 22 is finished. Cancelling potential speculative or zombie tasks for this job
25/03/25 14:28:32 INFO TaskSchedulerImpl: Cancelling stage 31
25/03/25 14:28:32 INFO TaskSchedulerImpl: Killing all running tasks in stage 31: Stage finished
25/03/25 14:28:32 INFO DAGScheduler: Job 22 finished: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:185, took 0.035570 s
25/03/25 14:28:32 INFO ClusterLoadMonitor: Removed query with execution ID:15. Current active queries:0
25/03/25 14:28:32 INFO SQLExecution:  0 QueryExecution(s) are running
25/03/25 14:28:32 INFO ClusterLoadAvgHelper: Current cluster load: 0, Old Ema: 1.0, New Ema: 0.85 
25/03/25 14:28:32 INFO QueryProfileListener: Query profile sent to logger, seq number: 15, app id: app-20250325142329-0000
25/03/25 14:28:32 INFO ClusterLoadMonitor: Added query with execution ID:16. Current active queries:1
25/03/25 14:28:32 INFO AdaptiveParallelism: Updating parallelism using instant cluster load. Old parallelism: 8, Total cores: 8, Current load: 1, Current Avg load: 0, New parallelism: 8
25/03/25 14:28:32 INFO QueryAnalyzedPlanSizeLogger$: Total number of expressions in the analyzed plan: 22, computed in 0 ms.
25/03/25 14:28:32 INFO DAGScheduler: Registering RDD 186 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:199) as input to shuffle 9
25/03/25 14:28:32 INFO DAGScheduler: Got map stage job 23 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:199) with 1 output partitions
25/03/25 14:28:32 INFO DAGScheduler: Final stage: ShuffleMapStage 32 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:199)
25/03/25 14:28:32 INFO DAGScheduler: Parents of final stage: List()
25/03/25 14:28:32 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:32 INFO DAGScheduler: Submitting ShuffleMapStage 32 (MapPartitionsRDD[186] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:199), which has no missing parents
25/03/25 14:28:32 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 32 (MapPartitionsRDD[186] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:199) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:32 INFO TaskSchedulerImpl: Adding task set 32.0 with 1 tasks resource profile 0
25/03/25 14:28:32 INFO TaskSetManager: TaskSet 32.0 using PreferredLocationsV1
25/03/25 14:28:32 INFO FairSchedulableBuilder: Added task set TaskSet_32.0 tasks to pool 1742912591363
25/03/25 14:28:32 INFO TaskSetManager: Starting task 0.0 in stage 32.0 (TID 23) (10.139.64.14, executor 0, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:32 INFO TaskSetManager: Finished task 0.0 in stage 32.0 (TID 23) in 91 ms on 10.139.64.14 (executor 0) (1/1)
25/03/25 14:28:32 INFO TaskSchedulerImpl: Removed TaskSet 32.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:32 INFO DAGScheduler: ShuffleMapStage 32 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:199) finished in 0.095 s
25/03/25 14:28:32 INFO DAGScheduler: looking for newly runnable stages
25/03/25 14:28:32 INFO DAGScheduler: running: Set()
25/03/25 14:28:32 INFO DAGScheduler: waiting: Set()
25/03/25 14:28:32 INFO DAGScheduler: failed: Set()
25/03/25 14:28:32 INFO ShufflePartitionsUtil: For shuffle(9, advisory target size: 67108864, actual target size 1048576, minimum partition size: 1048576
25/03/25 14:28:32 INFO CodeGenerator: Code generated in 10.339915 ms
25/03/25 14:28:32 INFO SparkContext: Starting job: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:199
25/03/25 14:28:32 INFO DAGScheduler: Got job 24 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:199) with 1 output partitions
25/03/25 14:28:32 INFO DAGScheduler: Final stage: ResultStage 34 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:199)
25/03/25 14:28:32 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 33)
25/03/25 14:28:32 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:32 INFO DAGScheduler: Submitting ResultStage 34 (MapPartitionsRDD[192] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:199), which has no missing parents
25/03/25 14:28:32 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 34 (MapPartitionsRDD[192] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:199) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:32 INFO TaskSchedulerImpl: Adding task set 34.0 with 1 tasks resource profile 0
25/03/25 14:28:32 INFO TaskSetManager: TaskSet 34.0 using PreferredLocationsV1
25/03/25 14:28:32 INFO FairSchedulableBuilder: Added task set TaskSet_34.0 tasks to pool 1742912591363
25/03/25 14:28:32 INFO TaskSetManager: Starting task 0.0 in stage 34.0 (TID 24) (10.139.64.15, executor 1, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:32 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 9 to 10.139.64.15:48762
25/03/25 14:28:32 INFO TaskSetManager: Finished task 0.0 in stage 34.0 (TID 24) in 61 ms on 10.139.64.15 (executor 1) (1/1)
25/03/25 14:28:32 INFO TaskSchedulerImpl: Removed TaskSet 34.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:32 INFO DAGScheduler: ResultStage 34 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:199) finished in 0.063 s
25/03/25 14:28:32 INFO DAGScheduler: Job 24 is finished. Cancelling potential speculative or zombie tasks for this job
25/03/25 14:28:32 INFO TaskSchedulerImpl: Cancelling stage 34
25/03/25 14:28:32 INFO TaskSchedulerImpl: Killing all running tasks in stage 34: Stage finished
25/03/25 14:28:32 INFO DAGScheduler: Job 24 finished: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:199, took 0.065552 s
25/03/25 14:28:32 INFO ClusterLoadMonitor: Removed query with execution ID:16. Current active queries:0
25/03/25 14:28:32 INFO SQLExecution:  0 QueryExecution(s) are running
25/03/25 14:28:32 INFO QueryProfileListener: Query profile sent to logger, seq number: 16, app id: app-20250325142329-0000
25/03/25 14:28:32 INFO ClusterLoadMonitor: Added query with execution ID:17. Current active queries:1
25/03/25 14:28:32 INFO AdaptiveParallelism: Updating parallelism using instant cluster load. Old parallelism: 8, Total cores: 8, Current load: 1, Current Avg load: 0, New parallelism: 8
25/03/25 14:28:32 INFO QueryAnalyzedPlanSizeLogger$: Total number of expressions in the analyzed plan: 474, computed in 0 ms.
25/03/25 14:28:33 INFO PropagateEmptyRelation: 
Removed leaf nodes:
- Relation [siso_flg#352,matching_sk#353,sys_source_system_sk#354,sys_modification_dts#355,client_nm#356,client_map_cd#357] JDBCRelation((select siso_flg, matching_sk, sys_source_system_sk, sys_modification_dts, convert(varchar(max), client_nm) COLLATE SQL_Latin1_General_Cp1251_CS_AS as client_nm, client_map_cd from mdm.client_map) table_data) [numPartitions=1]
- Relation [siso_flg#358,matching_sk#359,sys_source_system_sk#360,sys_modification_dts#361,client_nm#362,client_map_cd#363] JDBCRelation((select siso_flg, matching_sk, sys_source_system_sk, sys_modification_dts, convert(varchar(max), client_nm) COLLATE SQL_Latin1_General_Cp1251_CS_AS as client_nm, client_map_cd from mdm.client_map) table_data) [numPartitions=1]
25/03/25 14:28:33 INFO AbstractParser: EXPERIMENTAL: Query cached 0 DFA states in the parser. Total cached DFA states: 84.Driver memory: 8041005056.
25/03/25 14:28:33 INFO HashAggregateExec: spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of codegened fast hashmap does not support this aggregate.
25/03/25 14:28:33 INFO CodeGenerator: Code generated in 73.415088 ms
25/03/25 14:28:33 INFO CodeGenerator: Code generated in 12.690664 ms
25/03/25 14:28:33 INFO CodeGenerator: Code generated in 9.018731 ms
25/03/25 14:28:33 INFO CodeGenerator: Code generated in 13.146337 ms
25/03/25 14:28:33 INFO CodeGenerator: Code generated in 9.001449 ms
25/03/25 14:28:33 INFO DAGScheduler: Registering RDD 218 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) as input to shuffle 10
25/03/25 14:28:33 INFO DAGScheduler: Got map stage job 25 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) with 1 output partitions
25/03/25 14:28:33 INFO DAGScheduler: Final stage: ShuffleMapStage 35 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63)
25/03/25 14:28:33 INFO DAGScheduler: Parents of final stage: List()
25/03/25 14:28:33 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:33 INFO DAGScheduler: Submitting ShuffleMapStage 35 (MapPartitionsRDD[218] at $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63), which has no missing parents
25/03/25 14:28:33 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 35 (MapPartitionsRDD[218] at $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:33 INFO TaskSchedulerImpl: Adding task set 35.0 with 1 tasks resource profile 0
25/03/25 14:28:33 INFO TaskSetManager: TaskSet 35.0 using PreferredLocationsV1
25/03/25 14:28:33 INFO FairSchedulableBuilder: Added task set TaskSet_35.0 tasks to pool 1742912591363
25/03/25 14:28:33 INFO TaskSetManager: Starting task 0.0 in stage 35.0 (TID 25) (10.139.64.14, executor 0, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:35 INFO ClusterLoadAvgHelper: Current cluster load: 1, Old Ema: 0.85, New Ema: 1.0 
25/03/25 14:28:38 INFO ClusterLoadAvgHelper: Current cluster load: 1, Old Ema: 1.0, New Ema: 1.0 
25/03/25 14:28:40 WARN DynamicSparkConfContextImpl: Ignored update because id 1742912438364 < 1742912438364; source: CONFIG_FILE
25/03/25 14:28:40 INFO DatabricksILoop$: Received SAFEr configs with version 1742912438364
25/03/25 14:28:40 INFO HiveMetaStore: 1: get_database: default
25/03/25 14:28:40 INFO audit: ugi=root	ip=unknown-ip-addr	cmd=get_database: default	
25/03/25 14:28:40 INFO HiveMetaStore: 1: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
25/03/25 14:28:40 INFO ObjectStore: ObjectStore, initialize called
25/03/25 14:28:40 INFO ObjectStore: Initialized ObjectStore
25/03/25 14:28:40 INFO DriverCorral: Metastore health check ok
25/03/25 14:28:40 INFO DriverCorral: DBFS health check ok
25/03/25 14:28:41 INFO ClusterLoadAvgHelper: Current cluster load: 1, Old Ema: 1.0, New Ema: 1.0 
25/03/25 14:28:44 INFO ClusterLoadAvgHelper: Current cluster load: 1, Old Ema: 1.0, New Ema: 1.0 
25/03/25 14:28:46 INFO TaskSetManager: Finished task 0.0 in stage 35.0 (TID 25) in 12157 ms on 10.139.64.14 (executor 0) (1/1)
25/03/25 14:28:46 INFO TaskSchedulerImpl: Removed TaskSet 35.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:46 INFO DAGScheduler: ShuffleMapStage 35 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) finished in 12.187 s
25/03/25 14:28:46 INFO DAGScheduler: looking for newly runnable stages
25/03/25 14:28:46 INFO DAGScheduler: running: Set()
25/03/25 14:28:46 INFO DAGScheduler: waiting: Set()
25/03/25 14:28:46 INFO DAGScheduler: failed: Set()
25/03/25 14:28:46 INFO AQEPropagateEmptyRelation: 
Removed leaf nodes:
- LogicalQueryStage AggregatePart [siso_flg#382, matching_sk#383, sys_source_system_sk#384, sys_modification_dts#385, client_nm#386, client_map_cd#387, concat_match_col#96, siso_flg#388, matching_sk#389, sys_source_system_sk#390, sys_modification_dts#391, client_nm#392, client_map_cd#393, concat_match_col#148, 0.9999 AS 0.9999#445], false, ShuffleQueryStage 0, Statistics(sizeInBytes=0.0 B, rowCount=0, ColumnStat: N/A, isRuntime=true)
25/03/25 14:28:46 INFO AQEPropagateEmptyRelation: 
Removed leaf nodes:
- EmptyRelation AggregatePart [siso_flg#382, matching_sk#383, sys_source_system_sk#384, sys_modification_dts#385, client_nm#386, client_map_cd#387, concat_match_col#96, siso_flg#388, matching_sk#389, sys_source_system_sk#390, sys_modification_dts#391, client_nm#392, client_map_cd#393, concat_match_col#148, 0.9999#445], true
25/03/25 14:28:46 INFO AQEPropagateEmptyRelation: 
Removed leaf nodes:
- EmptyRelation Project
25/03/25 14:28:46 INFO AbstractParser: EXPERIMENTAL: Query cached 0 DFA states in the parser. Total cached DFA states: 84.Driver memory: 8041005056.
25/03/25 14:28:46 INFO CodeGenerator: Code generated in 12.551364 ms
25/03/25 14:28:46 INFO CodeGenerator: Code generated in 7.493256 ms
25/03/25 14:28:46 INFO CodeGenerator: Code generated in 6.785261 ms
25/03/25 14:28:46 INFO CodeGenerator: Code generated in 7.942397 ms
25/03/25 14:28:46 INFO CodeGenerator: Code generated in 6.979624 ms
25/03/25 14:28:46 INFO DAGScheduler: Registering RDD 238 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) as input to shuffle 11
25/03/25 14:28:46 INFO DAGScheduler: Got map stage job 26 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) with 1 output partitions
25/03/25 14:28:46 INFO DAGScheduler: Final stage: ShuffleMapStage 36 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63)
25/03/25 14:28:46 INFO DAGScheduler: Parents of final stage: List()
25/03/25 14:28:46 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:46 INFO DAGScheduler: Submitting ShuffleMapStage 36 (MapPartitionsRDD[238] at $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63), which has no missing parents
25/03/25 14:28:46 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 36 (MapPartitionsRDD[238] at $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:46 INFO TaskSchedulerImpl: Adding task set 36.0 with 1 tasks resource profile 0
25/03/25 14:28:46 INFO TaskSetManager: TaskSet 36.0 using PreferredLocationsV1
25/03/25 14:28:46 INFO FairSchedulableBuilder: Added task set TaskSet_36.0 tasks to pool 1742912591363
25/03/25 14:28:46 INFO TaskSetManager: Starting task 0.0 in stage 36.0 (TID 26) (10.139.64.15, executor 1, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:47 INFO ClusterLoadAvgHelper: Current cluster load: 1, Old Ema: 1.0, New Ema: 1.0 
25/03/25 14:28:50 INFO ClusterLoadAvgHelper: Current cluster load: 1, Old Ema: 1.0, New Ema: 1.0 
25/03/25 14:28:53 INFO ClusterLoadAvgHelper: Current cluster load: 1, Old Ema: 1.0, New Ema: 1.0 
25/03/25 14:28:56 INFO ClusterLoadAvgHelper: Current cluster load: 1, Old Ema: 1.0, New Ema: 1.0 
25/03/25 14:28:56 INFO TaskSetManager: Finished task 0.0 in stage 36.0 (TID 26) in 10674 ms on 10.139.64.15 (executor 1) (1/1)
25/03/25 14:28:56 INFO TaskSchedulerImpl: Removed TaskSet 36.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:56 INFO DAGScheduler: ShuffleMapStage 36 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) finished in 10.704 s
25/03/25 14:28:56 INFO DAGScheduler: looking for newly runnable stages
25/03/25 14:28:56 INFO DAGScheduler: running: Set()
25/03/25 14:28:56 INFO DAGScheduler: waiting: Set()
25/03/25 14:28:56 INFO DAGScheduler: failed: Set()
25/03/25 14:28:56 INFO CodeGenerator: Code generated in 9.658898 ms
25/03/25 14:28:56 INFO SparkContext: Starting job: $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63
25/03/25 14:28:56 INFO DAGScheduler: Got job 27 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) with 1 output partitions
25/03/25 14:28:56 INFO DAGScheduler: Final stage: ResultStage 38 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63)
25/03/25 14:28:56 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 37)
25/03/25 14:28:56 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:56 INFO DAGScheduler: Submitting ResultStage 38 (MapPartitionsRDD[240] at $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63), which has no missing parents
25/03/25 14:28:56 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 38 (MapPartitionsRDD[240] at $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:56 INFO TaskSchedulerImpl: Adding task set 38.0 with 1 tasks resource profile 0
25/03/25 14:28:56 INFO TaskSetManager: TaskSet 38.0 using PreferredLocationsV1
25/03/25 14:28:56 INFO FairSchedulableBuilder: Added task set TaskSet_38.0 tasks to pool 1742912591363
25/03/25 14:28:56 INFO TaskSetManager: Starting task 0.0 in stage 38.0 (TID 27) (10.139.64.15, executor 1, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:56 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 11 to 10.139.64.15:48762
25/03/25 14:28:56 INFO TaskSetManager: Finished task 0.0 in stage 38.0 (TID 27) in 65 ms on 10.139.64.15 (executor 1) (1/1)
25/03/25 14:28:56 INFO TaskSchedulerImpl: Removed TaskSet 38.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:56 INFO DAGScheduler: ResultStage 38 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) finished in 0.075 s
25/03/25 14:28:56 INFO DAGScheduler: Job 27 is finished. Cancelling potential speculative or zombie tasks for this job
25/03/25 14:28:56 INFO TaskSchedulerImpl: Cancelling stage 38
25/03/25 14:28:56 INFO TaskSchedulerImpl: Killing all running tasks in stage 38: Stage finished
25/03/25 14:28:56 INFO DAGScheduler: Job 27 finished: $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63, took 0.082066 s
25/03/25 14:28:57 INFO ClusterLoadMonitor: Removed query with execution ID:17. Current active queries:0
25/03/25 14:28:57 INFO SQLExecution:  0 QueryExecution(s) are running
25/03/25 14:28:57 INFO ClusterLoadMonitor: Added query with execution ID:18. Current active queries:1
25/03/25 14:28:57 INFO AdaptiveParallelism: Updating parallelism using instant cluster load. Old parallelism: 8, Total cores: 8, Current load: 1, Current Avg load: 1, New parallelism: 8
25/03/25 14:28:57 INFO QueryAnalyzedPlanSizeLogger$: Total number of expressions in the analyzed plan: 50, computed in 0 ms.
25/03/25 14:28:57 INFO AbstractParser: EXPERIMENTAL: Query cached 0 DFA states in the parser. Total cached DFA states: 84.Driver memory: 8041005056.
25/03/25 14:28:57 INFO CodeGenerator: Code generated in 13.043882 ms
25/03/25 14:28:57 INFO DAGScheduler: Registering RDD 251 (mapPartitionsInternal at PhotonExec.scala:541) as input to shuffle 12
25/03/25 14:28:57 INFO DAGScheduler: Got map stage job 28 (submitMapStage at PhotonExec.scala:553) with 1 output partitions
25/03/25 14:28:57 INFO DAGScheduler: Final stage: ShuffleMapStage 39 (mapPartitionsInternal at PhotonExec.scala:541)
25/03/25 14:28:57 INFO DAGScheduler: Parents of final stage: List()
25/03/25 14:28:57 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:57 INFO DAGScheduler: Submitting ShuffleMapStage 39 (MapPartitionsRDD[251] at mapPartitionsInternal at PhotonExec.scala:541), which has no missing parents
25/03/25 14:28:57 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 39 (MapPartitionsRDD[251] at mapPartitionsInternal at PhotonExec.scala:541) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:57 INFO TaskSchedulerImpl: Adding task set 39.0 with 1 tasks resource profile 0
25/03/25 14:28:57 INFO TaskSetManager: TaskSet 39.0 using PreferredLocationsV1
25/03/25 14:28:57 INFO FairSchedulableBuilder: Added task set TaskSet_39.0 tasks to pool 1742912591363
25/03/25 14:28:57 INFO TaskSetManager: Starting task 0.0 in stage 39.0 (TID 28) (10.139.64.15, executor 1, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:57 INFO QueryProfileListener: Query profile sent to logger, seq number: 17, app id: app-20250325142329-0000
25/03/25 14:28:57 INFO TaskSetManager: Finished task 0.0 in stage 39.0 (TID 28) in 266 ms on 10.139.64.15 (executor 1) (1/1)
25/03/25 14:28:57 INFO TaskSchedulerImpl: Removed TaskSet 39.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:57 INFO DAGScheduler: ShuffleMapStage 39 (mapPartitionsInternal at PhotonExec.scala:541) finished in 0.271 s
25/03/25 14:28:57 INFO DAGScheduler: looking for newly runnable stages
25/03/25 14:28:57 INFO DAGScheduler: running: Set()
25/03/25 14:28:57 INFO DAGScheduler: waiting: Set()
25/03/25 14:28:57 INFO DAGScheduler: failed: Set()
25/03/25 14:28:57 INFO CodeGenerator: Code generated in 7.992699 ms
25/03/25 14:28:57 INFO SparkContext: Starting job: $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63
25/03/25 14:28:57 INFO DAGScheduler: Got job 29 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) with 1 output partitions
25/03/25 14:28:57 INFO DAGScheduler: Final stage: ResultStage 41 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63)
25/03/25 14:28:57 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 40)
25/03/25 14:28:57 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:57 INFO DAGScheduler: Submitting ResultStage 41 (MapPartitionsRDD[257] at $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63), which has no missing parents
25/03/25 14:28:57 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 41 (MapPartitionsRDD[257] at $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:57 INFO TaskSchedulerImpl: Adding task set 41.0 with 1 tasks resource profile 0
25/03/25 14:28:57 INFO TaskSetManager: TaskSet 41.0 using PreferredLocationsV1
25/03/25 14:28:57 INFO FairSchedulableBuilder: Added task set TaskSet_41.0 tasks to pool 1742912591363
25/03/25 14:28:57 INFO TaskSetManager: Starting task 0.0 in stage 41.0 (TID 29) (10.139.64.14, executor 0, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:57 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 12 to 10.139.64.14:45590
25/03/25 14:28:57 INFO TaskSetManager: Finished task 0.0 in stage 41.0 (TID 29) in 26 ms on 10.139.64.14 (executor 0) (1/1)
25/03/25 14:28:57 INFO TaskSchedulerImpl: Removed TaskSet 41.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:57 INFO DAGScheduler: ResultStage 41 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) finished in 0.033 s
25/03/25 14:28:57 INFO DAGScheduler: Job 29 is finished. Cancelling potential speculative or zombie tasks for this job
25/03/25 14:28:57 INFO TaskSchedulerImpl: Cancelling stage 41
25/03/25 14:28:57 INFO TaskSchedulerImpl: Killing all running tasks in stage 41: Stage finished
25/03/25 14:28:57 INFO DAGScheduler: Job 29 finished: $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63, took 0.035947 s
25/03/25 14:28:57 INFO ClusterLoadMonitor: Removed query with execution ID:18. Current active queries:0
25/03/25 14:28:57 INFO SQLExecution:  0 QueryExecution(s) are running
25/03/25 14:28:57 INFO QueryProfileListener: Query profile sent to logger, seq number: 18, app id: app-20250325142329-0000
25/03/25 14:28:57 INFO ClusterLoadMonitor: Added query with execution ID:19. Current active queries:1
25/03/25 14:28:57 INFO AdaptiveParallelism: Updating parallelism using instant cluster load. Old parallelism: 8, Total cores: 8, Current load: 1, Current Avg load: 1, New parallelism: 8
25/03/25 14:28:57 INFO QueryAnalyzedPlanSizeLogger$: Total number of expressions in the analyzed plan: 38, computed in 0 ms.
25/03/25 14:28:57 INFO AbstractParser: EXPERIMENTAL: Query cached 0 DFA states in the parser. Total cached DFA states: 84.Driver memory: 8041005056.
25/03/25 14:28:57 INFO CodeGenerator: Code generated in 10.715023 ms
25/03/25 14:28:57 INFO DAGScheduler: Registering RDD 268 (mapPartitionsInternal at PhotonExec.scala:541) as input to shuffle 13
25/03/25 14:28:57 INFO DAGScheduler: Got map stage job 30 (submitMapStage at PhotonExec.scala:553) with 1 output partitions
25/03/25 14:28:57 INFO DAGScheduler: Final stage: ShuffleMapStage 42 (mapPartitionsInternal at PhotonExec.scala:541)
25/03/25 14:28:57 INFO DAGScheduler: Parents of final stage: List()
25/03/25 14:28:57 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:57 INFO DAGScheduler: Submitting ShuffleMapStage 42 (MapPartitionsRDD[268] at mapPartitionsInternal at PhotonExec.scala:541), which has no missing parents
25/03/25 14:28:57 INFO DAGScheduler: Submitting 1 missing tasks from ShuffleMapStage 42 (MapPartitionsRDD[268] at mapPartitionsInternal at PhotonExec.scala:541) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:57 INFO TaskSchedulerImpl: Adding task set 42.0 with 1 tasks resource profile 0
25/03/25 14:28:57 INFO TaskSetManager: TaskSet 42.0 using PreferredLocationsV1
25/03/25 14:28:57 INFO FairSchedulableBuilder: Added task set TaskSet_42.0 tasks to pool 1742912591363
25/03/25 14:28:57 INFO TaskSetManager: Starting task 0.0 in stage 42.0 (TID 30) (10.139.64.14, executor 0, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:57 INFO TaskSetManager: Finished task 0.0 in stage 42.0 (TID 30) in 231 ms on 10.139.64.14 (executor 0) (1/1)
25/03/25 14:28:57 INFO TaskSchedulerImpl: Removed TaskSet 42.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:57 INFO DAGScheduler: ShuffleMapStage 42 (mapPartitionsInternal at PhotonExec.scala:541) finished in 0.236 s
25/03/25 14:28:57 INFO DAGScheduler: looking for newly runnable stages
25/03/25 14:28:57 INFO DAGScheduler: running: Set()
25/03/25 14:28:57 INFO DAGScheduler: waiting: Set()
25/03/25 14:28:57 INFO DAGScheduler: failed: Set()
25/03/25 14:28:57 INFO SparkContext: Starting job: $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63
25/03/25 14:28:57 INFO DAGScheduler: Got job 31 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) with 1 output partitions
25/03/25 14:28:57 INFO DAGScheduler: Final stage: ResultStage 44 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63)
25/03/25 14:28:57 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 43)
25/03/25 14:28:57 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:57 INFO DAGScheduler: Submitting ResultStage 44 (MapPartitionsRDD[274] at $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63), which has no missing parents
25/03/25 14:28:57 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 44 (MapPartitionsRDD[274] at $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:57 INFO TaskSchedulerImpl: Adding task set 44.0 with 1 tasks resource profile 0
25/03/25 14:28:57 INFO TaskSetManager: TaskSet 44.0 using PreferredLocationsV1
25/03/25 14:28:57 INFO FairSchedulableBuilder: Added task set TaskSet_44.0 tasks to pool 1742912591363
25/03/25 14:28:57 INFO TaskSetManager: Starting task 0.0 in stage 44.0 (TID 31) (10.139.64.15, executor 1, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:57 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 13 to 10.139.64.15:48762
25/03/25 14:28:57 INFO TaskSetManager: Finished task 0.0 in stage 44.0 (TID 31) in 44 ms on 10.139.64.15 (executor 1) (1/1)
25/03/25 14:28:57 INFO TaskSchedulerImpl: Removed TaskSet 44.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:57 INFO DAGScheduler: ResultStage 44 ($anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63) finished in 0.051 s
25/03/25 14:28:57 INFO DAGScheduler: Job 31 is finished. Cancelling potential speculative or zombie tasks for this job
25/03/25 14:28:57 INFO TaskSchedulerImpl: Cancelling stage 44
25/03/25 14:28:57 INFO TaskSchedulerImpl: Killing all running tasks in stage 44: Stage finished
25/03/25 14:28:57 INFO DAGScheduler: Job 31 finished: $anonfun$withThreadLocalCaptured$7 at LexicalThreadLocal.scala:63, took 0.053417 s
25/03/25 14:28:57 INFO ClusterLoadMonitor: Removed query with execution ID:19. Current active queries:0
25/03/25 14:28:57 INFO SQLExecution:  0 QueryExecution(s) are running
25/03/25 14:28:58 INFO QueryProfileListener: Query profile sent to logger, seq number: 19, app id: app-20250325142329-0000
25/03/25 14:28:58 INFO ClusterLoadMonitor: Added query with execution ID:20. Current active queries:1
25/03/25 14:28:58 INFO AdaptiveParallelism: Updating parallelism using instant cluster load. Old parallelism: 8, Total cores: 8, Current load: 1, Current Avg load: 1, New parallelism: 8
25/03/25 14:28:58 INFO QueryAnalyzedPlanSizeLogger$: Total number of expressions in the analyzed plan: 46, computed in 0 ms.
25/03/25 14:28:58 INFO AbstractParser: EXPERIMENTAL: Query cached 0 DFA states in the parser. Total cached DFA states: 84.Driver memory: 8041005056.
25/03/25 14:28:58 INFO BaseAllocator: Debug mode disabled. Enable with the VM option -Darrow.memory.debug.allocator=true.
25/03/25 14:28:58 INFO DefaultAllocationManagerOption: allocation manager type not specified, using netty as the default type
25/03/25 14:28:58 INFO CheckAllocator: Using DefaultAllocationManager at memory-netty--org.apache.arrow__arrow-memory-netty__15.0.0.jar!/org/apache/arrow/memory/DefaultAllocationManagerFactory.class
25/03/25 14:28:58 INFO SparkContext: Starting job: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:233
25/03/25 14:28:58 INFO DAGScheduler: Got job 32 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:233) with 1 output partitions
25/03/25 14:28:58 INFO DAGScheduler: Final stage: ResultStage 45 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:233)
25/03/25 14:28:58 INFO DAGScheduler: Parents of final stage: List()
25/03/25 14:28:58 INFO DAGScheduler: Missing parents: List()
25/03/25 14:28:58 INFO DAGScheduler: Submitting ResultStage 45 (MapPartitionsRDD[283] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:233), which has no missing parents
25/03/25 14:28:58 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 45 (MapPartitionsRDD[283] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:233) (first 15 tasks are for partitions Vector(0))
25/03/25 14:28:58 INFO TaskSchedulerImpl: Adding task set 45.0 with 1 tasks resource profile 0
25/03/25 14:28:58 INFO TaskSetManager: TaskSet 45.0 using PreferredLocationsV1
25/03/25 14:28:58 INFO FairSchedulableBuilder: Added task set TaskSet_45.0 tasks to pool 1742912591363
25/03/25 14:28:58 INFO TaskSetManager: Starting task 0.0 in stage 45.0 (TID 32) (10.139.64.15, executor 1, partition 0, PROCESS_LOCAL, 
25/03/25 14:28:59 INFO ClusterLoadAvgHelper: Current cluster load: 1, Old Ema: 1.0, New Ema: 1.0 
25/03/25 14:28:59 INFO TaskSetManager: Finished task 0.0 in stage 45.0 (TID 32) in 1569 ms on 10.139.64.15 (executor 1) (1/1)
25/03/25 14:28:59 INFO TaskSchedulerImpl: Removed TaskSet 45.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:28:59 INFO DAGScheduler: ResultStage 45 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:233) finished in 1.581 s
25/03/25 14:28:59 INFO DAGScheduler: Job 32 is finished. Cancelling potential speculative or zombie tasks for this job
25/03/25 14:28:59 INFO TaskSchedulerImpl: Cancelling stage 45
25/03/25 14:28:59 INFO TaskSchedulerImpl: Killing all running tasks in stage 45: Stage finished
25/03/25 14:28:59 INFO DAGScheduler: Job 32 finished: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:233, took 1.594110 s
25/03/25 14:28:59 INFO ClusterLoadMonitor: Removed query with execution ID:20. Current active queries:0
25/03/25 14:28:59 INFO SQLExecution:  0 QueryExecution(s) are running
25/03/25 14:28:59 INFO QueryProfileListener: Query profile sent to logger, seq number: 20, app id: app-20250325142329-0000
25/03/25 14:28:59 INFO ClusterLoadMonitor: Added query with execution ID:21. Current active queries:1
25/03/25 14:28:59 INFO AdaptiveParallelism: Updating parallelism using instant cluster load. Old parallelism: 8, Total cores: 8, Current load: 1, Current Avg load: 1, New parallelism: 8
25/03/25 14:28:59 INFO QueryAnalyzedPlanSizeLogger$: Total number of expressions in the analyzed plan: 34, computed in 0 ms.
25/03/25 14:28:59 INFO AbstractParser: EXPERIMENTAL: Query cached 0 DFA states in the parser. Total cached DFA states: 84.Driver memory: 8041005056.
25/03/25 14:29:00 INFO SparkContext: Starting job: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:234
25/03/25 14:29:00 INFO DAGScheduler: Got job 33 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:234) with 1 output partitions
25/03/25 14:29:00 INFO DAGScheduler: Final stage: ResultStage 46 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:234)
25/03/25 14:29:00 INFO DAGScheduler: Parents of final stage: List()
25/03/25 14:29:00 INFO DAGScheduler: Missing parents: List()
25/03/25 14:29:00 INFO DAGScheduler: Submitting ResultStage 46 (MapPartitionsRDD[292] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:234), which has no missing parents
25/03/25 14:29:00 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 46 (MapPartitionsRDD[292] at wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:234) (first 15 tasks are for partitions Vector(0))
25/03/25 14:29:00 INFO TaskSchedulerImpl: Adding task set 46.0 with 1 tasks resource profile 0
25/03/25 14:29:00 INFO TaskSetManager: TaskSet 46.0 using PreferredLocationsV1
25/03/25 14:29:00 INFO FairSchedulableBuilder: Added task set TaskSet_46.0 tasks to pool 1742912591363
25/03/25 14:29:00 INFO TaskSetManager: Starting task 0.0 in stage 46.0 (TID 33) (10.139.64.15, executor 1, partition 0, PROCESS_LOCAL, 
25/03/25 14:29:00 INFO BlockManagerInfo: Added taskresult_33 in memory on 10.139.64.15:34809 (size: 3.3 MiB, free: 7.5 GiB)
25/03/25 14:29:00 INFO TransportClientFactory: Successfully created connection to /10.139.64.15:34809 after 5 ms (0 ms spent in bootstraps)
25/03/25 14:29:00 INFO TaskSetManager: Finished task 0.0 in stage 46.0 (TID 33) in 454 ms on 10.139.64.15 (executor 1) (1/1)
25/03/25 14:29:00 INFO TaskSchedulerImpl: Removed TaskSet 46.0, whose tasks have all completed, from pool 1742912591363
25/03/25 14:29:00 INFO DAGScheduler: ResultStage 46 (wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:234) finished in 0.464 s
25/03/25 14:29:00 INFO DAGScheduler: Job 33 is finished. Cancelling potential speculative or zombie tasks for this job
25/03/25 14:29:00 INFO TaskSchedulerImpl: Cancelling stage 46
25/03/25 14:29:00 INFO TaskSchedulerImpl: Killing all running tasks in stage 46: Stage finished
25/03/25 14:29:00 INFO DAGScheduler: Job 33 finished: wrapper at /root/.ipykernel/2754/command-1005462614165857-4139693859:234, took 0.468706 s
25/03/25 14:29:00 INFO ClusterLoadMonitor: Removed query with execution ID:21. Current active queries:0
25/03/25 14:29:00 INFO SQLExecution:  0 QueryExecution(s) are running
25/03/25 14:29:00 INFO BlockManagerInfo: Removed taskresult_33 on 10.139.64.15:34809 in memory (size: 3.3 MiB, free: 7.5 GiB)
25/03/25 14:29:00 INFO QueryProfileListener: Query profile sent to logger, seq number: 21, app id: app-20250325142329-0000
25/03/25 14:29:01 INFO ProgressReporter$: Removed result fetcher for 1742912591363_5574157718735884632_8fd7594d28474f7694181cce1425129b
25/03/25 14:29:01 INFO ProgressReporter$: Added result fetcher for 1742912591363_4649240957407157915_8fd7594d28474f7694181cce1425129b
25/03/25 14:29:02 INFO ProgressReporter$: Removed result fetcher for 1742912591363_4649240957407157915_8fd7594d28474f7694181cce1425129b
25/03/25 14:29:02 INFO ProgressReporter$: Added result fetcher for 1742912591363_5916673610379477826_8fd7594d28474f7694181cce1425129b
25/03/25 14:29:02 INFO PresignedUrlClientUtils$: FS_OP_CREATE FILE[https://dbstorageq6yipvwteyt4u.blob.core.windows.net/jobs/871544486122877/command-results/1005462614165857/d5aa6550-6ea4-45a6-8f7d-a3174dd52774] Presigned URL: Started uploading stream using AzureSasUri
25/03/25 14:29:02 INFO PresignedUrlClientUtils$: FS_OP_CREATE FILE[https://dbstorageq6yipvwteyt4u.blob.core.windows.net/jobs/871544486122877/command-results/1005462614165857/d5aa6550-6ea4-45a6-8f7d-a3174dd52774] executeHttpRequest finished with status code 201; numRetries = 0 upload_type=AzureSasUri operation_id=PresignedUrlUpload-8357fcdd6b920a74 method=PUT
25/03/25 14:29:02 INFO PresignedUrlClientUtils$: FS_OP_CREATE FILE[https://dbstorageq6yipvwteyt4u.blob.core.windows.net/jobs/871544486122877/command-results/1005462614165857/d5aa6550-6ea4-45a6-8f7d-a3174dd52774] Presigned URL: Successfully uploaded stream using AzureSasUri
25/03/25 14:29:02 INFO ProgressReporter$: Removed result fetcher for 1742912591363_5916673610379477826_8fd7594d28474f7694181cce1425129b
25/03/25 14:29:02 INFO ProgressReporter$: Added result fetcher for 1742912591363_8552899029414702691_8fd7594d28474f7694181cce1425129b
25/03/25 14:29:02 INFO ProgressReporter$: Removed result fetcher for 1742912591363_8552899029414702691_8fd7594d28474f7694181cce1425129b
25/03/25 14:29:02 INFO ProgressReporter$: Added result fetcher for 1742912591363_7085886965375441773_8fd7594d28474f7694181cce1425129b
25/03/25 14:29:02 INFO ProgressReporter$: Removed result fetcher for 1742912591363_7085886965375441773_8fd7594d28474f7694181cce1425129b
25/03/25 14:29:02 INFO ProgressReporter$: Added result fetcher for 1742912591363_7926774835763575489_8fd7594d28474f7694181cce1425129b
25/03/25 14:29:02 INFO ClusterLoadAvgHelper: Current cluster load: 0, Old Ema: 1.0, New Ema: 0.85 
25/03/25 14:29:02 INFO ProgressReporter$: Removed result fetcher for 1742912591363_7926774835763575489_8fd7594d28474f7694181cce1425129b
25/03/25 14:29:02 INFO ProgressReporter$: Added result fetcher for 1742912591363_6345120645098231367_8fd7594d28474f7694181cce1425129b
25/03/25 14:29:02 INFO ProgressReporter$: Removed result fetcher for 1742912591363_6345120645098231367_8fd7594d28474f7694181cce1425129b
25/03/25 14:29:05 INFO ClusterLoadAvgHelper: Current cluster load: 0, Old Ema: 0.85, New Ema: 0.0 
25/03/25 14:29:06 INFO ProgressReporter$: Added result fetcher for 1742912591363_5700713935565287307_5dcba0f12c4c4339b0364fbe858b4314
25/03/25 14:29:40 WARN DynamicSparkConfContextImpl: Ignored update because id 1742912438364 < 1742912438364; source: CONFIG_FILE
25/03/25 14:29:40 INFO DatabricksILoop$: Received SAFEr configs with version 1742912438364
25/03/25 14:30:13 INFO PythonDriverLocalBase$RedirectThread: Python RedirectThread exit
25/03/25 14:30:13 INFO PythonDriverLocalBase$RedirectThread: Python RedirectThread exit
25/03/25 14:30:13 INFO ReplCrashUtils$: python shell exit code: 137; replId: ReplId-195cd-af960-3, pid: 2754
25/03/25 14:30:13 INFO ReplCrashUtils$: strace is not enabled. To turn it on, set the Spark conf `spark.databricks.driver.strace.enabled` to true.
25/03/25 14:30:14 INFO MlflowAutologEventPublisher$: Subscriber with repl ID ReplId-195cd-af960-3 not responding to health checks, removing it
25/03/25 14:30:14 INFO ProgressReporter$: Removed result fetcher for 1742912591363_5700713935565287307_5dcba0f12c4c4339b0364fbe858b4314
25/03/25 14:30:13 INFO PythonDriverWrapper: Repl ReplInfo(driverReplId=ReplId-195cd-af960-3, chauffeurReplId=ReplId-195cd-af960-3, executionContextId=Some(ExecutionContextId(7744348879525460294)), lazyInfoInitialized=true) got an exception during execution
com.databricks.backend.common.rpc.SparkDriverExceptions$ReplStateException
	at com.databricks.backend.daemon.driver.JupyterKernelListener.waitForExecution(JupyterKernelListener.scala:1268)
	at com.databricks.backend.daemon.driver.JupyterKernelListener.executeCommand(JupyterKernelListener.scala:1314)
	at com.databricks.backend.daemon.driver.JupyterDriverLocal.executePython(JupyterDriverLocal.scala:1164)
	at com.databricks.backend.daemon.driver.JupyterDriverLocal.repl(JupyterDriverLocal.scala:1036)
	at com.databricks.backend.daemon.driver.DriverLocal.$anonfun$execute$33(DriverLocal.scala:1172)
	at com.databricks.unity.EmptyHandle$.runWith(UCSHandle.scala:133)
	at com.databricks.backend.daemon.driver.DriverLocal.$anonfun$execute$28(DriverLocal.scala:1163)
	at com.databricks.logging.AttributionContextTracing.$anonfun$withAttributionContext$1(AttributionContextTracing.scala:48)
	at com.databricks.logging.AttributionContext$.$anonfun$withValue$1(AttributionContext.scala:276)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
	at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:272)
	at com.databricks.logging.AttributionContextTracing.withAttributionContext(AttributionContextTracing.scala:46)
	at com.databricks.logging.AttributionContextTracing.withAttributionContext$(AttributionContextTracing.scala:43)
	at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:96)
	at com.databricks.logging.AttributionContextTracing.withAttributionTags(AttributionContextTracing.scala:95)
	at com.databricks.logging.AttributionContextTracing.withAttributionTags$(AttributionContextTracing.scala:76)
	at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:96)
	at com.databricks.backend.daemon.driver.DriverLocal.$anonfun$execute$1(DriverLocal.scala:1099)
	at com.databricks.backend.daemon.driver.DriverLocal$.$anonfun$maybeSynchronizeExecution$4(DriverLocal.scala:1519)
	at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:776)
	at com.databricks.backend.daemon.driver.DriverWrapper.$anonfun$tryExecutingCommand$2(DriverWrapper.scala:961)
	at scala.util.Try$.apply(Try.scala:213)
	at com.databricks.backend.daemon.driver.DriverWrapper.$anonfun$tryExecutingCommand$1(DriverWrapper.scala:950)
	at com.databricks.backend.daemon.driver.DriverWrapper.$anonfun$tryExecutingCommand$3(DriverWrapper.scala:996)
	at com.databricks.logging.UsageLogging.executeThunkAndCaptureResultTags$1(UsageLogging.scala:633)
	at com.databricks.logging.UsageLogging.$anonfun$recordOperationWithResultTags$4(UsageLogging.scala:656)
	at com.databricks.logging.AttributionContextTracing.$anonfun$withAttributionContext$1(AttributionContextTracing.scala:48)
	at com.databricks.logging.AttributionContext$.$anonfun$withValue$1(AttributionContext.scala:276)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
	at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:272)
	at com.databricks.logging.AttributionContextTracing.withAttributionContext(AttributionContextTracing.scala:46)
	at com.databricks.logging.AttributionContextTracing.withAttributionContext$(AttributionContextTracing.scala:43)
	at com.databricks.backend.daemon.driver.DriverWrapper.withAttributionContext(DriverWrapper.scala:75)
	at com.databricks.logging.AttributionContextTracing.withAttributionTags(AttributionContextTracing.scala:95)
	at com.databricks.logging.AttributionContextTracing.withAttributionTags$(AttributionContextTracing.scala:76)
	at com.databricks.backend.daemon.driver.DriverWrapper.withAttributionTags(DriverWrapper.scala:75)
	at com.databricks.logging.UsageLogging.recordOperationWithResultTags(UsageLogging.scala:628)
	at com.databricks.logging.UsageLogging.recordOperationWithResultTags$(UsageLogging.scala:537)
	at com.databricks.backend.daemon.driver.DriverWrapper.recordOperationWithResultTags(DriverWrapper.scala:75)
	at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:996)
	at com.databricks.backend.daemon.driver.DriverWrapper.executeCommandAndGetError(DriverWrapper.scala:746)
	at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:814)
	at com.databricks.backend.daemon.driver.DriverWrapper.$anonfun$runInnerLoop$1(DriverWrapper.scala:619)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at com.databricks.logging.AttributionContextTracing.$anonfun$withAttributionContext$1(AttributionContextTracing.scala:48)
	at com.databricks.logging.AttributionContext$.$anonfun$withValue$1(AttributionContext.scala:276)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
	at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:272)
	at com.databricks.logging.AttributionContextTracing.withAttributionContext(AttributionContextTracing.scala:46)
	at com.databricks.logging.AttributionContextTracing.withAttributionContext$(AttributionContextTracing.scala:43)
	at com.databricks.backend.daemon.driver.DriverWrapper.withAttributionContext(DriverWrapper.scala:75)
	at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:619)
	at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:541)
	at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:335)
	at java.lang.Thread.run(Thread.java:750)
25/03/25 14:30:14 INFO JupyterDriverLocal: restart JupyterDriverLocal repl ReplId-195cd-af960-3
25/03/25 14:30:14 ERROR WsfsHttpClient: Failed to get pid namespace id for 2754 with error java.nio.file.NoSuchFileException: /proc/2754/ns/pid
25/03/25 14:30:15 INFO JupyterDriverLocal: Starting gateway server for repl ReplId-195cd-af960-3
25/03/25 14:30:15 INFO PythonPy4JUtil: Using pinned thread mode in Py4J
25/03/25 14:30:15 INFO IpykernelUtils$: Python process builder: [bash, /local_disk0/.ephemeral_nfs/envs/pythonEnv-766b98c1-b1ee-476e-bf62-11f252eb65b8/python_start_notebook_scoped.sh, /databricks/spark/python/pyspark/wrapped_python.py, root, /local_disk0/.ephemeral_nfs/envs/pythonEnv-766b98c1-b1ee-476e-bf62-11f252eb65b8/bin/python, /databricks/python_shell/scripts/db_ipykernel_launcher.py, -f, /databricks/kernel-connections/0cb049fad654e7ea769f82ba10e370fadfbfa707a2ded14d155e525dc0e276b5.json]
25/03/25 14:30:15 INFO IpykernelUtils$: Cgroup isolation disabled, not placing python process with ReplId=ReplId-195cd-af960-3 in repl cgroup
25/03/25 14:30:20 INFO DAGScheduler: Asked to cancel job group 1742912591363_5700713935565287307_5dcba0f12c4c4339b0364fbe858b4314 with cancelFutureJobs=false
25/03/25 14:30:20 INFO JupyterDriverLocal: cancelled jobGroup:1742912591363_5700713935565287307_5dcba0f12c4c4339b0364fbe858b4314 
25/03/25 14:30:20 WARN DAGScheduler: Failed to cancel job group 1742912591363_5700713935565287307_5dcba0f12c4c4339b0364fbe858b4314. Cannot find active jobs for it.
25/03/25 14:30:20 WARN JupyterKernelListener: Received Jupyter debug message with unknown command: null
25/03/25 14:30:40 WARN DynamicSparkConfContextImpl: Ignored update because id 1742912438364 < 1742912438364; source: CONFIG_FILE
25/03/25 14:30:40 INFO DatabricksILoop$: Received SAFEr configs with version 1742912438364
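
The key line in the log is `python shell exit code: 137`, which conventionally means the Python process was killed with SIGKILL — on Linux this is most often the kernel's OOM killer reclaiming memory, which would match Databricks reporting the kernel as "unresponsive". As a sanity check that the call itself is well-formed, here is a minimal sketch of the same `fit`/`kneighbors` pattern on a much smaller synthetic CSR matrix (the sizes and density below are illustrative stand-ins, not the reporter's actual 38506x53709 data):

```python
import numpy as np
from scipy import sparse
from sklearn.neighbors import NearestNeighbors

# Small synthetic stand-in for the reported 38506x53709 CSR matrix
rng_seed = 0
X = sparse.random(1000, 2000, density=0.001, format="csr", random_state=rng_seed)

# Same call as in the report; with a sparse input and metric='cosine',
# scikit-learn falls back to the brute-force algorithm
nbrs = NearestNeighbors(n_neighbors=1, metric="cosine").fit(X)

# The heavy memory work happens at query time, when pairwise
# cosine distances are computed (in chunks) between query and fit data
dist, idx = nbrs.kneighbors(X)
print(dist.shape, idx.shape)  # (1000, 1) (1000, 1)
```

If this small case runs but the real matrix dies at the same point, that supports a memory-pressure explanation (e.g. less free RAM on the driver than before) rather than a code or data change.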

@fabienarnaud (Author)

This is what I executed:

[screenshot of the executed command attached]
