COMP Superscalar Framework ChangeLog
Release number: 1.1.1
Release date: 1-Oct-2013
-------------------------------
This is the first public release of the COMP Superscalar Framework.
Release number: 1.1.2
Release date: 5-Jun-2014
-------------------------------
* C/C++ binding.
* Python binding.
* Integrated Development Environment for COMPSs applications (IDE)
* Priority tasks.
* New tracing system using the Extrae tool.
* Deleting a file within an OE removes all the replicas in the system.
* Updated the SSH Trilead adaptor libraries to remove unnecessary sleeps.
* Scripts for submission to queue systems (LSF, PBS, Slurm).
* Configuration of application directory and library path in project XML file.
* Separate logs for resubmitted / rescheduled tasks.
* Create a COMPSs sandbox in the workers instead of JavaGAT's.
Release number: 1.2
Release date: Nov-2014
-------------------------------
* N implementations for task methods, each with its own constraints.
* Constraint-aware resource management.
* Support for multicore tasks.
* Pluggable schedulers: facilitate the addition of new schedulers and policies.
* Extended support for objects in C/C++ binding.
* Extended IDE for N implementations and deployment through PMES.
* Update cloud connector for rOCCI to work with rocci-cli v4.2.5.
* Enhance rOCCI connector to compute the real VM creation times.
* Extended resources schema to support Virtual Appliances pricing.
* New LSF GAT adaptor.
* Deprecated Azure and EMOTIVE Cloud connectors.
* Deprecated Azure GAT adaptor.
Release number: 1.3
Release date: Nov-2015
-------------------------------
New features:
* Runtime:
- Persistent workers: workers can be deployed on computing nodes and persist for the whole application lifetime, reducing runtime overhead. The previous per-task-process worker implementation is still supported.
- Enhanced logging system
- Interoperable communication layer: different inter-node communication protocols can be supported by implementing the Adaptor interface (JavaGAT and NIO implementations are already included)
- Simplified cloud connectors interface
- JClouds connector
* Python:
- Added constraints support (see the example sketch below)
- Enhanced methods support
- Lists accepted as a task parameter type
- Support for user decorators
* Tools:
- New monitoring tool with new views, such as the workload view, and the possibility of visualizing information about previous runs
- Enhanced Tracing mechanism
* Simplified execution scripts
* Simplified installation on Supercomputers
Known Limitations:
* Exceptions raised from tasks are not handled by the master
* Java tasks must be declared as public
* Java objects MUST be serializable or, at least, follow the Java Beans model
* Support limited to SOAP based services
* Persistent Workers do NOT isolate task executions in a sandbox
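Example (illustrative sketch, not part of the original release notes): a Python task using the constraints support listed above. The decorator module paths and the "computing_units" constraint key follow later PyCOMPSs documentation and are assumptions for this release.
    from pycompss.api.task import task
    from pycompss.api.constraint import constraint
    from pycompss.api.api import compss_wait_on

    @constraint(computing_units="2")       # assumed constraint key name
    @task(returns=1)
    def increment_all(values):             # lists are accepted as task parameters
        return [v + 1 for v in values]

    if __name__ == "__main__":
        future = increment_all(list(range(10)))   # submitted asynchronously as a task
        print(compss_wait_on(future))             # synchronize the result in the master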
Release number: 1.4
Release date: April-2016
-------------------------------
New features:
* Runtime:
- Support for Docker added
- Support for Chameleon added
- Object cache for persistent workers
- Improved error management
- Connector for submitting tasks to MN supercomputer from external COMPSs applications added
- Bug-fixes
* Python:
- Bug-fixes
* Tools:
- Enhanced Tracing mechanism:
· Reduced overhead using the native Java API
· Support for communications instrumentation added
· Support for PAPI hardware counters added
Known Limitations:
* When executing Python applications with constraints in the cloud, the initial number of VMs must be set to 0
Release number: 2.0 Amapola (Poppy)
Release date: November-2016
-------------------------------
New features:
* Runtime:
- Upgrade to Java 8
- Support for remote input files (input files already at the workers)
- Integration with Persistent Objects
- Elasticity with Docker and Mesos
- Multi-processor support (CPUs, GPUs, FPGAs)
- Dynamic constraints with environment variables
- Scheduling taking into account the full task graph (not only ready tasks)
- Support for SLURM clusters
- Initial COMPSs/OmpSs integration
- Replicated tasks: Tasks executed in all the workers
- Explicit barrier (see the example sketch below)
* Python:
- Python user events and HW counters tracing
- Improved PyCOMPSs serialization. Added support for lambda and generator parameters.
* C:
- Constraints support
* Tools:
- Improved current graph visualization on COMPSs Monitor
Improvements:
- Simplified Resource and Project files (NOT backward compatible)
- Improved binding worker execution (uses pipes instead of Java ProcessBuilders)
- Simplified cluster job scripts and supercomputer configuration
- Several bug fixes
Known Limitations:
* When executing Python applications with constraints in the cloud, the initial number of VMs must be set to 0
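Example (illustrative sketch, not part of the original release notes): using the explicit barrier listed above from the Python binding. The compss_barrier call and its module path follow later PyCOMPSs documentation and are assumptions for this release.
    from pycompss.api.task import task
    from pycompss.api.api import compss_barrier

    @task()
    def work(block_id):
        print("processing block", block_id)

    if __name__ == "__main__":
        for i in range(4):
            work(i)            # tasks are submitted asynchronously
        compss_barrier()       # wait here until all previously submitted tasks have finished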
Release number: 2.1 Bougainvillea
Release date: June-2017
-------------------------------
New features:
* Runtime:
- New annotations to simplify tasks that call external binaries (see the example sketch below)
- Integration with other programming models (MPI, OmpSs,..)
- Support for Singularity containers in Clusters
- Extension of the scheduling to support multi-node tasks (MPI apps as tasks)
- Support for Grid Engine job scheduler in clusters
- Language flag automatically inferred in runcompss script
- New schedulers based on tasks’ generation order
- Core affinity and over-subscribing thread management in multi-core cluster queue scripts (used with MKL libraries, for example)
* Python:
- @local annotation to support simpler data synchronizations in the master (requires installing guppy)
- Support for args and kwargs parameters as task dependencies
- Task versioning support in Python (multiple behaviors of the same task)
- New Python persistent workers that reduce overhead of Python tasks
- Support for task-thread affinity
- Tracing extended to support Python user events and HW counters (with known issues)
* C:
- Extension of file management API (compss_fopen, compss_ifstream, compss_ofstream, compss_delete_file)
- Support for task-thread affinity
* Tools:
- Visualization of not-yet-running tasks in the current graph of the COMPSs Monitor
Improvements:
- Improved PyCOMPSs serialization
- Improvements in cluster job scripts and supercomputers configuration
- Several bug fixes
Known Limitations:
- When executing Python applications with constraints in the cloud, the <InitialVMs> property must be set to 0
- Tasks that invoke NumPy and MKL may experience issues if different tasks use different numbers of MKL threads. This is because MKL reuses threads across calls and does not change the number of threads from one call to another.
For further information, please refer to the “COMPSs User Manual: Application development guide”.
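Example (illustrative sketch, not part of the original release notes): one way the new annotations for external binaries can be used from the Python binding. The @binary decorator, its module path and the mapping of task parameters to command-line arguments follow later PyCOMPSs documentation and are assumptions for this release.
    from pycompss.api.binary import binary
    from pycompss.api.task import task
    from pycompss.api.parameter import FILE_OUT

    @binary(binary="touch")            # external command executed as the task
    @task(out_file=FILE_OUT)
    def create_marker(out_file):
        pass                           # empty body: "touch <out_file>" runs on the worker

    if __name__ == "__main__":
        create_marker("marker.txt")    # hypothetical output file name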
Release number: 2.2 Camellia
Release date: November-2017
-------------------------------
New features:
* Runtime:
- Support for Elasticity in SLURM-managed clusters
- Support for Elasticity with Singularity containers
- Integration of Decaf flows as COMPSs tasks
- Changed integratedtoolkit packages to es.bsc.compss (requires changes in Java application code)
* Python:
- Support for Decaf applications as tasks
- External decorators (MPI, Binary, Decaf, etc.) extended with streams and prefixes support
- Added support for applications that use the argparse library
- Added support for dictionary unrolling on task call (see the example sketch below)
* C:
- Persistent worker in C-binding (enabled with persistent_worker_c=true)
- Inter-task object cache
- Support for object methods as tasks
- Added support for applications with threads in the master code
Improvements:
- Integration with Jupyter-notebook improved
- Improved cleanup: removal of unused files
- Several bug fixes
Known Limitations:
- Tasks that invoke NumPy and MKL may experience issues if different tasks use different numbers of MKL threads. This is because MKL reuses threads across calls and does not change the number of threads from one call to another.
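Example (illustrative sketch, not part of the original release notes): the dictionary unrolling on task calls listed above, using only standard Python **kwargs unpacking on a PyCOMPSs task.
    from pycompss.api.task import task
    from pycompss.api.api import compss_wait_on

    @task(returns=1)
    def configure(width, height):
        return width * height

    if __name__ == "__main__":
        params = {"width": 3, "height": 4}
        area = configure(**params)          # dictionary unrolled on the task call
        print(compss_wait_on(area))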
Release number: 2.3 Daisy
Release date: June-2018
-------------------------------
New features:
* Runtime:
- Persistent storage API implementation based on Redis (distributed as default implementation with COMPSs)
- Support for FPGA constraints and reconfiguration scripts
- Support for PBS Job Scheduler and the Archer Supercomputer
* Java:
- New API call to delete objects in order to reduce application memory usage
* Python:
- Support for Python 3
- Support for Python virtual environments (venv)
- Support for running PyCOMPSs as a Python module
- Support for tasks returning multiple elements with returns=# (see the example sketch below)
- Automatic import of dummy PyCOMPSs API
* C:
- Persistent worker with Memory-to-memory transfers
- Support for arrays (no serialization required)
Improvements:
- Distribution with docker images
- Support for sharing objects in memory between tasks (no file serialization is required now with persistent workers)
- Source Code and example applications distribution on Github
- Automatic inference of task return
- Improved obsolete object cleanup
- Improved tracing support for applications using persistent memory
- Improved finalization process to reduce zombie processes
- Several bug fixes
Known Limitations:
- Tasks that invoke NumPy and MKL may experience issues if different tasks use different numbers of MKL threads. This is because MKL reuses threads across calls and does not change the number of threads from one call to another.
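Example (illustrative sketch, not part of the original release notes): a task returning multiple elements with returns=#, as listed above. Module paths follow later PyCOMPSs documentation.
    from pycompss.api.task import task
    from pycompss.api.api import compss_wait_on

    @task(returns=2)                        # the task is declared to return two elements
    def div_mod(a, b):
        return a // b, a % b

    if __name__ == "__main__":
        q, r = div_mod(17, 5)               # two future objects
        q = compss_wait_on(q)
        r = compss_wait_on(r)
        print(q, r)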
Release number: 2.4 Elderflower
Release date: November-2018
-------------------------------
New features:
* Runtime:
- New supercomputers supported: Power9 (OpenPower) and ThunderX (ARM 64)
* Python:
- Autoparallel Module to automatically taskify affine loop nests
- Support for Python notebook execution in Supercomputers
- Distributed Data Set library that eases development of PyCOMPSs applications by distributing data, and/or providing most common data operations such as map, filter, reduce, etc.
* C:
- Multi-architecture and Cross-compiling build support
Improvements:
- New example applications distributed on Github
- Reduced overhead of the C binding
- Task sandbox reuse to reduce execution overheads
- Script to clean COMPSs zombie processes
- Several bug fixes
Known Limitations:
- Tasks that invoke NumPy and MKL may experience issues if different tasks use different numbers of MKL threads. This is because MKL reuses threads across calls and does not change the number of threads from one call to another.
- C++ objects declared as arguments in coarse-grain tasks must be passed to the task methods as object pointers in order to have proper dependency management.
Release number: 2.5 Freesia
Release date: June-2019
-------------------------------
New features:
* Runtime:
- New task property "targetDirection" to indicate the direction of the target object in object methods. It replaces the "isModifier" task property.
- New Concurrent direction type for task parameter.
- Multi-node task support for native (Java, Python) tasks. Previously, multi-node tasks were only possible with @mpi or @decaf tasks.
- @Compss decorator for executing COMPSs applications as tasks.
- New runtime API to synchronize files without opening them.
- Customizable task failure management with the "onFailure" task property.
- Enabled master node to execute tasks.
* Python:
- Partial support for Numba in tasks.
- Support for collections as task parameters (see the example sketch below).
- Warnings for deprecated or incorrect task parameters.
- Supported task inheritance.
- New persistent MPI worker mode (alternative to subprocess).
- Support for ARM MAP and DDT tools (with the MPI worker mode).
* C:
- Support for tasks without parameters and applications without a src folder.
Improvements:
- Improvements in Jupyter for Supercomputers.
- Upgrade of runcompss_docker script to docker stack interface.
- Several bug fixes.
Known Limitations:
- Tasks that invoke NumPy and MKL may experience issues if different tasks use different numbers of MKL threads. This is because MKL reuses threads across calls and does not change the number of threads from one call to another.
- C++ objects declared as arguments in coarse-grain tasks must be passed to the task methods as object pointers in order to have proper dependency management.
- Master as worker is not working for executions with persistent worker in C++.
- Coherence and concurrent writing in parameters annotated with the "Concurrent" direction must be managed by the underlying distributed storage system.
- Delete file calls for files used as input can produce a significant synchronization of the main code.
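Example (illustrative sketch, not part of the original release notes): a collection used as a task parameter, as listed above. The COLLECTION_IN constant and its module path follow later PyCOMPSs documentation and are assumptions for this release.
    from pycompss.api.task import task
    from pycompss.api.parameter import COLLECTION_IN
    from pycompss.api.api import compss_wait_on

    @task(returns=1, values=COLLECTION_IN)   # declares "values" as a collection parameter
    def total(values):
        return sum(values)

    if __name__ == "__main__":
        print(compss_wait_on(total([1, 2, 3, 4])))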
Release number: 2.6 Gardenia
Release date: November-2019
-------------------------------
New features:
* Runtime:
- New Commutative direction type for task parameters. It indicates that the order of modifications done by tasks of the same type to the parameter does not affect the final result, i.e., tasks operating on a given commutative parameter can be executed in any order with respect to each other.
- New "Stream" parameter type, to enable the combination of data-flows and task-based workflows in the same application. The stream parameter type is defined to enable communication of streamed data between tasks.
- Timeout property for tasks. Tasks lasting more than their timeout will be cancelled and considered failed. This property can be combined with the "onFailure" mechanism.
- Enable the definition of task groups.
- Support for throwing special exceptions (COMPSsExceptions) in tasks and catching them in task groups (see the example sketch below).
- Task cancellation management upon a task group exception or an unexpected application finalization (exception or non-zero exit code)
* Python:
- Enable the declaration of a list of strings as a file collection task parameter.
- Support for Python MPI tasks
* C:
- Support for tasks with fine-grain parallelization with OmpSs-2 programming model.
Improvements:
- New multi-threaded ready scheduler with better scalability.
- Support for tasks with the "isReplicated" property and parameters with INOUT/OUT direction.
- Optimization in the deletion of Python objects to avoid large synchronizations in shared file systems.
- Improved the AutoParallel submodule to define data blocks as collection types.
- Several Bug fixes
Known Limitations:
- Tasks that invoke NumPy and MKL may experience issues if different tasks use different numbers of MKL threads. This is because MKL reuses threads across calls and does not change the number of threads from one call to another.
- C++ Objects declared as arguments in coarse-grain tasks must be passed as object pointers in order to have proper dependency management.
- Master as worker is not working for executions with persistent worker in C++.
- Coherence and concurrent writing in parameters annotated with the "Concurrent" direction must be managed by the underlying distributed storage system.
- Delete file calls for files used as input can produce a significant synchronization of the main code.
- Defining a parameter as OUT is only allowed for files and collection files.
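Example (illustrative sketch, not part of the original release notes): raising a COMPSsException inside a task and catching it around a task group, as listed above. The TaskGroup and COMPSsException import paths and the fact that the exception surfaces at the end of the group follow later PyCOMPSs documentation and are assumptions for this release.
    from pycompss.api.task import task
    from pycompss.api.api import TaskGroup
    from pycompss.api.exceptions import COMPSsException

    @task()
    def check(value):
        if value < 0:
            raise COMPSsException("negative value found")

    if __name__ == "__main__":
        try:
            with TaskGroup("validation"):     # implicit barrier at the end of the group
                for v in (3, -1, 7):
                    check(v)
        except COMPSsException:
            print("a task in the group raised a COMPSsException")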
Release number: 2.7 Hyacinth
Release date: June-2020
-------------------------------
New features:
* Runtime:
- New trace events and cfg to see task constraints.
- New task constraint option for storage bandwidth.
- Directories as task dependency type.
- New IO tasks which can be overlapped with computational tasks.
- New "weight" parameter property to prioritise data in data location schedulers.
- New "keep rename" parameter property to enable/disable conversion to original names at worker.
- New API call to check if a file exists or it is going to be generated by a task.
- Support for MPI+OpenMP hybrid tasks.
* Python:
- Support for Python type hinting.
- Support for executing Jupyter notebooks in myBinder.
- PyCOMPSs player: a container-based environment to easily install and use PyCOMPSs/COMPSs.
* DDS-2:
- Combination and execution of multiple operations within a single task.
- New methods and optimizations in DDS class.
Improvements:
- Change in MPI task syntax: the ComputingNodes property has been changed to processes (see the example sketch below).
- New flags in enqueue_compss to provide a port range to deploy workers (--worker_port_range), generate a core dump (--gen_coredump) and allow JMX connections for JVM profiling (--jmx_port)
- Support for collections with direction OUT.
- Support for short labels in traces (requires the latest Paraver version for correct visualization)
- Avoid failure in compss_wait_on_file if the file has not been previously used by a task
- Support for tasks with the "isReplicated" property in shared file systems.
- Optimizations in data synchronizations in shared file systems.
- Support for TCS as job scheduler in clusters
- Configuration files for Salomon and Starlife clusters
- Several Bug fixes
Known Limitations:
- Tasks that invoke NumPy and MKL may experience issues if different tasks use different numbers of MKL threads. This is because MKL reuses threads across calls and does not change the number of threads from one call to another. This can also happen with other libraries implemented with OpenMP.
- C++ Objects declared as arguments in coarse-grain tasks must be passed as object pointers in order to have proper dependency management.
- Master as worker is not working for executions with persistent worker in C++.
- Coherence and concurrent writing in parameters annotated with the "Concurrent" direction must be managed by the underlying distributed storage system.
- Delete file calls for files used as input can produce a significant synchronization of the main code.
- Defining a parameter as OUT is only allowed for files and collection files.
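Example (illustrative sketch, not part of the original release notes): an MPI binary task using the new "processes" keyword mentioned above. The @mpi decorator keywords follow later PyCOMPSs documentation, and "my_mpi_app" is a hypothetical binary name.
    from pycompss.api.mpi import mpi
    from pycompss.api.task import task

    @mpi(binary="my_mpi_app", runner="mpirun", processes=4)   # "processes" replaces the old ComputingNodes
    @task()
    def run_simulation():
        pass        # empty body: the runtime launches the binary with the given MPI runner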
Release number: 2.8 (Iris)
Release date: November-2020
-------------------------------
New features:
* Runtime:
- New @Container task annotation to allow the execution of a task inside a container (see the example sketch below).
- New IN_DELETE parameter direction type for one-use parameters. It deletes the element after task execution.
- Support for reductions.
- Data layout for collections in MPI tasks. It allows grouping the elements of a collection into MPI processes.
- New CPU affinity and runtime events for better performance analysis.
* Java:
- CPU Affinity in Java tasks.
* Python:
- Support for python dictionaries as collection parameters.
- Dummy implementations for the @binary and @mpi tasks to allow testing with sequential executions.
- Allow tracing events in the master user code.
- New python binding tracing events.
* C++:
- CPU Affinity in tasks executed with persistent executors.
* DDS:
- New methods and optimizations in DDS class.
Improvements:
- Change locality calculation from scheduling to location update.
- Some runtime file system operations removed from task execution critical path.
- Improvements in CPU-task executor affinity. Each executor tries to reuse its previous affinity.
- Support for Collection INOUT/OUT in Persistent storage executions.
- Improvements in tracing cfgs.
- Improvements of storage events in traces.
- Improvements in enqueue_compss and supercomputer cfg file semantics for shared and local disks.
- Added an NVRAM mode flag in enqueue_compss to allow the initialization of nodes with a specific NVRAM mode. Requires support from the cluster's resource manager.
- Configuration files for Irene and A64FX-based systems.
- Several Bug fixes.
Known Limitations:
- Objects used as task parameters must be serializable.
- Tasks that invoke NumPy and MKL may experience issues if different tasks use different numbers of MKL threads. This is because MKL reuses threads across calls and does not change the number of threads from one call to another. This can also happen with other libraries implemented with OpenMP.
- C++ Objects declared as arguments in coarse-grain tasks must be passed as object pointers in order to have proper dependency management.
- Master as worker is not working for executions with persistent worker in C++.
- Coherence and concurrent writing in parameters annotated with the "Concurrent" direction must be managed by the underlying distributed storage system.
- Delete file calls for files used as input can produce a significant synchronization of the main code.
- Defining a parameter as OUT is only allowed for files and collections of objects with a default constructor.
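Example (illustrative sketch, not part of the original release notes): a task executed inside a container, as enabled by the @Container annotation listed above. The Python module path and the "engine"/"image" keyword names follow later PyCOMPSs documentation and are assumptions for this release.
    from pycompss.api.container import container
    from pycompss.api.task import task

    @container(engine="DOCKER", image="ubuntu:20.04")   # keyword names are assumptions
    @task()
    def containerized_step():
        print("this task body runs inside the container image")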
Release number: 2.9 (Jasmine)
Release date: June-2021
-------------------------------
New features
* Runtime:
- Support for nested tasks in agents deployment
- New application timeout functionality to enable a controlled finalization of applications before the wall_clock_limit
- New flags to simplify application debugging (--keep_workingdir, --gen_coredump)
- Support for loading the application environment from scripts
- Support for tracing in agents environment.
- Partial support for OSX systems.
* Python:
- Support for Optional parameters and default values (see the example sketch below)
- Enable monitoring of task status from Jupyter notebooks.
- Enable memory profiling.
- Python worker cache.
- Support for dynamically overriding the number of returns at task invocation.
* DDS:
- New methods and optimizations in DDS class.
Improvements:
- Enabled passing extra flags to the queue system from enqueue_compss.
- Support for multiple data layouts in MPI tasks.
- Improvements in the tracing system: more events and cfgs.
- Enabled a flag to change the Extrae configuration file for Python processes.
- Configuration files for Barbora system.
- Several Bug fixes.
Known Limitations:
- OSX support is limited to Java and Python 2 without CPU affinity (requires executing with --cpu_affinity=disable)
- Reduce operations can consume more disk space than the manually programmed n-ary reduction
- Objects used as task parameters must be serializable.
- Tasks that invoke NumPy and MKL may experience issues if different tasks use different numbers of MKL threads. This is because MKL reuses threads across calls and does not change the number of threads from one call to another. This can also happen with other libraries implemented with OpenMP.
- C++ Objects declared as arguments in coarse-grain tasks must be passed as object pointers in order to have proper dependency management.
- Master as worker feature is not working for executions with persistent worker in C++.
- Coherence and concurrent writing in parameters annotated with the "Concurrent" direction must be managed by the underlying distributed storage system.
- Delete file calls for files used as input can produce a significant synchronization of the main code.
- Defining a parameter as OUT is only allowed for files and collections of objects with a default constructor.
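Example (illustrative sketch, not part of the original release notes): a Python task with a default parameter value, as listed above.
    from pycompss.api.task import task
    from pycompss.api.api import compss_wait_on

    @task(returns=1)
    def scale(value, factor=2):     # the default value is used when the argument is omitted
        return value * factor

    if __name__ == "__main__":
        print(compss_wait_on(scale(10)))       # uses factor=2
        print(compss_wait_on(scale(10, 5)))    # overrides the default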
Release number: 2.10 (Kumquat)
Release date: November-2021
-------------------------------
New features
* Runtime:
- Support for http requests (@http decorator).
- Support for Java versions higher than 8.
- Enable grouping processes in MPI tasks with the processes_per_node flag.
- Partial support for OSX systems
* Python:
- PyArrow object serialization support
- Cache profiling enabled
Improvements:
- Fixes in managing directory parameters.
- Improvements in asynchronous file system operations.
- Improvements visualizing collections in task dependency graphs.
- Improvements and fixes in the support for different MPI versions (COMPSS_MPIRUN_TYPE)
- Improvements in tracing system. Fixes in events and cfgs
- Configuration files for Karolina, Mahti and CTE-AMD system.
- Several Bug fixes.
Known Limitations:
- Collections are not supported in http tasks
- OSX support is limited to Java and Python without CPU affinity (requires executing with --cpu_affinity=disable). We have also detected issues when several Python 3 versions are installed in the system. Tracing is not available.
- Reduce operations can consume more disk space than the manually programmed n-ary reduction
- Objects used as task parameters must be serializable.
- Tasks that invoke NumPy and MKL may experience issues if different tasks use different numbers of MKL threads. This is because MKL reuses threads across calls and does not change the number of threads from one call to another. This can also happen with other libraries implemented with OpenMP.
- C++ Objects declared as arguments in coarse-grain tasks must be passed as object pointers in order to have proper dependency management.
- Master as worker feature is not working for executions with persistent worker in C++.
- Coherence and concurrent writing in parameters annotated with the "Concurrent" direction must be managed by the underlying distributed storage system.
- Delete file calls for files used as input can produce a significant synchronization of the main code.
- Defining a parameter as OUT is only allowed for files and collections of objects with a default constructor.
Release number: 3.0 (Lavender)
Release date: June-2022
-------------------------------
New features
- CLI to unify the execution of applications in different environments.
- Automatic creation of RO-Crate descriptions from workflow executions.
- Transparent task-based checkpointing support.
- Support for MPMD MPI applications as tasks.
- Support for task epilog and prolog.
- Generic support for reusable descriptions of external software executions inside a COMPSs task (@Software); see the example sketch below.
- Mypy compilation of the Python binding.
- Integration with DLB DROM for improving affinity in OpenMP tasks.
- RISC-V 64-bit support.
Deprecated Features:
- Python 2 support
- Autoparallel module (requires Python 2)
- SOAP Service tasks
Improvements:
- wait_on and wait_on_file API homogenization.
- Improvements in the support for task nesting.
- Improvements in memory profiling reports.
- Improvements in the tracing system: offline tracing generation and support for changes of working directory.
- Configuration files for Nord3v2 and LaPalma system.
- Several Bug fixes.
Known Limitations:
- Issues when using tracing with Java 14+
- Collections are not supported in http tasks
- OSX support is limited to Java and Python without CPU affinity (requires executing with --cpu_affinity=disable). We have also detected issues when several Python 3 versions are installed in the system. Tracing is not available.
- Reduce operations can consume more disk space than the manually programmed n-ary reduction
- Objects used as task parameters must be serializable.
- Tasks that invoke NumPy and MKL may experience issues if different tasks use different numbers of MKL threads. This is because MKL reuses threads across calls and does not change the number of threads from one call to another. This can also happen with other libraries implemented with OpenMP. To fix these issues, use the DLB option in the cpu_affinity flag.
- C++ Objects declared as arguments in coarse-grain tasks must be passed as object pointers in order to have proper dependency management.
- Master as worker feature is not working for executions with persistent worker in C++.
- Coherence and concurrent writing in parameters annotated with the "Concurrent" direction must be managed by the underlying distributed storage system.
- Delete file calls for files used as input can produce a significant synchronization of the main code.
- Defining a parameter as OUT is only allowed for files and collections of objects with a default constructor.
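Example (illustrative sketch, not part of the original release notes): a task described through the @Software decorator listed above. The Python module path and the "config_file" keyword follow later PyCOMPSs documentation, and "simulator.json" is a hypothetical configuration file.
    from pycompss.api.software import software
    from pycompss.api.task import task

    @software(config_file="simulator.json")   # hypothetical JSON describing the external software
    @task()
    def run_simulator():
        pass        # the execution described in the JSON configuration is launched as the task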
Release number: 3.1 (Margarita)
Release date: November-2022
-------------------------------
New features
- Support for Julia applications as tasks
- Support for uDocker for container tasks.
- New decorator to apply a transformation to a parameter (@data_transformation)
- Automatic creation of Data Provenance information from Java applications.
- Constraint to force execution in the local agent. (is_local=True)
- Extended external software description (JSON file in Software decorator) to allow the definition of task parameters.
- Enable the specification of master working directory (location where data serializations are stored).
Improvements:
- Data Provenance: enhanced addition of source files using directories
- Fixed issues when calling wait_on on cancelled data: the latest version is now retrieved.
- Replaced distutils with setuptools in the Python binding installation.
- Improvements in the management of tasks returning a modified parameter (a=task(a)).
- The Container decorator allows defining flags to be passed to the container execution commands.
- Improvements in the support for agents and task nesting.
- Improvements in pluggable schedulers.
- Improvements in python cache.
- Improvements in the tracing system: fixed assigned-GPU events and included Python cache events.
- Fixed issues in collection graph generation.
- Configuration files for Hawk, Mahti, Dardel and Lenox system.
- Several Bug fixes.
Known Limitations:
- Issues when using tracing with Java 14+. Java 17+ requires including the JVM flag "-Dcom.sun.xml.bind.v2.bytecode.ClassTailor.noOptimize=true"
- Collections are not supported in http tasks.
- macOS support is limited to Java and Python without CPU affinity (requires executing with --cpu_affinity=disable). Tracing is not available.
- Reduce operations can consume more disk space than the manually programmed n-ary reduction.
- Objects used as task parameters must be serializable.
- Tasks that invoke NumPy and MKL may experience issues if different tasks use different numbers of MKL threads. This is because MKL reuses threads across calls and does not change the number of threads from one call to another. This can also happen with other libraries implemented with OpenMP. To fix these issues, use the DLB option in the cpu_affinity flag.
- C++ Objects declared as arguments in coarse-grain tasks must be passed as object pointers in order to have proper dependency management.
- Master as worker feature is not working for executions with persistent worker in C++.
- Coherence and concurrent writing in parameters annotated with the "Concurrent" direction must be managed by the underlying distributed storage system.
- Delete file calls for files used as input can produce a significant synchronization of the main code.
- Defining a parameter as OUT is only allowed for files and collections of objects with a default constructor.
Release number: 3.2 (Narcissus)
Release date: May-2023
-------------------------------
New features
- Support for Containers in MPI and MPMD tasks
- Cache for CuPy objects in GPU tasks.
- New SSH/SCP Adaptor to submit tasks to remote clusters (GOS Adaptor).
Improvements:
- Workflow Provenance: support for the new Workflow Run Crate profile (v0.1), improved structuring of the application source files, a new term to specify a submitter, and more details on the machine that ran the workflow (architecture and COMPSs environment variables)
- Configuration files for Jusuf system.
- Several Bug fixes.
Known Limitations:
- Issues when using tracing with Java 14+. Java 17+ requires including the JVM flag "-Dcom.sun.xml.bind.v2.bytecode.ClassTailor.noOptimize=true"
- Collections are not supported in http tasks.
- macOS support is limited to Java and Python without CPU affinity (requires executing with --cpu_affinity=disable). Tracing is not available.
- Reduce operations can consume more disk space than the manually programmed n-ary reduction.
- Objects used as task parameters must be serializable.
- Tasks that invoke NumPy and MKL may experience issues if different tasks use different numbers of MKL threads. This is because MKL reuses threads across calls and does not change the number of threads from one call to another. This can also happen with other libraries implemented with OpenMP. To fix these issues, use the DLB option in the cpu_affinity flag.
- C++ Objects declared as arguments in coarse-grain tasks must be passed as object pointers in order to have proper dependency management.
- Master as worker feature is not working for executions with persistent worker in C++.
- Coherence and concurrent writing in parameters annotated with the "Concurrent" direction must be managed by the underlying distributed storage system.
- Delete file calls for files used as input can produce a significant synchronization of the main code.
- Defining a parameter as OUT is only allowed for files and collections of objects with a default constructor.
Release number: 3.3 (Orchid)
Release date: Nov-2023
-------------------------------
New features:
- New Jupyter kernel and JupyterLab extension to manage PyCOMPSs in the Jupyter ecosystem (https://github.com/bsc-wdc/jupyter-extension).
- Integration with Energy Aware Runtime (EAR) to obtain energy profiles in python-based applications (https://www.bsc.es/research-and-development/software-and-apps/software-list/ear-energy-management-framework-hpc).
- Support for user-defined dynamic constraints based on task parameter values.
- GPU cache for PyTorch tensors.
Improvements:
- The support for interactive Python and Jupyter notebooks has been extended to work in non-shared-disk environments.
- Data transformations now support converting data to directory types.
- Workflow Provenance: new data persistence feature, new inputs and outputs terms to define data assets by hand, new sources term, improved common paths detection, and minimal YAML support.
- Configuration files for Leonardo and Galileo HPC systems.
- Several Bug fixes.
Known Limitations:
- Dynamic constraints are limited to task parameters declared as IN which are not future objects (generated by previous tasks).
- Issues when using tracing with Java 14+. Java 17+ requires including the JVM flag "-Dcom.sun.xml.bind.v2.bytecode.ClassTailor.noOptimize=true"
- Collections are not supported in http tasks.
- macOS support is limited to Java and Python without CPU affinity (requires executing with --cpu_affinity=disable). Tracing is not available.
- Reduce operations can consume more disk space than the manually programmed n-ary reduction.
- Objects used as task parameters must be serializable.
- Tasks that invoke NumPy and MKL may experience issues if different tasks use different numbers of MKL threads. This is because MKL reuses threads across calls and does not change the number of threads from one call to another. This can also happen with other libraries implemented with OpenMP. To fix these issues, use the DLB option in the cpu_affinity flag.
- C++ Objects declared as arguments in coarse-grain tasks must be passed as object pointers in order to have proper dependency management.
- Master as worker feature is not working for executions with persistent worker in C++.
- Coherence and concurrent writing in parameters annotated with the "Concurrent" direction must be managed by the underlying distributed storage system.
- Delete file calls for files used as input can produce a significant synchronization of the main code.
- Defining a parameter as OUT is only allowed for files and collections of objects with a default constructor.
For further information, please refer to the COMPSs Documentation at:
https://compss-doc.readthedocs.io/en/stable/
Please find more details about the COMP Superscalar framework at:
http://compss.bsc.es/