------------------------------
COMP SUPERSCALAR FRAMEWORK
------------------------------
COMP Superscalar (COMPSs) is a programming model that aims to ease the development of applications for distributed infrastructures, such as Clusters, Grids and Clouds. COMP Superscalar also features a runtime system that exploits the inherent parallelism of applications at execution time.
Release number: 3.3 (Orchid)
Release date: Nov-2023
-------------------------------
New features:
- New Jupyter kernel and JupyterLab extension to manage PyCOMPSs in the Jupyter ecosystem (https://github.com/bsc-wdc/jupyter-extension).
- Integration with Energy Aware Runtime (EAR) to obtain energy profiles in Python-based applications (https://www.bsc.es/research-and-development/software-and-apps/software-list/ear-energy-management-framework-hpc).
- Support for user-defined dynamic constraints based on task parameter values (see the sketch after this list).
- GPU cache for PyTorch tensors.
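The dynamic constraints feature can be used from PyCOMPSs roughly as in the following minimal sketch. It assumes that the @constraint value can name an IN task parameter (here, n_cores), so that the resource request is resolved from the argument passed at invocation time; the exact expression syntax is described in the COMPSs documentation.

    from pycompss.api.constraint import constraint
    from pycompss.api.task import task
    from pycompss.api.api import compss_wait_on

    # Assumed syntax: the constraint value refers to the IN parameter "n_cores",
    # so the number of computing units is taken from the argument at call time.
    @constraint(computing_units="n_cores")
    @task(returns=1)
    def scale(values, factor, n_cores):
        return [v * factor for v in values]

    partial = scale([1, 2, 3], 10, 4)   # this call requests 4 computing units
    result = compss_wait_on(partial)

As noted under Known Limitations, the parameter used in the constraint must be declared as IN and must not be a future object produced by a previous task.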
Improvements:
- Support for interactive Python and Jupyter notebooks has been extended to work in non-shared disk environments.
- Data transformations now support data conversion to directory types.
- Workflow Provenance: new data persistence feature, new inputs and outputs terms to define data assets by hand, new sources term, improved common paths detection, and minimal YAML support.
- Configuration files for Leonardo and Galileo HPC systems.
- Several bug fixes.
Known Limitations:
- Dynamic constraints are limited to task parameters declared as IN that are not future objects (i.e., not generated by previous tasks).
- Issues when using tracing with Java 14+. Java 17+ requires including the JVM flag "-Dcom.sun.xml.bind.v2.bytecode.ClassTailor.noOptimize=true".
- Collections are not supported in http tasks.
- macOS support is limited to Java and Python without CPU affinity (execution requires --cpu_affinity=disable). Tracing is not available.
- Reduce operations can consume more disk space than the manually programmed n-ary reduction.
- Objects used as task parameters must be serializable.
- Tasks that invoke NumPy and MKL may experience issues if different tasks use different MKL thread counts. This is because MKL reuses threads across calls and does not change the number of threads from one call to another. The same can happen with other libraries implemented with OpenMP. To fix these issues, use the DLB option in the cpu_affinity flag.
- C++ Objects declared as arguments in coarse-grain tasks must be passed as object pointers in order to have proper dependency management.
- The master-as-worker feature does not work for executions with the persistent worker in C++.
- Coherence and concurrent writing in parameters annotated with the "Concurrent" direction must be managed by the underlying distributed storage system.
- Calls to delete files used as task inputs can produce a significant synchronization in the main code.
- Defining a parameter as OUT is only allowed for files and for collections of objects with a default constructor (a file example is sketched below).
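As a reference for the OUT direction limitation above, the following minimal PyCOMPSs sketch shows the allowed file case; the task name and file name are illustrative only.

    from pycompss.api.task import task
    from pycompss.api.parameter import FILE_OUT

    # The file behind "out_path" is created by the task and tracked by the
    # runtime as an output dependency.
    @task(out_path=FILE_OUT)
    def write_result(out_path, value):
        with open(out_path, "w") as f:
            f.write(str(value))

    write_result("result.txt", 42)   # illustrative file name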
For further information, please refer to the COMPSs Documentation at:
https://compss-doc.readthedocs.io/en/stable/
Please find more details about the COMP Superscalar framework at:
http://compss.bsc.es/