diff --git a/doc/developer_guide/APIs.rst b/doc/developer_guide/APIs.rst
index bfc3458996..81c745a4a4 100644
--- a/doc/developer_guide/APIs.rst
+++ b/doc/developer_guide/APIs.rst
@@ -141,27 +141,6 @@ TBD
.. classname being provided. This allow them to instantiate the
.. appropriate objects without knowing what they are.
..
-.. gen_code()
-.. ++++++++++
-..
-.. All of the above classes (with the exception of PSy which supports a
-.. gen() method) have the gen_code() method. This method passes the
-.. parent of the generation tree and expect the object to add the code
-.. associated with the object as a child of the parent. The object is
-.. then expected to call any children. This approach is powerful as it
-.. lets each object concentrate on the code that it is responsible for.
-..
-.. Adding code in gen_code()
-.. +++++++++++++++++++++++++
-..
-.. The f2pygen classes have been developed to help create appropriate
-.. fortran code in the gen_code() method.
-..
-.. When writing a gen_code() method for a particular object and API it is
-.. natural to add code as a child of the parent provided by the callee of
-.. the method. However, in some cases we do not want code to appear at
-.. the current position in the hierarchy.
-..
.. The add() method
.. ++++++++++++++++
..
@@ -633,8 +612,7 @@ operator are computed redundantly in the halo up to depth-1 (see the
requires a check that any loop which includes a kernel that reads from
an operator is limited to iterating in the halo up to
depth-1. PSyclone will raise an exception if an optimisation attempts
-to increase the iteration space beyond this (see the ``gen_code()``
-method in the ``LFRicKern`` class).
+to increase the iteration space beyond this point.
To alleviate the above restriction one could add a configurable depth with
which to compute operators e.g. operators are always computed up to
@@ -1201,50 +1179,6 @@ the `w0` function space then at least one of the the `meta_arg`
arguments must be on the `w0` function space. However, this is not
checked in the current implementation.
-GOcean1.0
-=========
-
-TBD
-
-.. OpenMP Support
-.. --------------
-..
-.. Loop directives are treated as first class entities in the psyGen
-.. package. Therefore they can be added to psyGen's high level
-.. representation of the fortran code structure in the same way as calls
-.. and loops. Obviously it is only valid to add a loop directive outside
-.. of a loop.
-..
-.. When adding a call inside a loop the placement of any additional calls
-.. or declarations must be specified correctly to ensure that they are
-.. placed at the correct location in the hierarchy. To avoid accidentally
-.. splitting the loop directive from its loop the start_parent_loop()
-.. method can be used. This is available as a method in all fortran
-.. generation calls. *We could have placed it in psyGen instead of
-.. f2pygen*. This method returns the location at the top of any loop
-.. hierarchy and before any comments immediately before the top level
-.. loop.
-..
-.. The OpenMPLoopDirective object needs to know which variables are
-.. shared and which are private. In the current implementation default
-.. shared is used and private variables are listed. To determine the
-.. objects private variables the OpenMP implementation uses its internal
-.. xxx_get_private_list() method. This method first finds all loops
-.. contained within the directive and adds each loops variable name as a
-.. private variable. this method then finds all calls contained within
-.. the directive and adds each calls list of private variables, returned
-.. with the local_vars() method. Therefore the OpenMPLoopDirective object
-.. relies on calls specifying which variables they require being local.
-..
-.. Next ...
-..
-.. Update transformation for colours
-..
-.. OpenMPLoop transformation in transformations.py.
-..
-.. Create third transformation which goes over all loops in a schedule and
-.. applies the OpenMP loop transformation.
-
NEMO
====
diff --git a/doc/developer_guide/modules.rst b/doc/developer_guide/modules.rst
index 909b51cf84..1b17e05af2 100644
--- a/doc/developer_guide/modules.rst
+++ b/doc/developer_guide/modules.rst
@@ -40,67 +40,6 @@ Modules
This section describes the functionality of the various Python modules
that make up PSyclone.
-Module: f2pygen
-===============
-
-.. warning::
- The f2pygen functionality has been superseded by the development of
- the PSyIR and will be removed entirely in a future release.
-
-`f2pygen` provides functionality for generating Fortran code from
-scratch and supports the addition of a use statement to an existing
-parse tree.
-
-Variable Declarations
----------------------
-
-Three different classes are provided to support the creation of
-variable declarations (for intrinsic, character and derived-type
-variables). An example of their use might be:
-
->>> from psyclone.f2pygen import (ModuleGen, SubroutineGen, DeclGen,
-... CharDeclGen, TypeDeclGen)
->>> module = ModuleGen(name="testmodule")
->>> sub = SubroutineGen(module, name="testsubroutine")
->>> module.add(sub)
->>> sub.add(DeclGen(sub, datatype="integer", entity_decls=["my_int"]))
->>> sub.add(CharDeclGen(sub, length="10", entity_decls=["my_char"]))
->>> sub.add(TypeDeclGen(sub, datatype="field_type", entity_decls=["ufld"]))
->>> gen = str(module.root)
->>> print(gen)
- MODULE testmodule
- IMPLICIT NONE
- CONTAINS
- SUBROUTINE testsubroutine()
- TYPE(field_type) ufld
- CHARACTER(LEN=10) my_char
- INTEGER my_int
- END SUBROUTINE testsubroutine
- END MODULE testmodule
-
-The full interface to each of these classes is detailed below:
-
-.. autoclass:: psyclone.f2pygen.DeclGen
- :members:
- :noindex:
-
-.. autoclass:: psyclone.f2pygen.CharDeclGen
- :members:
- :noindex:
-
-.. autoclass:: psyclone.f2pygen.TypeDeclGen
- :members:
- :noindex:
-
-Adding code
------------
-
-`f2pygen` supports the addition of use statements to an existing
-`fparser1` parse tree:
-
-.. autofunction:: psyclone.f2pygen.adduse
-
-
.. _dev_configuration:
Module: configuration
@@ -252,7 +191,7 @@ Module: dynamo0p3
=================
Specialises various classes from the ``psyclone.psyGen`` module
-in order to support the Dynamo 0.3 API.
+in order to support the LFRic API.
When constructing the Fortran subroutine for either an Invoke or
Kernel stub (see :ref:`stub-generation`), there are various groups of
@@ -265,15 +204,15 @@ sub-class of the ``LFRicCollection`` abstract class:
:private-members:
:noindex:
-(A single base class is used for both Invokes and Kernel stubs since it
-allows the code dealing with variable declarations to be shared.)
+A single ``LFRicCollection`` class is used for both Invokes and Kernel stubs
+since it allows the code dealing with variable declarations to be shared.
A concrete sub-class of ``LFRicCollection`` must provide an
-implementation of the ``_invoke_declarations`` method. If the
+implementation of the ``invoke_declarations`` method. If the
quantities associated with the collection require initialisation
within the PSy layer then the ``initialise`` method must also be
implemented. If stub-generation is to be supported for kernels that
make use of the collection type then an implementation must also be
-provided for ``_stub_declarations.``
+provided for ``stub_declarations``.
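+
+For illustration, a minimal sub-class might look like the following
+sketch. It follows the pattern used by existing collections such as
+``LFRicDofmaps``; the ``LFRicExampleCollection`` class and the
+``example_var`` symbol are purely illustrative names, not part of
+PSyclone::
+
+    from psyclone.domain.lfric import LFRicCollection, LFRicTypes
+    from psyclone.psyir.symbols import ArgumentInterface, DataSymbol
+
+    class LFRicExampleCollection(LFRicCollection):
+        '''Manages a single, illustrative integer scalar.'''
+
+        def invoke_declarations(self):
+            '''Declare the scalar in the PSy-layer symbol table.'''
+            super().invoke_declarations()
+            self.symtab.find_or_create(
+                "example_var", symbol_type=DataSymbol,
+                datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+
+        def stub_declarations(self):
+            '''Add the scalar to the kernel-stub argument list.'''
+            super().stub_declarations()
+            sym = self.symtab.find_or_create(
+                "example_var", symbol_type=DataSymbol,
+                datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+            sym.interface = ArgumentInterface(ArgumentInterface.Access.READ)
+            self.symtab.append_argument(sym)
+
+        def initialise(self, cursor: int) -> int:
+            '''No PSy-layer initialisation is required for this example.'''
+            return cursor
+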
Although instances of (sub-classes of) ``LFRicCollection`` handle all
declarations and initialisation, there remains the problem of
diff --git a/doc/developer_guide/psy_data.rst b/doc/developer_guide/psy_data.rst
index a1be49f7c4..ba5c60fdfc 100644
--- a/doc/developer_guide/psy_data.rst
+++ b/doc/developer_guide/psy_data.rst
@@ -461,7 +461,7 @@ The derived classes will typically control the behaviour
of ``PSyDataNode`` by providing additional parameters.
.. autoclass:: psyclone.psyir.nodes.PSyDataNode
- :members: gen_code
+ :members:
There are two ways of passing options to the
``PSyDataNode``. The first one is used to pass
@@ -488,8 +488,8 @@ can be somewhat cryptic due to the need to be unique).
The region name is validated by ``PSyDataTrans``, and
then passed to the node constructor. The ``PSyDataNode``
-stores the name as an instance attribute, so that they can
-be used at code creation time (when ``gen_code`` is being
-called). Below is the list of all options that the PSyData
+stores the name as an instance attribute, so that it can
+be used at code creation time (during PSyIR lowering).
+Below is the list of all options that the PSyData
node supports in the option dictionary:
.. table::
@@ -516,15 +516,15 @@ node supports in the option dictionary:
Passing Parameter From a Derived Node to the ``PSyDataNode``
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-The ``PSyDataTrans.gen_code`` function also accepts
-an option dictionary, which is used by derived nodes
-to control code creation. The ``gen_code`` function
-is called internally, not directly by the user. If
-the ``gen_code`` function of a node derived from
-``PSyDataNode`` is called, it can define this
+The ``PSyDataNode.lower_to_language_level`` function also accepts
+an option dictionary, which is used by derived nodes to control
+code creation.
+This function is called internally, not directly by the user. The
+``lower_to_language_level`` function of a node derived from
+``PSyDataNode`` can use this
-option directory to pass the parameters to the ``PSyDataNode``'s
-``gen_code`` function. Here are the options that are currently
-supported by ``PSyDataNode``:
+option dictionary to pass parameters to the ``PSyDataNode``'s
+``lower_to_language_level`` function. Here are the options that are
+currently supported by ``PSyDataNode``:
================ =========================================
Parameter Name Description
@@ -554,9 +554,9 @@ for more details.
The kernel extraction node ``ExtractNode`` uses the dependency
module to determine which variables are input- and output-parameters,
-and provides these two lists to the ``gen_code()`` function of its base class,
-a ``PSyDataNode`` node. It also uses the ``post_var_postfix`` option
-as described under ``gen_code()`` above (see also
+and provides these two lists to the ``lower_to_language_level()`` function
+of its base class, a ``PSyDataNode``. It also uses the ``post_var_postfix``
+option as described under ``lower_to_language_level()`` above (see also
:ref:`user_guide:extraction_libraries`).
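+
+As a simplified illustration of this mechanism, a node derived from
+``PSyDataNode`` might construct the options dictionary and forward it to
+the base class when it is lowered. The class name and the (empty)
+variable lists below are purely illustrative::
+
+    from psyclone.psyir.nodes import PSyDataNode
+
+    class MyExtractNode(PSyDataNode):
+        '''Illustrative node that forwards options to its base class.'''
+
+        def lower_to_language_level(self, options=None):
+            # In a real node these lists would be obtained from the
+            # dependency analysis of the enclosed region.
+            my_options = {"pre_var_list": [],
+                          "post_var_list": [],
+                          "post_var_postfix": "_post"}
+            return super().lower_to_language_level(options=my_options)
+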
.. _psydata_base_class:
diff --git a/doc/developer_guide/psyir.rst b/doc/developer_guide/psyir.rst
index 17ed5f5b2c..b93b079643 100644
--- a/doc/developer_guide/psyir.rst
+++ b/doc/developer_guide/psyir.rst
@@ -700,8 +700,7 @@ psyclone.psyir.nodes.html#psyclone.psyir.nodes.Directive`.
.. warning::
Some parts of some Clauses are still under development, and not all clauses
are encoded in Clauses classes yet (for example OpenACC clauses). These
- clause strings are instead generated inside the ``begin_string`` or
- ``gen_code`` methods during code generation.
+    clause strings are instead generated inside the ``begin_string``
+    method.
.. _named_arguments-label:
@@ -991,9 +990,7 @@ The Kernel-layer subclasses will be used to:
translated into LFRic PSyIR using the expected datatypes as
specified by the kernel metadata and associated LFRic rules.
-3) replace the existing kernel stub generation implementation so that
- the PSyIR back ends can be used and PSyclone will rely less on
- ``f2pygen`` and ``fparser1``. At the moment ``kernel_interface``
+3) replace ``kern_stub_arg_list``. At the moment ``kernel_interface``
provides the same functionality as ``kern_stub_arg_list``, except
that it uses the symbol table (which keeps datatypes and their
declarations together).
@@ -1087,9 +1084,8 @@ correspond to and how the arguments relate to each other (they just
output strings).
The logic and declaration of kernel variables is handled separately by
-the ``gen_stub`` method in ``LFRicKern`` and the ``gen_code`` method in
-``LFRicInvoke``. In both cases these methods make use of the subclasses
-of ``LFRicCollection`` to declare variables.
+the ``stub_declarations`` and ``invoke_declarations`` methods in the
+appropriate ``LFRicCollection``.
When using the symbol table in the LFRic PSyIR we naturally capture
arguments and datatypes together. The ``KernelInterface`` class is
diff --git a/doc/user_guide/dynamo0p3_topclasses.dot b/doc/user_guide/dynamo0p3_topclasses.dot
index d9e26a15b1..1a471428c1 100644
--- a/doc/user_guide/dynamo0p3_topclasses.dot
+++ b/doc/user_guide/dynamo0p3_topclasses.dot
@@ -2,14 +2,14 @@ digraph "classes_dynamo0p3" {
charset="utf-8"
rankdir=BT
-"20" [label="{LFRicInvoke|\l|arg_for_funcspace()\lfield_on_space()\lgen_code()\lis_coloured()\lunique_fss()\lunique_proxy_declarations()\l}", shape="record"];
+"20" [label="{LFRicInvoke|\l|arg_for_funcspace()\lfield_on_space()\lis_coloured()\lunique_fss()\lunique_proxy_declarations()\l}", shape="record"];
"21" [label="{LFRicInvokeSchedule|\l|view()\l}", shape="record"];
"35" [label="{DynamoInvokes|\l|}", shape="record"];
"36" [label="{LFRicPSy|\l|}", shape="record"];
-"45" [label="{Invoke|\l|first_access()\lgen()\lgen_code()\lunique_declarations()\lunique_declns_by_intent()\l}", shape="record", style=filled, fillcolor="antiquewhite"];
-"46" [label="{InvokeSchedule|\l|gen_code()\lview()\l}", shape="record"];
-"47" [label="{Invokes|\l|gen_code()\lgen_ocl_init()\lget()\l}", shape="record", style=filled, fillcolor="antiquewhite"];
-"54" [label="{Node|\l|addchild()\lancestor()\lbackward_dependence()\lcalls()\ldag()\ldag_gen()\lfollowing()\lforward_dependence()\lgen_c_code()\lgen_code()\lindent()\lis_openmp_parallel()\lis_valid_location()\lkern_calls()\llist()\llist_to_string()\lloops()\lpreceding()\lreductions()\lsameParent()\lsameRoot()\lupdate()\lview()\lwalk()\l}", shape="record"];
+"45" [label="{Invoke|\l|first_access()\lgen()\lunique_declarations()\lunique_declns_by_intent()\l}", shape="record", style=filled, fillcolor="antiquewhite"];
+"46" [label="{InvokeSchedule|\l|view()\l}", shape="record"];
+"47" [label="{Invokes|\l|gen_ocl_init()\lget()\l}", shape="record", style=filled, fillcolor="antiquewhite"];
+"54" [label="{Node|\l|addchild()\lancestor()\lbackward_dependence()\lcalls()\ldag()\ldag_gen()\lfollowing()\lforward_dependence()\lgen_c_code()\lindent()\lis_openmp_parallel()\lis_valid_location()\lkern_calls()\llist()\llist_to_string()\lloops()\lpreceding()\lreductions()\lsameParent()\lsameRoot()\lupdate()\lview()\lwalk()\l}", shape="record"];
"55" [label="{PSy|\l|inline()\l}", shape="record", style=filled, fillcolor="antiquewhite"];
"56" [label="{Schedule|\l|view()\l}", shape="record", style=filled, fillcolor="antiquewhite"];
diff --git a/doc/user_guide/dynamo0p3_topclasses.svg b/doc/user_guide/dynamo0p3_topclasses.svg
index bc6dc3ed01..ff78c48d91 100644
--- a/doc/user_guide/dynamo0p3_topclasses.svg
+++ b/doc/user_guide/dynamo0p3_topclasses.svg
@@ -18,7 +18,6 @@
arg_for_funcspace()
field_on_space()
-gen_code()
is_coloured()
unique_fss()
unique_proxy_declarations()
@@ -48,7 +47,6 @@
first_access()
gen()
-gen_code()
unique_declarations()
unique_declns_by_intent()
@@ -81,7 +79,6 @@
InvokeSchedule
-gen_code()
view()
@@ -113,7 +110,6 @@
Invokes
-gen_code()
gen_ocl_init()
get()
@@ -169,7 +165,6 @@
following()
forward_dependence()
gen_c_code()
-gen_code()
indent()
is_openmp_parallel()
is_valid_location()
diff --git a/doc/user_guide/gocean1p0.rst b/doc/user_guide/gocean1p0.rst
index ca9b9f7d7f..cee53ed06b 100644
--- a/doc/user_guide/gocean1p0.rst
+++ b/doc/user_guide/gocean1p0.rst
@@ -650,9 +650,7 @@ Rules
#####
Kernel arguments follow a set of rules which have been specified for
-the GOcean 1.0 API. These rules are encoded in the ``gen_code()``
-method of the ``GOKern`` class in the ``gocean1p0.py`` file. The
-rules, along with PSyclone's naming conventions, are:
+the GOcean 1.0 API. The rules, along with PSyclone's naming conventions, are:
1) Every kernel has the indices of the current grid point as the first two arguments, ``i`` and ``j``. These are integers and have intent ``in``.
diff --git a/doc/user_guide/psyke.rst b/doc/user_guide/psyke.rst
index 2132429cdb..b7769d2c8f 100644
--- a/doc/user_guide/psyke.rst
+++ b/doc/user_guide/psyke.rst
@@ -224,8 +224,8 @@ PSyclone modifies the Schedule of the selected ``invoke_0``:
Schedule[invoke='invoke_0' dm=False]
0: Loop[type='dofs',field_space='any_space_1',it_space='dofs',
upper_bound='ndofs']
- Literal[value:'NOT_INITIALISED']
- Literal[value:'NOT_INITIALISED']
+ Reference[name:'loop0_start']
+ Reference[name:'loop0_stop']
Literal[value:'1']
Schedule[]
0: BuiltIn setval_c(f5,0.0)
diff --git a/examples/lfric/README.md b/examples/lfric/README.md
index 153152599e..59eefd7da2 100644
--- a/examples/lfric/README.md
+++ b/examples/lfric/README.md
@@ -245,14 +245,14 @@ Kernel call 'matrix_vector_code' was found in
InvokeSchedule[invoke='invoke_0', dm=False]
...
Loop[type='', field_space='any_space_1', it_space='cells', upper_bound='ncells']
- Literal[value:'NOT_INITIALISED']
- Literal[value:'NOT_INITIALISED']
+ Reference[name:'loop0_start']
+ Reference[name:'loop0_stop']
Literal[value:'1']
Schedule[]
CodedKern matrix_vector_kernel_code(m_lumped,ones,mb) [module_inline=False]
Loop[type='dofs', field_space='any_space_1', it_space='dofs', upper_bound='ndofs']
- Literal[value:'NOT_INITIALISED']
- Literal[value:'NOT_INITIALISED']
+ Reference[name:'loop1_start']
+ Reference[name:'loop1_stop']
Literal[value:'1']
Schedule[]
BuiltIn x_divideby_y(self_mb_lumped_inv,ones,m_lumped)
diff --git a/src/psyclone/alg_gen.py b/src/psyclone/alg_gen.py
index b5b37e1ee7..9e70e7b043 100644
--- a/src/psyclone/alg_gen.py
+++ b/src/psyclone/alg_gen.py
@@ -227,10 +227,6 @@ def _adduse(location, name, only=None, funcnames=None):
tree. This will be added at the first valid location before the
current location.
- This function should be part of the fparser2 replacement for
- f2pygen (which uses fparser1) but is kept here until this is
- developed, see issue #240.
-
:param location: the current location (node) in the parse tree to which \
to add a USE.
:type location: :py:class:`fparser.two.utils.Base`
diff --git a/src/psyclone/core/component_indices.py b/src/psyclone/core/component_indices.py
index 881e8c25f6..dfbcdf3d8a 100644
--- a/src/psyclone/core/component_indices.py
+++ b/src/psyclone/core/component_indices.py
@@ -36,8 +36,6 @@
'''This module provides a class to manage indices in variable accesses.'''
-from __future__ import print_function, absolute_import
-
from psyclone.errors import InternalError
diff --git a/src/psyclone/domain/common/psylayer/psyloop.py b/src/psyclone/domain/common/psylayer/psyloop.py
index b419febd39..321f580229 100644
--- a/src/psyclone/domain/common/psylayer/psyloop.py
+++ b/src/psyclone/domain/common/psylayer/psyloop.py
@@ -330,20 +330,6 @@ def args_filter(self, arg_types=None, arg_accesses=None, unique=False):
all_args.extend(call_args)
return all_args
- def gen_mark_halos_clean_dirty(self, parent):
- '''
- Generates the necessary code to mark halo regions as clean or dirty
- following execution of this loop. This default implementation does
- nothing.
-
- TODO #1648 - this method should be removed when the corresponding
- one in LFRicLoop is removed.
-
- :param parent: the node in the f2pygen AST to which to add content.
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
-
- '''
-
def _halo_read_access(self, arg):
'''Determines whether the supplied argument has (or might have) its
halo data read within this loop. Returns True if it does, or if
diff --git a/src/psyclone/domain/lfric/arg_ordering.py b/src/psyclone/domain/lfric/arg_ordering.py
index ce00697750..abba84c99a 100644
--- a/src/psyclone/domain/lfric/arg_ordering.py
+++ b/src/psyclone/domain/lfric/arg_ordering.py
@@ -51,7 +51,7 @@
MetadataToArgumentsRules)
from psyclone.errors import GenerationError, InternalError
from psyclone.psyir.nodes import ArrayReference, Reference
-from psyclone.psyir.symbols import ScalarType
+from psyclone.psyir.symbols import DataSymbol, ArrayType
class ArgOrdering:
@@ -102,10 +102,11 @@ def _symtab(self):
'''
if self._forced_symtab:
return self._forced_symtab
- elif self._kern and self._kern.ancestor(psyGen.InvokeSchedule):
- return self._kern.ancestor(psyGen.InvokeSchedule).symbol_table
- else:
- return LFRicSymbolTable()
+ if self._kern and self._kern.ancestor(psyGen.InvokeSchedule):
+ # _kern may be outdated, so go back up to the invoke first
+ current_invoke = self._kern.ancestor(psyGen.InvokeSchedule).invoke
+ return current_invoke.schedule.symbol_table
+ return LFRicSymbolTable()
def psyir_append(self, node):
'''Appends a PSyIR node to the PSyIR argument list.
@@ -197,13 +198,25 @@ def append_integer_reference(self, name, tag=None):
:rtype: :py:class:`psyclone.psyir.symbols.Symbol`
'''
+ # pylint: disable=import-outside-toplevel
+ from psyclone.domain.lfric import LFRicTypes
if tag is None:
tag = name
- sym = self._symtab.find_or_create_integer_symbol(name, tag)
+ else:
+            # If a tag is supplied, first try to look it up.
+ try:
+ sym = self._symtab.lookup_with_tag(tag)
+ self.psyir_append(Reference(sym))
+ return sym
+ except KeyError:
+ pass
+ sym = self._symtab.find_or_create(
+ name, tag=tag, symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
self.psyir_append(Reference(sym))
return sym
- def get_array_reference(self, array_name, indices, intrinsic_type,
+ def get_array_reference(self, array_name, indices, intrinsic_type=None,
tag=None, symbol=None):
# pylint: disable=too-many-arguments
'''This function creates an array reference. If there is no symbol
@@ -215,9 +228,10 @@ def get_array_reference(self, array_name, indices, intrinsic_type,
:param indices: the indices to be used in the PSyIR reference. It \
must either be ":", or a PSyIR node.
:type indices: List[Union[str, py:class:`psyclone.psyir.nodes.Node`]]
- :param intrinsic_type: the intrinsic type of the array.
+ :param intrinsic_type: the intrinsic type of the array. Defaults to
+ LFRicIntegerScalarDataType.
:type intrinsic_type: \
- :py:class:`psyclone.psyir.symbols.datatypes.ScalarType.Intrinsic`
+ Optional[:py:class:`psyclone.psyir.symbols.datatypes.ScalarType`]
:param tag: optional tag for the symbol.
:type tag: Optional[str]
:param symbol: optional the symbol to use.
@@ -229,11 +243,17 @@ def get_array_reference(self, array_name, indices, intrinsic_type,
'''
if not tag:
tag = array_name
+ if intrinsic_type is None:
+ # pylint: disable=import-outside-toplevel
+ from psyclone.domain.lfric import LFRicTypes
+ intrinsic_type = LFRicTypes("LFRicIntegerScalarDataType")()
+
if not symbol:
- symbol = self._symtab.find_or_create_array(array_name,
- len(indices),
- intrinsic_type,
- tag)
+ symbol = self._symtab.find_or_create(
+ array_name, tag=tag, symbol_type=DataSymbol,
+ datatype=ArrayType(
+ intrinsic_type,
+ [ArrayType.Extent.DEFERRED for _ in indices]))
else:
if symbol.name != array_name:
raise InternalError(f"Specified symbol '{symbol.name}' has a "
@@ -248,7 +268,7 @@ def get_array_reference(self, array_name, indices, intrinsic_type,
ref = ArrayReference.create(symbol, indices)
return ref
- def append_array_reference(self, array_name, indices, intrinsic_type,
+ def append_array_reference(self, array_name, indices, intrinsic_type=None,
tag=None, symbol=None):
# pylint: disable=too-many-arguments
'''This function adds an array reference. If there is no symbol with
@@ -263,7 +283,7 @@ def append_array_reference(self, array_name, indices, intrinsic_type,
:type indices: List[Union[str, py:class:`psyclone.psyir.nodes.Node`]]
:param intrinsic_type: the intrinsic type of the array.
:type intrinsic_type: \
- :py:class:`psyclone.psyir.symbols.datatypes.ScalarType.Intrinsic`
+ Optional[:py:class:`psyclone.psyir.symbols.datatypes.ScalarType`]
:param tag: optional tag for the symbol.
:type tag: Optional[str]
:param symbol: optional the symbol to use.
@@ -915,8 +935,7 @@ def banded_dofmap(self, function_space, var_accesses=None):
# to the argument list as they are mandatory for every function
# space that appears in the meta-data.
sym = self.append_array_reference(
- function_space.cbanded_map_name, indices=[":", ":"],
- intrinsic_type=ScalarType.Intrinsic.INTEGER)
+ function_space.cbanded_map_name, indices=[":", ":"])
self.append(sym.name, var_accesses)
def indirection_dofmap(self, function_space, operator=None,
@@ -937,8 +956,7 @@ def indirection_dofmap(self, function_space, operator=None,
'''
# pylint: disable=unused-argument
map_name = function_space.cma_indirection_map_name
- self.append_array_reference(map_name, [":"],
- ScalarType.Intrinsic.INTEGER, tag=map_name)
+ self.append_array_reference(map_name, [":"], tag=map_name)
self.append(map_name, var_accesses)
def ref_element_properties(self, var_accesses=None):
diff --git a/src/psyclone/domain/lfric/kern_call_arg_list.py b/src/psyclone/domain/lfric/kern_call_arg_list.py
index 7011a37155..646688e8c2 100644
--- a/src/psyclone/domain/lfric/kern_call_arg_list.py
+++ b/src/psyclone/domain/lfric/kern_call_arg_list.py
@@ -43,9 +43,10 @@
'''
from dataclasses import dataclass
+from typing import Optional, Tuple
from psyclone import psyGen
-from psyclone.core import AccessType, Signature
+from psyclone.core import AccessType, Signature, VariablesAccessInfo
from psyclone.domain.lfric.arg_ordering import ArgOrdering
from psyclone.domain.lfric.lfric_constants import LFRicConstants
# Avoid circular import:
@@ -55,7 +56,8 @@
ArrayReference, Reference, StructureReference)
from psyclone.psyir.symbols import (
DataSymbol, DataTypeSymbol, UnresolvedType, ContainerSymbol,
- ImportInterface, ScalarType)
+ ImportInterface, ScalarType, ArrayType, UnsupportedFortranType,
+ ArgumentInterface)
# psyir has classes created at runtime
# pylint: disable=no-member
@@ -122,12 +124,12 @@ def get_user_type(self, module_name, user_type, name, tag=None):
try:
# Check if the module is already declared:
module = self._symtab.lookup(module_name)
+ # Get the symbol table in which the module is declared:
+ mod_sym_tab = module.find_symbol_table(self._kern)
except KeyError:
module = self._symtab.new_symbol(module_name,
symbol_type=ContainerSymbol)
-
- # Get the symbol table in which the module is declared:
- mod_sym_tab = module.find_symbol_table(self._kern)
+ mod_sym_tab = self._symtab
# The user-defined type must be declared in the same symbol
# table as the container (otherwise errors will happen later):
@@ -138,9 +140,9 @@ def get_user_type(self, module_name, user_type, name, tag=None):
interface=ImportInterface(module))
# Declare the actual user symbol in the local symbol table, using
# the datatype from the root table:
- sym = self._symtab.new_symbol(name, tag=tag,
- symbol_type=DataSymbol,
- datatype=user_type_symbol)
+ sym = self._symtab.find_or_create(name, tag=tag,
+ symbol_type=DataSymbol,
+ datatype=user_type_symbol)
return sym
def append_structure_reference(self, module_name, user_type, member_list,
@@ -207,8 +209,7 @@ def cell_map(self, var_accesses=None):
# Add the cell map to our argument list
cell_ref_name, cell_ref = self.cell_ref_name(var_accesses)
- sym = self.append_array_reference(base_name, [":", ":", cell_ref],
- ScalarType.Intrinsic.INTEGER)
+ sym = self.append_array_reference(base_name, [":", ":", cell_ref])
self.append(f"{sym.name}(:,:,{cell_ref_name})",
var_accesses=var_accesses, var_access_name=sym.name)
@@ -331,12 +332,19 @@ def cma_operator(self, arg, var_accesses=None):
# REAL(KIND=r_solver), pointer:: cma_op1_matrix(:,:,:)
# = > null()
mode = arg.access
- sym = self._symtab.lookup_with_tag(f"{arg.name}:{suffix}")
+ sym = self._symtab.find_or_create_tag(
+ f"{arg.name}:{suffix}", arg.name,
+ symbol_type=DataSymbol, datatype=UnresolvedType(),
+ )
self.psyir_append(ArrayReference.create(sym, [":", ":", ":"]))
else:
# All other variables are scalar integers
- name = self._symtab.lookup_with_tag(
- f"{arg.name}:{component}:{suffix}").name
+ name = self._symtab.find_or_create_tag(
+ f"{arg.name}:{component}:{suffix}", arg.name,
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")(),
+ interface=ArgumentInterface(ArgumentInterface.Access.READ)
+ ).name
mode = AccessType.READ
sym = self.append_integer_reference(
name, tag=f"{arg.name}:{component}:{suffix}")
@@ -444,7 +452,6 @@ def stencil_unknown_extent(self, arg, var_accesses=None):
var_sym = LFRicStencils.dofmap_size_symbol(self._symtab, arg)
cell_name, cell_ref = self.cell_ref_name(var_accesses)
self.append_array_reference(var_sym.name, [cell_ref],
- ScalarType.Intrinsic.INTEGER,
symbol=var_sym)
self.append(f"{var_sym.name}({cell_name})", var_accesses,
var_access_name=var_sym.name)
@@ -469,7 +476,6 @@ def stencil_2d_unknown_extent(self, arg, var_accesses=None):
var_sym = LFRicStencils.dofmap_size_symbol(self._symtab, arg)
cell_name, cell_ref = self.cell_ref_name(var_accesses)
self.append_array_reference(var_sym.name, [":", cell_ref],
- ScalarType.Intrinsic.INTEGER,
symbol=var_sym)
name = f"{var_sym.name}(:,{cell_name})"
self.append(name, var_accesses, var_access_name=var_sym.name)
@@ -541,7 +547,6 @@ def stencil(self, arg, var_accesses=None):
var_sym = LFRicStencils.dofmap_symbol(self._symtab, arg)
cell_name, cell_ref = self.cell_ref_name(var_accesses)
self.append_array_reference(var_sym.name, [":", ":", cell_ref],
- ScalarType.Intrinsic.INTEGER,
symbol=var_sym)
self.append(f"{var_sym.name}(:,:,{cell_name})", var_accesses,
var_access_name=var_sym.name)
@@ -574,7 +579,6 @@ def stencil_2d(self, arg, var_accesses=None):
cell_name, cell_ref = self.cell_ref_name(var_accesses)
self.append_array_reference(var_sym.name,
[":", ":", ":", cell_ref],
- ScalarType.Intrinsic.INTEGER,
symbol=var_sym)
name = f"{var_sym.name}(:,:,:,{cell_name})"
self.append(name, var_accesses, var_access_name=var_sym.name)
@@ -657,17 +661,26 @@ def fs_compulsory_field(self, function_space, var_accesses=None):
self.append(sym.name, var_accesses)
map_name = function_space.map_name
+ intrinsic_type = LFRicTypes("LFRicIntegerScalarDataType")()
+ dtype = UnsupportedFortranType(
+ f"{intrinsic_type.intrinsic.name}("
+ f"kind={intrinsic_type.precision.name}), pointer, "
+ f"dimension(:,:) :: {map_name} => null()",
+ partial_datatype=ArrayType(
+ intrinsic_type,
+ [ArrayType.Extent.DEFERRED, ArrayType.Extent.DEFERRED]))
+ sym = self._symtab.find_or_create_tag(
+ map_name, symbol_type=DataSymbol, datatype=dtype)
+
if self._kern.iterates_over == 'domain':
# This kernel takes responsibility for iterating over cells so
# pass the whole dofmap.
- sym = self.append_array_reference(map_name, [":", ":"],
- ScalarType.Intrinsic.INTEGER)
+ self.append_array_reference(map_name, [":", ":"], symbol=sym)
self.append(sym.name, var_accesses, var_access_name=sym.name)
else:
# Pass the dofmap for the cell column
cell_name, cell_ref = self.cell_ref_name(var_accesses)
- sym = self.append_array_reference(map_name, [":", cell_ref],
- ScalarType.Intrinsic.INTEGER)
+ self.append_array_reference(map_name, [":", cell_ref], symbol=sym)
self.append(f"{sym.name}(:,{cell_name})",
var_accesses, var_access_name=sym.name)
@@ -693,8 +706,7 @@ def fs_intergrid(self, function_space, var_accesses=None):
sym = self.append_integer_reference(function_space.undf_name)
self.append(sym.name, var_accesses)
map_name = function_space.map_name
- sym = self.append_array_reference(map_name, [":", ":"],
- ScalarType.Intrinsic.INTEGER)
+ sym = self.append_array_reference(map_name, [":", ":"])
self.append(sym.name, var_accesses)
else:
# For the coarse mesh we only need undf and the dofmap for
@@ -717,8 +729,10 @@ def basis(self, function_space, var_accesses=None):
'''
for rule in self._kern.qr_rules.values():
basis_name = function_space.get_basis_name(qr_var=rule.psy_name)
- sym = self.append_array_reference(basis_name, [":", ":", ":", ":"],
- ScalarType.Intrinsic.REAL)
+ sym = self.append_array_reference(
+ basis_name, [":", ":", ":", ":"],
+ LFRicTypes("LFRicRealScalarDataType")()
+ )
self.append(sym.name, var_accesses)
if "gh_evaluator" in self._kern.eval_shapes:
@@ -730,8 +744,7 @@ def basis(self, function_space, var_accesses=None):
# function space
fspace = self._kern.eval_targets[fs_name][0]
basis_name = function_space.get_basis_name(on_space=fspace)
- sym = self.append_array_reference(basis_name, [":", ":", ":"],
- ScalarType.Intrinsic.REAL)
+ sym = self.append_array_reference(basis_name, [":", ":", ":"])
self.append(sym.name, var_accesses)
def diff_basis(self, function_space, var_accesses=None):
@@ -751,9 +764,11 @@ def diff_basis(self, function_space, var_accesses=None):
for rule in self._kern.qr_rules.values():
diff_basis_name = function_space.get_diff_basis_name(
qr_var=rule.psy_name)
- sym = self.append_array_reference(diff_basis_name,
- [":", ":", ":", ":"],
- ScalarType.Intrinsic.REAL)
+ sym = self.append_array_reference(
+ diff_basis_name,
+ [":", ":", ":", ":"],
+ LFRicTypes("LFRicRealScalarDataType")()
+ )
self.append(sym.name, var_accesses)
if "gh_evaluator" in self._kern.eval_shapes:
@@ -766,9 +781,10 @@ def diff_basis(self, function_space, var_accesses=None):
fspace = self._kern.eval_targets[fs_name][0]
diff_basis_name = function_space.get_diff_basis_name(
on_space=fspace)
- sym = self.append_array_reference(diff_basis_name,
- [":", ":", ":"],
- ScalarType.Intrinsic.REAL)
+ sym = self.append_array_reference(
+ diff_basis_name,
+ [":", ":", ":"],
+ LFRicTypes("LFRicRealScalarDataType")())
self.append(sym.name, var_accesses)
def field_bcs_kernel(self, function_space, var_accesses=None):
@@ -802,8 +818,7 @@ def field_bcs_kernel(self, function_space, var_accesses=None):
f"{self._kern.name} but got '{farg.argument_type}'")
base_name = "boundary_dofs_" + farg.name
- sym = self.append_array_reference(base_name, [":", ":"],
- ScalarType.Intrinsic.INTEGER)
+ sym = self.append_array_reference(base_name, [":", ":"])
self.append(sym.name, var_accesses)
def operator_bcs_kernel(self, function_space, var_accesses=None):
@@ -823,8 +838,7 @@ def operator_bcs_kernel(self, function_space, var_accesses=None):
# Checks for this are performed in ArgOrdering.generate()
op_arg = self._kern.arguments.args[0]
base_name = "boundary_dofs_" + op_arg.name
- sym = self.append_array_reference(base_name, [":", ":"],
- ScalarType.Intrinsic.INTEGER)
+ sym = self.append_array_reference(base_name, [":", ":"])
self.append(sym.name, var_accesses)
def mesh_properties(self, var_accesses=None):
@@ -895,14 +909,14 @@ def quad_rule(self, var_accesses=None):
self.append_integer_reference(arg)
elif generic_name in ["weights_xy", "weights_z"]:
# 1d arrays:
- # TODO # 1910: These should be pointers
- self.append_array_reference(arg, [":"],
- ScalarType.Intrinsic.REAL)
+ self.append_array_reference(
+ arg, [":"],
+ LFRicTypes("LFRicRealScalarDataType")())
elif generic_name in ["weights_xyz"]:
# 2d arrays:
- # TODO #1910: These should be pointers
- self.append_array_reference(arg, [":", ":"],
- ScalarType.Intrinsic.REAL)
+ self.append_array_reference(
+ arg, [":", ":"],
+ LFRicTypes("LFRicRealScalarDataType")())
else:
raise InternalError(f"Found invalid kernel argument "
f"'{arg}'.")
@@ -963,18 +977,20 @@ def ndf_positions(self):
"before the ndf_positions() method")
return self._ndf_positions
- def cell_ref_name(self, var_accesses=None):
- '''Utility routine which determines whether to return the cell value
- or the colourmap lookup value. If supplied it also stores this access
- in var_accesses.
+ def cell_ref_name(
+ self, var_accesses: Optional[VariablesAccessInfo] = None
+ ) -> Tuple[str, Reference]:
+        ''' Utility routine which determines whether to return the cell
+        reference or the colourmap lookup array reference. If supplied,
+        ``var_accesses`` is updated with the access information.
- :param var_accesses: optional VariablesAccessInfo instance to store \
+ :param var_accesses: optional VariablesAccessInfo instance to store
the information about variable accesses.
- :type var_accesses: \
- :py:class:`psyclone.core.VariablesAccessInfo`
- :returns: the Fortran code needed to access the current cell index.
- :rtype: Tuple[str, py:class:`psyclone.psyir.nodes.Reference`]
+        :returns: the expression and PSyIR reference needed to access the
+            current cell index.
+
+ TODO #2874: The name, argument, and first tuple component of this
+ and similar methods should be refactored.
'''
cell_sym = self._symtab.find_or_create_integer_symbol(
@@ -982,17 +998,10 @@ def cell_ref_name(self, var_accesses=None):
if self._kern.is_coloured():
colour_sym = self._symtab.find_or_create_integer_symbol(
"colour", tag="colours_loop_idx")
- if self._kern.is_intergrid:
- tag = None
- else:
- # If there is only one colourmap we need to specify the tag
- # to make sure we get the right symbol.
- tag = "cmap"
- array_ref = self.get_array_reference(self._kern.colourmap,
- [Reference(colour_sym),
- Reference(cell_sym)],
- ScalarType.Intrinsic.INTEGER,
- tag=tag)
+ symbol = self._kern.colourmap
+ array_ref = ArrayReference.create(
+ symbol,
+ [Reference(colour_sym), Reference(cell_sym)])
if var_accesses is not None:
var_accesses.add_access(Signature(colour_sym.name),
AccessType.READ, self._kern)
@@ -1002,8 +1011,7 @@ def cell_ref_name(self, var_accesses=None):
AccessType.READ,
self._kern, ["colour", "cell"])
- return (self._kern.colourmap + "(colour,cell)",
- array_ref)
+ return (array_ref.debug_string(), array_ref)
if var_accesses is not None:
var_accesses.add_access(Signature("cell"), AccessType.READ,
diff --git a/src/psyclone/domain/lfric/kern_stub_arg_list.py b/src/psyclone/domain/lfric/kern_stub_arg_list.py
index 728918352f..050f3aab0e 100644
--- a/src/psyclone/domain/lfric/kern_stub_arg_list.py
+++ b/src/psyclone/domain/lfric/kern_stub_arg_list.py
@@ -41,7 +41,6 @@
from psyclone.domain.lfric.arg_ordering import ArgOrdering
from psyclone.domain.lfric.lfric_constants import LFRicConstants
-from psyclone.domain.lfric.lfric_symbol_table import LFRicSymbolTable
from psyclone.errors import InternalError
@@ -61,10 +60,6 @@ class KernStubArgList(ArgOrdering):
'''
def __init__(self, kern):
ArgOrdering.__init__(self, kern)
- # TODO 719 The stub_symtab is not connected to other parts of the
- # Stub generation. Also the symboltable already has an
- # argument_list that may be able to replace the argument list below.
- self._stub_symtab = LFRicSymbolTable()
def cell_position(self, var_accesses=None):
'''Adds a cell argument to the argument list and if supplied stores
@@ -188,7 +183,7 @@ def stencil_unknown_extent(self, arg, var_accesses=None):
# Import here to avoid circular dependency
# pylint: disable=import-outside-toplevel
from psyclone.domain.lfric.lfric_stencils import LFRicStencils
- name = LFRicStencils.dofmap_size_symbol(self._stub_symtab, arg).name
+ name = LFRicStencils.dofmap_size_symbol(self._symtab, arg).name
self.append(name, var_accesses)
def stencil_unknown_direction(self, arg, var_accesses=None):
@@ -207,7 +202,7 @@ def stencil_unknown_direction(self, arg, var_accesses=None):
# Import here to avoid circular dependency
# pylint: disable=import-outside-toplevel
from psyclone.domain.lfric.lfric_stencils import LFRicStencils
- name = LFRicStencils.direction_name(self._stub_symtab, arg)
+ name = LFRicStencils.direction_name(self._symtab, arg).name
self.append(name, var_accesses)
def stencil(self, arg, var_accesses=None):
@@ -227,7 +222,7 @@ def stencil(self, arg, var_accesses=None):
# Import here to avoid circular dependency
# pylint: disable=import-outside-toplevel
from psyclone.domain.lfric.lfric_stencils import LFRicStencils
- var_name = LFRicStencils.dofmap_symbol(self._stub_symtab, arg).name
+ var_name = LFRicStencils.dofmap_symbol(self._symtab, arg).name
self.append(var_name, var_accesses)
def stencil_2d_max_extent(self, arg, var_accesses=None):
@@ -248,7 +243,7 @@ def stencil_2d_max_extent(self, arg, var_accesses=None):
# Import here to avoid circular dependency
# pylint: disable=import-outside-toplevel
from psyclone.domain.lfric.lfric_stencils import LFRicStencils
- name = LFRicStencils.max_branch_length_name(self._stub_symtab, arg)
+ name = LFRicStencils.max_branch_length(self._symtab, arg).name
self.append(name, var_accesses)
def stencil_2d_unknown_extent(self, arg, var_accesses=None):
@@ -267,7 +262,7 @@ def stencil_2d_unknown_extent(self, arg, var_accesses=None):
# Import here to avoid circular dependency
# pylint: disable=import-outside-toplevel
from psyclone.domain.lfric.lfric_stencils import LFRicStencils
- name = LFRicStencils.dofmap_size_symbol(self._stub_symtab, arg).name
+ name = LFRicStencils.dofmap_size_symbol(self._symtab, arg).name
self.append(name, var_accesses)
def stencil_2d(self, arg, var_accesses=None):
@@ -294,7 +289,7 @@ def stencil_2d(self, arg, var_accesses=None):
# Import here to avoid circular dependency
# pylint: disable=import-outside-toplevel
from psyclone.domain.lfric.lfric_stencils import LFRicStencils
- var_name = LFRicStencils.dofmap_symbol(self._stub_symtab, arg).name
+ var_name = LFRicStencils.dofmap_symbol(self._symtab, arg).name
self.append(var_name, var_accesses)
def operator(self, arg, var_accesses=None):
diff --git a/src/psyclone/domain/lfric/lfric_builtins.py b/src/psyclone/domain/lfric/lfric_builtins.py
index ae2c142014..dd2e077873 100644
--- a/src/psyclone/domain/lfric/lfric_builtins.py
+++ b/src/psyclone/domain/lfric/lfric_builtins.py
@@ -55,6 +55,7 @@
from psyclone.psyGen import BuiltIn
from psyclone.psyir.nodes import (ArrayReference, Assignment, BinaryOperation,
Reference, IntrinsicCall)
+from psyclone.psyir.symbols import UnsupportedFortranType
from psyclone.utils import a_or_an
# The name of the file containing the meta-data describing the
@@ -2735,10 +2736,13 @@ def lower_to_language_level(self):
# Create the PSyIR for the kernel:
# proxy0%data(df) = INT(proxy1%data, kind=i_)
lhs = arg_refs[0]
- i_precision = arg_refs[0].datatype.precision
+ datatype = arg_refs[0].symbol.datatype
+ if isinstance(datatype, UnsupportedFortranType):
+ datatype = datatype.partial_datatype
+
rhs = IntrinsicCall.create(
IntrinsicCall.Intrinsic.INT,
- [arg_refs[1], ("kind", Reference(i_precision))])
+ [arg_refs[1], ("kind", Reference(datatype.precision))])
# Create assignment and replace node
return self._replace_with_assignment(lhs, rhs)
@@ -2790,10 +2794,13 @@ def lower_to_language_level(self):
# Create the PSyIR for the kernel:
# proxy0%data(df) = REAL(proxy1%data, kind=r_)
lhs = arg_refs[0]
- r_precision = arg_refs[0].datatype.precision
+ datatype = arg_refs[0].symbol.datatype
+ if isinstance(datatype, UnsupportedFortranType):
+ datatype = datatype.partial_datatype
+
rhs = IntrinsicCall.create(
IntrinsicCall.Intrinsic.REAL,
- [arg_refs[1], ("kind", Reference(r_precision))])
+ [arg_refs[1], ("kind", Reference(datatype.precision))])
# Create assignment and replace node
return self._replace_with_assignment(lhs, rhs)
diff --git a/src/psyclone/domain/lfric/lfric_cell_iterators.py b/src/psyclone/domain/lfric/lfric_cell_iterators.py
index 413bfa3ddd..e03d84bfcb 100644
--- a/src/psyclone/domain/lfric/lfric_cell_iterators.py
+++ b/src/psyclone/domain/lfric/lfric_cell_iterators.py
@@ -39,12 +39,12 @@
''' This module implements the LFRicCellIterators collection which handles
-the requirements of kernels that operator on cells.'''
-from psyclone.configuration import Config
+the requirements of kernels that operate on cells.'''
from psyclone.domain.lfric.lfric_collection import LFRicCollection
from psyclone.domain.lfric.lfric_kern import LFRicKern
from psyclone.domain.lfric.lfric_types import LFRicTypes
from psyclone.errors import GenerationError
-from psyclone.f2pygen import AssignGen, CommentGen, DeclGen
+from psyclone.psyir.nodes import Assignment, Reference
+from psyclone.psyir.symbols import ArgumentInterface
class LFRicCellIterators(LFRicCollection):
@@ -66,93 +66,71 @@ def __init__(self, kern_or_invoke):
# (for invokes) the kernel argument to which each corresponds.
self._nlayers_names = {}
- if not self._invoke:
- # We are dealing with a single Kernel so there is only one
- # 'nlayers' variable and we don't need to store the associated
+ if self._invoke:
+ # Each kernel that operates on either the domain or cell-columns
+ # needs an 'nlayers' obtained from the first field/operator
# argument.
- self._nlayers_names[self._symbol_table.find_or_create_tag(
- "nlayers",
- symbol_type=LFRicTypes("MeshHeightDataSymbol")).name] = None
- # We're not generating a PSy layer so we're done here.
- return
-
- # Each kernel that operates on either the domain or cell-columns needs
- # an 'nlayers' obtained from the first field/operator argument.
- for kern in self._invoke.schedule.walk(LFRicKern):
- if kern.iterates_over != "dof":
- arg = kern.arguments.first_field_or_operator
- sym = self._symbol_table.find_or_create_tag(
- f"nlayers_{arg.name}",
- symbol_type=LFRicTypes("MeshHeightDataSymbol"))
- self._nlayers_names[sym.name] = arg
-
- first_var = None
- for var in self._invoke.psy_unique_vars:
- if not var.is_scalar:
- first_var = var
- break
- if not first_var:
- raise GenerationError(
- "Cannot create an Invoke with no field/operator arguments.")
- self._first_var = first_var
-
- def _invoke_declarations(self, parent):
- '''
- Declare entities required for iterating over cells in the Invoke.
-
- :param parent: the f2pygen node representing the PSy-layer routine.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
- '''
- api_config = Config.get().api_conf("lfric")
-
- # Declare the number of layers in the mesh for each kernel that
- # operates on cell-columns or the domain.
- name_list = list(self._nlayers_names.keys())
- if name_list:
- name_list.sort() # Purely for test reproducibility.
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- entity_decls=name_list))
-
- def _stub_declarations(self, parent):
+ for kern in self._invoke.schedule.walk(LFRicKern):
+ if kern.iterates_over != "dof":
+ arg = kern.arguments.first_field_or_operator
+ sym = self.symtab.find_or_create_tag(
+ f"nlayers_{arg.name}",
+ symbol_type=LFRicTypes("MeshHeightDataSymbol"))
+ self._nlayers_names[sym.name] = arg
+
+ first_var = None
+ for var in self._invoke.psy_unique_vars:
+ if not var.is_scalar:
+ first_var = var
+ break
+ if not first_var:
+ raise GenerationError(
+ "Cannot create an Invoke with no field/operator "
+ "arguments.")
+ self._first_var = first_var
+
+ def stub_declarations(self):
'''
Declare entities required for a kernel stub that operates on
cell-columns.
- :param parent: the f2pygen node representing the Kernel stub.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
'''
- api_config = Config.get().api_conf("lfric")
-
+ super().stub_declarations()
if self._kernel.cma_operation not in ["apply", "matrix-matrix"]:
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- intent="in",
- entity_decls=list(self._nlayers_names.keys())))
+ nlayers = self.symtab.find_or_create_tag(
+ "nlayers",
+ symbol_type=LFRicTypes("MeshHeightDataSymbol")
+ )
+ nlayers.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ self.symtab.append_argument(nlayers)
- def initialise(self, parent):
+ def initialise(self, cursor):
'''
- Look-up the number of vertical layers in the mesh for each user-
- supplied kernel that operates on cell columns.
+        Add the PSy-layer code that looks up the number of vertical layers
+        in the mesh.
+
+        :param int cursor: position at which to add the next initialisation
+ statements.
- :param parent: the f2pygen node representing the PSy-layer routine.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+        :returns: the updated cursor value.
+ :rtype: int
'''
if not self._nlayers_names or not self._invoke:
- return
+ return cursor
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent, " Initialise number of layers"))
- parent.add(CommentGen(parent, ""))
# Sort for test reproducibility
sorted_names = list(self._nlayers_names.keys())
sorted_names.sort()
+ init_cursor = cursor
for name in sorted_names:
+ symbol = self.symtab.lookup(name)
var = self._nlayers_names[name]
- parent.add(AssignGen(
- parent, lhs=name,
- rhs=(f"{var.proxy_name_indexed}%{var.ref_name()}%"
- f"get_nlayers()")))
+ stmt = Assignment.create(
+ lhs=Reference(symbol),
+ rhs=var.generate_method_call("get_nlayers"))
+ if cursor == init_cursor:
+ stmt.preceding_comment = "Initialise number of layers"
+ self._invoke.schedule.addchild(stmt, cursor)
+ cursor += 1
+ return cursor
diff --git a/src/psyclone/domain/lfric/lfric_collection.py b/src/psyclone/domain/lfric/lfric_collection.py
index 9879d21e22..c7c6fbccf8 100644
--- a/src/psyclone/domain/lfric/lfric_collection.py
+++ b/src/psyclone/domain/lfric/lfric_collection.py
@@ -40,12 +40,13 @@
base class for managing the declaration and initialisation of a group of
related entities within an Invoke or Kernel stub.'''
-# Imports
import abc
+from typing import List
+
from psyclone.domain.lfric.lfric_invoke import LFRicInvoke
from psyclone.domain.lfric.lfric_kern import LFRicKern
-from psyclone.domain.lfric.lfric_symbol_table import LFRicSymbolTable
from psyclone.errors import InternalError
+from psyclone.psyGen import Kern
class LFRicCollection():
@@ -67,18 +68,10 @@ def __init__(self, node):
# We are handling declarations/initialisations for an Invoke
self._invoke = node
self._kernel = None
- self._symbol_table = self._invoke.schedule.symbol_table
- # The list of Kernel calls we are responsible for
- self._calls = node.schedule.kernels()
elif isinstance(node, LFRicKern):
# We are handling declarations for a Kernel stub
self._invoke = None
self._kernel = node
- # TODO #719 The symbol table is not connected to other parts of
- # the Stub generation.
- self._symbol_table = LFRicSymbolTable()
- # We only have a single Kernel call in this case
- self._calls = [node]
else:
raise InternalError(f"LFRicCollection takes only an LFRicInvoke "
f"or an LFRicKern but got: {type(node)}")
@@ -90,62 +83,73 @@ def __init__(self, node):
else:
self._dofs_only = False
- def declarations(self, parent):
+ @property
+ def symtab(self):
'''
- Insert declarations for all necessary variables into the AST of
- the generated code. Simply calls either '_invoke_declarations()' or
- '_stub_declarations()' depending on whether we're handling an Invoke
- or a Kernel stub.
-
- :param parent: the node in the f2pygen AST representing the routine \
- in which to insert the declarations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+        :returns: the symbol table associated with this collection.
+ :rtype: :py:class:`psyclone.psyir.symbols.SymbolTable`
+ '''
+ if self._invoke:
+ return self._invoke.schedule.symbol_table
+ # Otherwise it is a kernel
+ return self._kernel._stub_symbol_table
- :raises InternalError: if neither 'self._invoke' nor 'self._kernel' \
- are set.
+ @property
+ def kernel_calls(self) -> List[Kern]:
+ '''
+        :returns: the kernel calls associated with this collection.
'''
if self._invoke:
- self._invoke_declarations(parent)
- elif self._kernel:
- self._stub_declarations(parent)
- else:
- raise InternalError("LFRicCollection has neither a Kernel "
- "nor an Invoke - should be impossible.")
+ return self._invoke.schedule.kernels()
+ # Otherwise it is a kernel
+ return [self._kernel]
- def initialise(self, parent):
+ @abc.abstractmethod
+ def initialise(self, cursor: int) -> int:
'''
Add code to initialise the entities being managed by this class.
We do nothing by default - it is up to the sub-class to override
this method if initialisation is required.
- :param parent: the node in the f2pygen AST to which to add \
- initialisation code.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+        :param cursor: position at which to add the next initialisation
+            statements.
+        :returns: the updated cursor value.
'''
- @abc.abstractmethod
- def _invoke_declarations(self, parent):
+ def invoke_declarations(self):
'''
- Add all necessary declarations for an Invoke.
+ Add necessary Invoke declarations for this Collection.
+
+        By default this only checks that the collection has been
+        instantiated for an Invoke; it is up to the sub-class to add the
+        required declarations.
- :param parent: node in the f2pygen AST representing the Invoke to \
- which to add declarations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+ :raises InternalError: if the class has been instantiated for a
+ kernel and not an invoke.
'''
+ if not self._invoke:
+ raise InternalError(
+ f"invoke_declarations() can only be called with a "
+ f"{type(self).__name__} instantiated for an invoke (not a "
+ f"kernel).")
- def _stub_declarations(self, parent):
+ def stub_declarations(self):
'''
- Add all necessary declarations for a Kernel stub. Not abstract because
- not all entities need representing within a Kernel.
+ Add necessary Kernel Stub declarations for this Collection.
- :param parent: node in the f2pygen AST representing the Kernel stub \
- to which to add declarations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+        By default this only checks that the collection has been
+        instantiated for a Kernel; it is up to the sub-class to add the
+        required declarations.
+ :raises InternalError: if the class has been instantiated for an
+ invoke and not a kernel.
'''
+ if not self._kernel:
+ raise InternalError(
+ f"stub_declarations() can only be called with a "
+ f"{type(self).__name__} instantiated for a kernel (not an "
+ f"invoke).")
# ---------- Documentation utils -------------------------------------------- #
diff --git a/src/psyclone/domain/lfric/lfric_dofmaps.py b/src/psyclone/domain/lfric/lfric_dofmaps.py
index d9976d7d63..3e80744de5 100644
--- a/src/psyclone/domain/lfric/lfric_dofmaps.py
+++ b/src/psyclone/domain/lfric/lfric_dofmaps.py
@@ -50,10 +50,11 @@
from collections import OrderedDict
from psyclone import psyGen
-from psyclone.configuration import Config
-from psyclone.domain.lfric import LFRicCollection
+from psyclone.domain.lfric import LFRicCollection, LFRicTypes, LFRicConstants
from psyclone.errors import GenerationError, InternalError
-from psyclone.f2pygen import AssignGen, CommentGen, DeclGen
+from psyclone.psyir.nodes import Assignment, Reference, StructureReference
+from psyclone.psyir.symbols import (
+ UnsupportedFortranType, DataSymbol, ArgumentInterface, ArrayType)
class LFRicDofmaps(LFRicCollection):
@@ -87,7 +88,7 @@ def __init__(self, node):
# "argument" and "direction" entries.
self._unique_indirection_maps = OrderedDict()
- for call in self._calls:
+ for call in self.kernel_calls:
# We only need a dofmap if the kernel operates on cells
# rather than dofs.
if call.iterates_over != "dof":
@@ -155,105 +156,121 @@ def __init__(self, node):
"argument": cma_args[0],
"direction": "from"}
- def initialise(self, parent):
- ''' Generates the calls to the LFRic infrastructure that
- look-up the necessary dofmaps. Adds these calls as children
- of the supplied parent node. This must be an appropriate
- f2pygen object. '''
-
- # If we've got no dofmaps then we do nothing
- if self._unique_fs_maps:
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent,
- " Look-up dofmaps for each function space"))
- parent.add(CommentGen(parent, ""))
-
- for dmap, field in self._unique_fs_maps.items():
- parent.add(AssignGen(parent, pointer=True, lhs=dmap,
- rhs=field.proxy_name_indexed +
- "%" + field.ref_name() +
- "%get_whole_dofmap()"))
- if self._unique_cbanded_maps:
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent,
- " Look-up required column-banded dofmaps"))
- parent.add(CommentGen(parent, ""))
-
- for dmap, cma in self._unique_cbanded_maps.items():
- parent.add(AssignGen(parent, pointer=True, lhs=dmap,
- rhs=cma["argument"].proxy_name_indexed +
- "%column_banded_dofmap_" +
- cma["direction"]))
-
- if self._unique_indirection_maps:
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent,
- " Look-up required CMA indirection dofmaps"))
- parent.add(CommentGen(parent, ""))
-
- for dmap, cma in self._unique_indirection_maps.items():
- parent.add(AssignGen(parent, pointer=True, lhs=dmap,
- rhs=cma["argument"].proxy_name_indexed +
- "%indirection_dofmap_"+cma["direction"]))
-
- def _invoke_declarations(self, parent):
+ def initialise(self, cursor: int) -> int:
'''
- Declare all unique function space dofmaps in the PSy layer as pointers
- to integer arrays of rank 2.
+        Add the code that looks up the necessary dofmaps in the PSy layer.
- :param parent: the f2pygen node to which to add the declarations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+        :param cursor: position at which to add the next initialisation
+            statements.
+        :returns: the updated cursor value.
'''
- api_config = Config.get().api_conf("lfric")
+ first = True
+ for dmap, field in self._unique_fs_maps.items():
+ stmt = Assignment.create(
+ lhs=Reference(self.symtab.lookup(dmap)),
+ rhs=field.generate_method_call("get_whole_dofmap"),
+ is_pointer=True)
+ if first:
+ stmt.preceding_comment = (
+ "Look-up dofmaps for each function space")
+ first = False
+ self._invoke.schedule.addchild(stmt, cursor)
+ cursor += 1
+
+ first = True
+ for dmap, cma in self._unique_cbanded_maps.items():
+ stmt = Assignment.create(
+ lhs=Reference(self.symtab.lookup(dmap)),
+ rhs=StructureReference.create(
+ self._invoke.schedule.symbol_table.lookup(
+ cma["argument"].proxy_name),
+ [f"column_banded_dofmap_{cma['direction']}"]),
+ is_pointer=True)
+ if first:
+ stmt.preceding_comment = (
+ "Look-up required column-banded dofmaps"
+ )
+ first = False
+ self._invoke.schedule.addchild(stmt, cursor)
+ cursor += 1
+
+ first = True
+ for dmap, cma in self._unique_indirection_maps.items():
+ stmt = Assignment.create(
+ lhs=Reference(self.symtab.lookup(dmap)),
+ rhs=StructureReference.create(
+ self._invoke.schedule.symbol_table.lookup(
+ cma["argument"].proxy_name_indexed),
+ [f"indirection_dofmap_{cma['direction']}"]),
+ is_pointer=True)
+ if first:
+ stmt.preceding_comment = (
+ "Look-up required CMA indirection dofmaps"
+ )
+ first = False
+ self._invoke.schedule.addchild(stmt, cursor)
+ cursor += 1
+ return cursor
+
+ def invoke_declarations(self):
+ '''
+ Declare all unique function space dofmaps in the PSy layer as pointers
+ to integer arrays of rank 2.
+ '''
+ super().invoke_declarations()
# Function space dofmaps
- decl_map_names = \
- [dmap+"(:,:) => null()" for dmap in sorted(self._unique_fs_maps)]
-
- if decl_map_names:
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- pointer=True, entity_decls=decl_map_names))
+ for dmap in sorted(self._unique_fs_maps):
+ if dmap not in self.symtab:
+ dmap_sym = DataSymbol(
+ dmap, UnsupportedFortranType(
+ f"integer(kind=i_def), pointer :: {dmap}(:,:) "
+ f"=> null()"))
+ self.symtab.add(dmap_sym, tag=dmap)
# Column-banded dofmaps
- decl_bmap_names = \
- [dmap+"(:,:) => null()" for dmap in self._unique_cbanded_maps]
- if decl_bmap_names:
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- pointer=True, entity_decls=decl_bmap_names))
+ for dmap in sorted(self._unique_cbanded_maps):
+ if dmap not in self.symtab:
+ dmap_sym = DataSymbol(
+ dmap, UnsupportedFortranType(
+ f"integer(kind=i_def), pointer :: {dmap}(:,:) "
+ f"=> null()"))
+ self.symtab.add(dmap_sym, tag=dmap)
# CMA operator indirection dofmaps
- decl_ind_map_names = \
- [dmap+"(:) => null()" for dmap in self._unique_indirection_maps]
- if decl_ind_map_names:
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- pointer=True, entity_decls=decl_ind_map_names))
-
- def _stub_declarations(self, parent):
+ for dmap in sorted(self._unique_indirection_maps):
+ if dmap not in self.symtab:
+ dmap_sym = DataSymbol(
+ dmap, UnsupportedFortranType(
+ f"integer(kind=i_def), pointer :: {dmap}(:) "
+ "=> null()"))
+ self.symtab.add(dmap_sym, tag=dmap)
+
+ def stub_declarations(self):
'''
Add dofmap-related declarations to a Kernel stub.
- :param parent: node in the f2pygen AST representing the Kernel stub.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
'''
- api_config = Config.get().api_conf("lfric")
-
+ super().stub_declarations()
# Function space dofmaps
for dmap in sorted(self._unique_fs_maps):
# We declare ndf first as some compilers require this
ndf_name = \
self._unique_fs_maps[dmap].function_space.ndf_name
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- intent="in", entity_decls=[ndf_name]))
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- intent="in", dimension=ndf_name,
- entity_decls=[dmap]))
+ dim = self.symtab.find_or_create(
+ ndf_name, symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ dim.interface = ArgumentInterface(ArgumentInterface.Access.READ)
+ self.symtab.append_argument(dim)
+ dmap_symbol = self.symtab.find_or_create(
+ dmap, symbol_type=DataSymbol,
+ datatype=ArrayType(LFRicTypes("LFRicIntegerScalarDataType")(),
+ [Reference(dim)]))
+ dmap_symbol.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ self.symtab.append_argument(dmap_symbol)
+
# Column-banded dofmaps
for dmap, cma in self._unique_cbanded_maps.items():
if cma["direction"] == "to":
@@ -265,32 +282,57 @@ def _stub_declarations(self, parent):
f"Invalid direction ('{cma['''direction''']}') found for "
f"CMA operator when collecting column-banded dofmaps. "
f"Should be either 'to' or 'from'.")
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- intent="in", entity_decls=[ndf_name]))
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- intent="in",
- dimension=",".join([ndf_name, "nlayers"]),
- entity_decls=[dmap]))
+ symbol = self.symtab.find_or_create(
+ ndf_name, symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ symbol.interface = ArgumentInterface(ArgumentInterface.Access.READ)
+ self.symtab.append_argument(symbol)
+
+ nlayers = self.symtab.find_or_create_tag(
+ "nlayers",
+ symbol_type=LFRicTypes("MeshHeightDataSymbol")
+ )
+ nlayers.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ self.symtab.append_argument(nlayers)
+
+ dmap_symbol = self.symtab.find_or_create(
+ dmap, symbol_type=DataSymbol,
+ datatype=ArrayType(LFRicTypes("LFRicIntegerScalarDataType")(),
+ [Reference(symbol), Reference(nlayers)]))
+ dmap_symbol.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ self.symtab.append_argument(dmap_symbol)
+
# CMA operator indirection dofmaps
+ const = LFRicConstants()
+ suffix = const.ARG_TYPE_SUFFIX_MAPPING["gh_columnwise_operator"]
for dmap, cma in self._unique_indirection_maps.items():
if cma["direction"] == "to":
- dim_name = cma["argument"].name + "_nrow"
+ param = "nrow"
elif cma["direction"] == "from":
- dim_name = cma["argument"].name + "_ncol"
+ param = "ncol"
else:
raise InternalError(
f"Invalid direction ('{cma['''direction''']}') found for "
f"CMA operator when collecting indirection dofmaps. "
f"Should be either 'to' or 'from'.")
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- intent="in", entity_decls=[dim_name]))
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- intent="in", dimension=dim_name,
- entity_decls=[dmap]))
+ arg_name = cma["argument"].name
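+            # Tag the dimension symbol with the argument name, the nrow/ncol
+            # parameter and the CMA-operator suffix so that the same symbol
+            # is reused wherever this tag is looked up.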
+ dim = self.symtab.find_or_create_tag(
+ f"{arg_name}:{param}:{suffix}",
+ root_name=f"{arg_name}_{param}",
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ dim.interface = ArgumentInterface(ArgumentInterface.Access.READ)
+ self.symtab.append_argument(dim)
+
+ dmap_symbol = self.symtab.find_or_create(
+ dmap, symbol_type=DataSymbol,
+ datatype=ArrayType(LFRicTypes("LFRicIntegerScalarDataType")(),
+ [Reference(dim)]))
+ dmap_symbol.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ self.symtab.append_argument(dmap_symbol)
# The list of module members that we wish AutoAPI to generate
diff --git a/src/psyclone/domain/lfric/lfric_extract_driver_creator.py b/src/psyclone/domain/lfric/lfric_extract_driver_creator.py
index d3a2824e31..5e6176c638 100644
--- a/src/psyclone/domain/lfric/lfric_extract_driver_creator.py
+++ b/src/psyclone/domain/lfric/lfric_extract_driver_creator.py
@@ -602,7 +602,10 @@ def _sym_is_field(sym):
print(f"Error finding symbol '{sig_str}' in "
f"'{module_name}'.")
else:
- orig_sym = original_symbol_table.lookup(signature[0])
+ try:
+ orig_sym = original_symbol_table.lookup(signature[0])
+ except KeyError:
+ print(f"Error finding symbol '{signature[0]}'")
if orig_sym and orig_sym.is_array and _sym_is_field(orig_sym):
# This is a field vector, so add all individual fields
diff --git a/src/psyclone/domain/lfric/lfric_fields.py b/src/psyclone/domain/lfric/lfric_fields.py
index 1eeb0aa742..4fbe536f33 100644
--- a/src/psyclone/domain/lfric/lfric_fields.py
+++ b/src/psyclone/domain/lfric/lfric_fields.py
@@ -48,7 +48,10 @@
from psyclone import psyGen
from psyclone.domain.lfric import LFRicCollection, LFRicConstants
from psyclone.errors import InternalError
-from psyclone.f2pygen import DeclGen, TypeDeclGen
+from psyclone.psyir.nodes import Reference
+from psyclone.psyir.symbols import (
+ ArgumentInterface, DataSymbol, ScalarType, ArrayType, UnresolvedType,
+ ImportInterface)
class LFRicFields(LFRicCollection):
@@ -57,7 +60,7 @@ class LFRicFields(LFRicCollection):
or Kernel stub.
'''
- def _invoke_declarations(self, parent):
+ def invoke_declarations(self):
'''
Add field-related declarations to the PSy-layer routine.
Note: PSy layer in LFRic does not modify the field objects. Hence,
@@ -65,14 +68,11 @@ def _invoke_declarations(self, parent):
is only pointed to from the field object and is thus not a part of
the object).
- :param parent: the node in the f2pygen AST representing the PSy-layer
- routine to which to add declarations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
:raises InternalError: for unsupported intrinsic types of field
argument data.
'''
+ super().invoke_declarations()
# Create dict of all field arguments for checks
const = LFRicConstants()
fld_args = self._invoke.unique_declarations(
@@ -115,32 +115,25 @@ def _invoke_declarations(self, parent):
# they contain a pointer to the data that is modified.
for fld_type, fld_mod in field_datatype_map:
args = field_datatype_map[(fld_type, fld_mod)]
- arg_list = [arg.declaration_name for arg in args]
- parent.add(TypeDeclGen(parent, datatype=fld_type,
- entity_decls=arg_list,
- intent="in"))
- (self._invoke.invokes.psy.
- infrastructure_modules[fld_mod].add(fld_type))
-
- def _stub_declarations(self, parent):
+ for arg in args:
+ arg_symbol = self.symtab.lookup(arg.name)
+ arg_symbol.interface.access = ArgumentInterface.Access.READ
+
+ def stub_declarations(self):
'''
Add field-related declarations to a Kernel stub.
- :param parent: the node in the f2pygen AST representing the Kernel
- stub to which to add declarations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
:raises InternalError: for an unsupported data type of field
argument data.
'''
+ super().stub_declarations()
const = LFRicConstants()
fld_args = psyGen.args_filter(
self._kernel.args, arg_types=const.VALID_FIELD_NAMES)
for fld in fld_args:
undf_name = fld.function_space.undf_name
- fld_dtype = fld.intrinsic_type
fld_kind = fld.precision
# Check for invalid descriptor data type
@@ -152,22 +145,39 @@ def _stub_declarations(self, parent):
f"'{fld.declaration_name}'. Supported types are "
f"{const.VALID_FIELD_DATA_TYPES}.")
+ # Create the PSyIR DataType
+ kind_sym = self.symtab.find_or_create(
+ fld_kind, symbol_type=DataSymbol, datatype=UnresolvedType(),
+ interface=ImportInterface(
+ self.symtab.lookup("constants_mod")))
+ if fld.intrinsic_type == "real":
+ intr = ScalarType(ScalarType.Intrinsic.REAL, kind_sym)
+ elif fld.intrinsic_type == "integer":
+ intr = ScalarType(ScalarType.Intrinsic.INTEGER, kind_sym)
+
+ undf_sym = self.symtab.find_or_create(undf_name)
+ datatype = ArrayType(intr, [Reference(undf_sym)])
+
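+            # Map the argument's Fortran intent onto a PSyIR argument access.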
+ if fld.intent == "in":
+ intent = ArgumentInterface.Access.READ
+ elif fld.intent == "inout":
+ intent = ArgumentInterface.Access.READWRITE
+
if fld.vector_size > 1:
for idx in range(1, fld.vector_size+1):
text = (fld.name + "_" +
fld.function_space.mangled_name +
"_v" + str(idx))
- parent.add(
- DeclGen(parent, datatype=fld_dtype, kind=fld_kind,
- dimension=undf_name,
- intent=fld.intent, entity_decls=[text]))
+ arg = self.symtab.find_or_create(
+ text, symbol_type=DataSymbol, datatype=datatype)
+ arg.interface = ArgumentInterface(intent)
+ self.symtab.append_argument(arg)
else:
- parent.add(
- DeclGen(parent, datatype=fld_dtype, kind=fld_kind,
- intent=fld.intent,
- dimension=undf_name,
- entity_decls=[fld.name + "_" +
- fld.function_space.mangled_name]))
+ name = fld.name + "_" + fld.function_space.mangled_name
+ arg = self.symtab.find_or_create(
+ name, symbol_type=DataSymbol, datatype=datatype)
+ arg.interface = ArgumentInterface(intent)
+ self.symtab.append_argument(arg)
# ---------- Documentation utils -------------------------------------------- #
diff --git a/src/psyclone/domain/lfric/lfric_halo_depths.py b/src/psyclone/domain/lfric/lfric_halo_depths.py
index 5b1e634f80..8bc0e5a910 100644
--- a/src/psyclone/domain/lfric/lfric_halo_depths.py
+++ b/src/psyclone/domain/lfric/lfric_halo_depths.py
@@ -42,8 +42,8 @@
from psyclone.configuration import Config
from psyclone.domain.lfric.lfric_collection import LFRicCollection
-from psyclone.f2pygen import DeclGen
from psyclone.psyir.nodes import Literal, Reference
+from psyclone.psyir.symbols import ArgumentInterface, DataSymbol
class LFRicHaloDepths(LFRicCollection):
@@ -66,7 +66,7 @@ def __init__(self, node):
# No distributed memory so there are no halo regions.
return
depth_names = set()
- for kern in self._calls:
+ for kern in self.kernel_calls:
if not kern.halo_depth:
continue
if not isinstance(kern.halo_depth, Literal):
@@ -84,23 +84,27 @@ def __init__(self, node):
depth_names.add(name)
self._halo_depth_vars.add(kern.halo_depth.symbol)
- def _invoke_declarations(self, parent):
+ def invoke_declarations(self):
'''
- Creates the f2pygen declarations for the depths to which any 'halo'
+ Creates the declarations for the depths to which any 'halo'
kernels iterate into the halos.
- :param parent: the node in the f2pygen AST representing the PSy-layer
- routine to which to add declarations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
'''
+ super().invoke_declarations()
# Add the Invoke subroutine argument declarations for the
# different halo depths. They are declared as intent "in".
+ # pylint: disable=import-outside-toplevel
+ from psyclone.domain.lfric import LFRicTypes
if self._halo_depth_vars:
var_names = [sym.name for sym in self._halo_depth_vars]
var_names.sort()
- parent.add(DeclGen(parent, datatype="integer",
- entity_decls=var_names, intent="in"))
+ for name in var_names:
+ sym = self.symtab.find_or_create(
+ name, symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ sym.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ self.symtab.append_argument(sym)
# ---------- Documentation utils -------------------------------------------- #
diff --git a/src/psyclone/domain/lfric/lfric_invoke.py b/src/psyclone/domain/lfric/lfric_invoke.py
index 9392b8fad2..10de10402c 100644
--- a/src/psyclone/domain/lfric/lfric_invoke.py
+++ b/src/psyclone/domain/lfric/lfric_invoke.py
@@ -39,23 +39,21 @@
''' This module implements the LFRic-specific implementation of the Invoke
base class from psyGen.py. '''
-# Imports
from psyclone.configuration import Config
from psyclone.core import AccessType
from psyclone.domain.lfric.lfric_constants import LFRicConstants
from psyclone.errors import GenerationError, FieldNotFoundError
-from psyclone.f2pygen import (AssignGen, CommentGen, DeclGen, SubroutineGen,
- UseGen)
from psyclone.psyGen import Invoke
-from psyclone.psyir.nodes import Literal
+from psyclone.psyir.nodes import Assignment, Reference, Call, Literal
+from psyclone.psyir.symbols import (
+ ContainerSymbol, RoutineSymbol, ImportInterface, DataSymbol, INTEGER_TYPE)
class LFRicInvoke(Invoke):
'''
The LFRic-specific Invoke class. This passes the LFRic-specific
InvokeSchedule class to the base class so it creates the one we
- require. Also overrides the 'gen_code' method so that we generate
- dynamo specific invocation code.
+ require.
:param alg_invocation: object containing the invoke call information.
:type alg_invocation: :py:class:`psyclone.parse.algorithm.InvokeCall`
@@ -78,13 +76,9 @@ def __init__(self, alg_invocation, idx, invokes):
# Import here to avoid circular dependency
# pylint: disable=import-outside-toplevel
from psyclone.domain.lfric import LFRicInvokeSchedule
- reserved_names_list = []
const = LFRicConstants()
- reserved_names_list.extend(const.STENCIL_MAPPING.values())
- reserved_names_list.extend(["omp_get_thread_num",
- "omp_get_max_threads"])
Invoke.__init__(self, alg_invocation, idx, LFRicInvokeSchedule,
- invokes, reserved_names=reserved_names_list)
+ invokes)
# The base class works out the algorithm code's unique argument
# list and stores it in the 'self._alg_unique_args'
@@ -276,26 +270,11 @@ def field_on_space(self, func_space):
return field
return None
- def gen_code(self, parent):
- '''
- Generates LFRic-specific invocation code (the subroutine
- called by the associated Invoke call in the algorithm
- layer). This consists of the PSy invocation subroutine and the
- declaration of its arguments.
-
- :param parent: the parent node in the AST (of the code to be \
- generated) to which the node describing the PSy \
- subroutine will be added.
- :type parent: :py:class:`psyclone.f2pygen.ModuleGen`
+ def setup_psy_layer_symbols(self):
+ ''' Declare, initialise and deallocate all symbols required by the
+ PSy-layer Invoke subroutine.
'''
- # Create the subroutine
- invoke_sub = SubroutineGen(parent, name=self.name,
- args=self.psy_unique_var_names +
- self.stencil.unique_alg_vars +
- self._psy_unique_qr_vars +
- self._alg_unique_halo_depth_args)
-
# Declare all quantities required by this PSy routine (Invoke)
for entities in [self.scalar_args, self.fields, self.lma_ops,
self.stencil, self.meshes,
@@ -306,31 +285,9 @@ def gen_code(self, parent):
self.reference_element_properties,
self.mesh_properties, self.loop_bounds,
self.run_time_checks]:
- entities.declarations(invoke_sub)
-
- # Initialise all quantities required by this PSy routine (Invoke)
-
- if self.schedule.reductions(reprod=True):
- # We have at least one reproducible reduction so we need
- # to know the number of OpenMP threads
- omp_function_name = "omp_get_max_threads"
- tag = "omp_num_threads"
- nthreads_name = \
- self.schedule.symbol_table.lookup_with_tag(tag).name
- invoke_sub.add(UseGen(invoke_sub, name="omp_lib", only=True,
- funcnames=[omp_function_name]))
- # Note: There is no assigned kind for 'integer' 'nthreads' as this
- # would imply assigning 'kind' to 'th_idx' and other elements of
- # the OMPParallelDirective
- invoke_sub.add(DeclGen(invoke_sub, datatype="integer",
- entity_decls=[nthreads_name]))
- invoke_sub.add(CommentGen(invoke_sub, ""))
- invoke_sub.add(CommentGen(
- invoke_sub, " Determine the number of OpenMP threads"))
- invoke_sub.add(CommentGen(invoke_sub, ""))
- invoke_sub.add(AssignGen(invoke_sub, lhs=nthreads_name,
- rhs=omp_function_name+"()"))
+ entities.invoke_declarations()
+ cursor = 0
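+        # Initialise all quantities required by this PSy routine (Invoke),
+        # using 'cursor' to keep track of the insertion position in the
+        # Schedule.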
for entities in [self.proxies, self.run_time_checks,
self.cell_iterators, self.meshes,
self.stencil, self.dofmaps,
@@ -338,29 +295,42 @@ def gen_code(self, parent):
self.function_spaces, self.evaluators,
self.reference_element_properties,
self.mesh_properties, self.loop_bounds]:
- entities.initialise(invoke_sub)
+ cursor = entities.initialise(cursor)
- # Now that everything is initialised and checked, we can call
- # our kernels
-
- invoke_sub.add(CommentGen(invoke_sub, ""))
+ if self.schedule.reductions(reprod=True):
+ # We have at least one reproducible reduction so we need
+ # to know the number of OpenMP threads
+ symtab = self.schedule.symbol_table
+ nthreads = symtab.find_or_create_tag(
+ "omp_num_threads",
+ root_name="nthreads",
+ symbol_type=DataSymbol,
+ datatype=INTEGER_TYPE)
+ omp_lib = symtab.find_or_create("omp_lib",
+ symbol_type=ContainerSymbol)
+ omp_get_max_threads = symtab.find_or_create(
+ "omp_get_max_threads", symbol_type=RoutineSymbol,
+ interface=ImportInterface(omp_lib))
+
+ assignment = Assignment.create(
+ lhs=Reference(nthreads),
+ rhs=Call.create(omp_get_max_threads))
+ assignment.append_preceding_comment(
+ "Determine the number of OpenMP threads")
+ self.schedule.addchild(assignment, 0)
+ cursor += 1
+
+ # Now that all initialisation is done, add the comment before
+ # the start of the kernels
if Config.get().distributed_memory:
- invoke_sub.add(CommentGen(invoke_sub, " Call kernels and "
- "communication routines"))
+ self.schedule[cursor].preceding_comment = (
+ "Call kernels and communication routines")
else:
- invoke_sub.add(CommentGen(invoke_sub, " Call our kernels"))
- invoke_sub.add(CommentGen(invoke_sub, ""))
-
- # Add content from the schedule
- self.schedule.gen_code(invoke_sub)
+ self.schedule[cursor].preceding_comment = (
+ "Call kernels")
# Deallocate any basis arrays
- self.evaluators.deallocate(invoke_sub)
-
- invoke_sub.add(CommentGen(invoke_sub, ""))
-
- # finally, add me to my parent
- parent.add(invoke_sub)
+ self.evaluators.deallocate()
# ---------- Documentation utils -------------------------------------------- #
diff --git a/src/psyclone/domain/lfric/lfric_invoke_schedule.py b/src/psyclone/domain/lfric/lfric_invoke_schedule.py
index a3bd453b7b..3f468a3cd3 100644
--- a/src/psyclone/domain/lfric/lfric_invoke_schedule.py
+++ b/src/psyclone/domain/lfric/lfric_invoke_schedule.py
@@ -62,9 +62,6 @@ class LFRicInvokeSchedule(InvokeSchedule):
algorithm layer.
:type alg_calls: Optional[list of
:py:class:`psyclone.parse.algorithm.KernelCall`]
- :param reserved_names: optional list of names that are not allowed in the
- new InvokeSchedule SymbolTable.
- :type reserved_names: list[str]
:param parent: the parent of this node in the PSyIR.
:type parent: :py:class:`psyclone.psyir.nodes.Node`
@@ -73,12 +70,11 @@ class LFRicInvokeSchedule(InvokeSchedule):
# symbol table.
_symbol_table_class = LFRicSymbolTable
- def __init__(self, symbol, alg_calls=None, reserved_names=None,
- parent=None, **kwargs):
+ def __init__(self, symbol, alg_calls=None, parent=None, **kwargs):
if not alg_calls:
alg_calls = []
super().__init__(symbol, LFRicKernCallFactory,
- LFRicBuiltInCallFactory, alg_calls, reserved_names,
+ LFRicBuiltInCallFactory, alg_calls,
parent=parent, **kwargs)
def node_str(self, colour=True):
diff --git a/src/psyclone/domain/lfric/lfric_kern.py b/src/psyclone/domain/lfric/lfric_kern.py
index 6db210e01e..0b42329806 100644
--- a/src/psyclone/domain/lfric/lfric_kern.py
+++ b/src/psyclone/domain/lfric/lfric_kern.py
@@ -46,20 +46,21 @@
from psyclone.configuration import Config
from psyclone.core import AccessType
from psyclone.domain.lfric.kern_call_arg_list import KernCallArgList
+from psyclone.domain.lfric.lfric_constants import LFRicConstants
+from psyclone.domain.lfric.lfric_symbol_table import LFRicSymbolTable
from psyclone.domain.lfric.kern_stub_arg_list import KernStubArgList
from psyclone.domain.lfric.kernel_interface import KernelInterface
-from psyclone.domain.lfric.lfric_constants import LFRicConstants
from psyclone.domain.lfric.lfric_types import LFRicTypes
from psyclone.errors import GenerationError, InternalError, FieldNotFoundError
-from psyclone.f2pygen import ModuleGen, SubroutineGen, UseGen
from psyclone.parse.algorithm import Arg, KernelCall
from psyclone.psyGen import InvokeSchedule, CodedKern, args_filter
from psyclone.psyir.frontend.fortran import FortranReader
from psyclone.psyir.frontend.fparser2 import Fparser2Reader
-from psyclone.psyir.nodes import (Loop, Literal, Reference,
- KernelSchedule)
-from psyclone.psyir.symbols import (DataSymbol, ScalarType, ArrayType,
- INTEGER_TYPE)
+from psyclone.psyir.nodes import (
+ Loop, Literal, Reference, KernelSchedule, Container, Routine)
+from psyclone.psyir.symbols import (
+ DataSymbol, ScalarType, ArrayType, UnsupportedFortranType, DataTypeSymbol,
+ UnresolvedType, ContainerSymbol, INTEGER_TYPE, UnresolvedInterface)
class LFRicKern(CodedKern):
@@ -97,6 +98,7 @@ def __init__(self):
from psyclone.dynamo0p3 import DynKernelArguments
self._arguments = DynKernelArguments(None, None) # for pyreverse
self._parent = None
+ self._stub_symbol_table = LFRicSymbolTable()
self._base_name = ""
self._func_descriptors = None
self._fs_descriptors = None
@@ -145,6 +147,12 @@ def reference_accesses(self, var_accesses):
# Use the KernelCallArgList class, which can also provide variable
# access information:
create_arg_list = KernCallArgList(self)
+        # KernCallArgList creates symbols (sometimes with the wrong type) that
+        # we don't want kept in the SymbolTable, so we work on a copy of it.
+ # TODO #2874: The design could be improved so that only the right
+ # symbols are created
+ tmp_symtab = self.ancestor(InvokeSchedule).symbol_table.deep_copy()
+ create_arg_list._forced_symtab = tmp_symtab
create_arg_list.generate(var_accesses)
super().reference_accesses(var_accesses)
@@ -326,10 +334,10 @@ def _setup(self, ktype, module_name, args, parent, check=True):
# variable name or a literal value.
freader = FortranReader()
invoke_schedule = self.ancestor(InvokeSchedule)
- table = invoke_schedule.symbol_table if invoke_schedule else None
+ symtab = invoke_schedule.symbol_table if invoke_schedule else None
if "halo" in ktype.iterates_over:
self._halo_depth = freader.psyir_from_expression(
- args[-1].text.lower(), symbol_table=table)
+ args[-1].text.lower(), symbol_table=symtab)
if isinstance(self._halo_depth, Reference):
# If we got a Reference, check whether we need to specialise
# the associated Symbol.
@@ -346,26 +354,16 @@ def _setup(self, ktype, module_name, args, parent, check=True):
# The quadrature-related arguments to a kernel always come last so
# construct an enumerator with start value -
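+        # Use the InvokeSchedule's symbol table if we are within an Invoke;
+        # otherwise we are generating a kernel stub and use its own table.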
+ if self.ancestor(InvokeSchedule):
+ symtab = self.ancestor(InvokeSchedule).symbol_table
+ else:
+ symtab = self._stub_symbol_table
+
start_value = -len(qr_shapes)
if self._halo_depth:
start_value -= 1
for idx, shape in enumerate(qr_shapes, start_value):
- qr_arg = args[idx]
-
- # Use the InvokeSchedule symbol_table to create a unique symbol
- # name for the whole Invoke.
- if qr_arg.varname:
- tag = "AlgArgs_" + qr_arg.text
- qr_name = table.find_or_create_integer_symbol(qr_arg.varname,
- tag=tag).name
- else:
- # If we don't have a name then we must be doing kernel-stub
- # generation so create a suitable name.
- # TODO #719 we don't yet have a symbol table to prevent
- # clashes.
- qr_name = "qr_"+shape.split("_")[-1]
-
# LFRic api kernels require quadrature rule arguments to be
# passed in if one or more basis functions are used by the kernel
# and gh_shape == "gh_quadrature_***".
@@ -384,10 +382,31 @@ def _setup(self, ktype, module_name, args, parent, check=True):
raise InternalError(f"Unsupported quadrature shape "
f"('{shape}') found in LFRicKern._setup")
+ qr_arg = args[idx]
+ quad_map = const.QUADRATURE_TYPE_MAP[shape]
+
+ # Use the InvokeSchedule or Stub symbol_table that we obtained
+ # earlier to create a unique symbol name
+ if qr_arg.varname:
+ # If we have a name for the qr argument, we are dealing with
+ # an Invoke
+ tag = "AlgArgs_" + qr_arg.text
+ qr_sym = symtab.find_or_create(
+ qr_arg.varname, tag=tag, symbol_type=DataSymbol,
+ datatype=symtab.find_or_create(
+ quad_map["type"], symbol_type=DataTypeSymbol,
+ datatype=UnresolvedType(),
+ interface=UnresolvedInterface())
+ )
+ qr_name = qr_sym.name
+ else:
+ # If we don't have a name then we must be doing kernel-stub
+ # generation so create a suitable name.
+ qr_name = "qr_"+shape.split("_")[-1]
+
# Append the name of the qr argument to the names of the qr-related
# variables.
qr_args = [arg + "_" + qr_name for arg in qr_args]
-
self._qr_rules[shape] = self.QRRule(qr_arg.text, qr_name, qr_args)
if "gh_evaluator" in self._eval_shapes:
@@ -451,12 +470,9 @@ def is_intergrid(self):
return self._intergrid_ref is not None
@property
- def colourmap(self):
+ def colourmap(self) -> DataSymbol:
'''
- Getter for the name of the colourmap associated with this kernel call.
-
- :returns: name of the colourmap (Fortran array).
- :rtype: str
+ :returns: the symbol representing the colourmap for this kernel call.
:raises InternalError: if this kernel is not coloured.
@@ -466,16 +482,17 @@ def colourmap(self):
f"coloured loop.")
sched = self.ancestor(InvokeSchedule)
if self.is_intergrid:
- cmap = self._intergrid_ref.colourmap_symbol.name
+ cmap = self._intergrid_ref.colourmap_symbol
else:
try:
- cmap = sched.symbol_table.lookup_with_tag("cmap").name
+ cmap = sched.symbol_table.lookup_with_tag("cmap")
except KeyError:
# We have to do this here as _init_colourmap (which calls this
# method) is only called at code-generation time.
- cmap = sched.symbol_table.find_or_create_array(
- "cmap", 2, ScalarType.Intrinsic.INTEGER,
- tag="cmap").name
+ cmap = sched.symbol_table.find_or_create_tag(
+ "cmap", symbol_type=DataSymbol,
+ datatype=UnsupportedFortranType(
+ "integer(kind=i_def), pointer :: cmap(:,:)"))
return cmap
@@ -527,9 +544,7 @@ def ncolours_var(self):
f"coloured loop.")
if self.is_intergrid:
ncols_sym = self._intergrid_ref.ncolours_var_symbol
- if not ncols_sym:
- return None
- return ncols_sym.name
+ return ncols_sym.name if ncols_sym is not None else None
return self.scope.symbol_table.lookup_with_tag("ncolour").name
@@ -634,12 +649,11 @@ def argument_kinds(self):
return self._argument_kinds
@property
- def gen_stub(self):
+ def gen_stub(self) -> Container:
'''
- Create the fparser1 AST for a kernel stub.
+ Create the PSyIR for a kernel stub.
- :returns: root of fparser1 AST for the stub routine.
- :rtype: :py:class:`fparser.one.block_statements.Module`
+ :returns: the kernel stub root Container.
:raises GenerationError: if the supplied kernel stub does not operate
on a supported subset of the domain (currently only those that
@@ -661,12 +675,22 @@ def gen_stub(self):
f"operate on one of {supported_operates_on} but found "
f"'{self.iterates_over}' in kernel '{self.name}'.")
- # Create an empty PSy layer module
- psy_module = ModuleGen(self._base_name+"_mod")
+ # Create an empty Stub module
+ stub_module = Container(self._base_name+"_mod")
# Create the subroutine
- sub_stub = SubroutineGen(psy_module, name=self._base_name+"_code",
- implicitnone=True)
+ stub_routine = Routine.create(self._base_name+"_code")
+ stub_module.addchild(stub_routine)
+ self._stub_symbol_table = stub_routine.symbol_table
+
+ # Add wildcard "use" statement for all supported argument
+ # kinds (precisions)
+ stub_routine.symbol_table.add(
+ ContainerSymbol(
+ const.UTILITIES_MOD_MAP["constants"]["module"],
+ wildcard_import=True
+ )
+ )
# Add all the declarations
# Import here to avoid circular dependency
@@ -683,24 +707,20 @@ def gen_stub(self):
DynLMAOperators, LFRicStencils, DynBasisFunctions,
DynBoundaryConditions, DynReferenceElement,
LFRicMeshProperties]:
- entities(self).declarations(sub_stub)
+ entities(self).stub_declarations()
- # Add wildcard "use" statement for all supported argument
- # kinds (precisions)
- sub_stub.add(
- UseGen(sub_stub,
- name=const.UTILITIES_MOD_MAP["constants"]["module"]))
-
- # Create the arglist
+        # TODO #2874: The declarations above are not in the required order, so
+        # we use KernStubArgList to generate the argument names in the correct
+        # order.
create_arg_list = KernStubArgList(self)
+ create_arg_list._forced_symtab = stub_routine.symbol_table
create_arg_list.generate()
+ arg_list = []
+ for argument_name in create_arg_list.arglist:
+ arg_list.append(stub_routine.symbol_table.lookup(argument_name))
+ stub_routine.symbol_table.specify_argument_list(arg_list)
- # Add the arglist
- sub_stub.args = create_arg_list.arglist
-
- # Add the subroutine to the parent module
- psy_module.add(sub_stub)
- return psy_module.root
+ return stub_module
def get_kernel_schedule(self):
'''Returns a PSyIR Schedule representing the kernel code. The base
diff --git a/src/psyclone/domain/lfric/lfric_loop.py b/src/psyclone/domain/lfric/lfric_loop.py
index 4de2c32cfb..99ef76c646 100644
--- a/src/psyclone/domain/lfric/lfric_loop.py
+++ b/src/psyclone/domain/lfric/lfric_loop.py
@@ -46,13 +46,14 @@
from psyclone.domain.lfric import LFRicConstants, LFRicKern
from psyclone.domain.lfric.lfric_types import LFRicTypes
from psyclone.errors import GenerationError, InternalError
-from psyclone.f2pygen import CallGen, CommentGen
-from psyclone.psyGen import InvokeSchedule, HaloExchange
-from psyclone.psyir.backend.fortran import FortranWriter
-from psyclone.psyir.nodes import (ArrayReference, ACCRegionDirective, DataNode,
- Loop, Literal, OMPRegionDirective,
- PSyDataNode, Reference, Routine, Schedule)
-from psyclone.psyir.symbols import DataSymbol, INTEGER_TYPE
+from psyclone.psyGen import (
+ InvokeSchedule, HaloExchange, zero_reduction_variables)
+from psyclone.psyir.nodes import (
+ Loop, Literal, Schedule, Reference, ArrayReference, StructureReference,
+ Call, BinaryOperation, ArrayOfStructuresReference, Directive, DataNode,
+ Node)
+from psyclone.psyir.symbols import (
+ DataSymbol, INTEGER_TYPE, UnresolvedType, UnresolvedInterface)
class LFRicLoop(PSyLoop):
@@ -67,6 +68,8 @@ class LFRicLoop(PSyLoop):
:type kwargs: unwrapped dict.
:raises InternalError: if an unrecognised loop_type is specified.
+    :raises InternalError: if the loop is created without a parent that is a
+        descendant of an InvokeSchedule.
'''
# pylint: disable=too-many-instance-attributes
@@ -102,11 +105,24 @@ def __init__(self, loop_type="", **kwargs):
tag, root_name=suggested_name, symbol_type=DataSymbol,
datatype=LFRicTypes("LFRicIntegerScalarDataType")())
- # Pre-initialise the Loop children # TODO: See issue #440
- self.addchild(Literal("NOT_INITIALISED", INTEGER_TYPE,
- parent=self)) # start
- self.addchild(Literal("NOT_INITIALISED", INTEGER_TYPE,
- parent=self)) # stop
+ # Initialise loop bounds
+ ischedule = self.ancestor(InvokeSchedule)
+ if not ischedule:
+ raise InternalError(
+ "LFRic loops must be inside an InvokeSchedule, a parent "
+ "argument is mandatory when they are created.")
+        # The loop-bound names are numbered according to the LFRic loops
+        # already present in the Schedule. Since loops are inserted in order,
+        # this produces sequentially ascending loop-bound names.
+ idx = len(ischedule.loops())
+ start_name = f"loop{idx}_start"
+ stop_name = f"loop{idx}_stop"
+ lbound = ischedule.symbol_table.find_or_create_integer_symbol(
+ start_name, tag=start_name)
+ ubound = ischedule.symbol_table.find_or_create_integer_symbol(
+ stop_name, tag=stop_name)
+ self.addchild(Reference(lbound)) # start
+ self.addchild(Reference(ubound)) # stop
self.addchild(Literal("1", INTEGER_TYPE, parent=self)) # step
self.addchild(Schedule(parent=self)) # loop body
@@ -129,6 +145,25 @@ def lower_to_language_level(self):
:rtype: :py:class:`psyclone.psyir.node.Node`
'''
+ if (not Config.get().distributed_memory and
+ all(kern.iterates_over == "halo_cell_column" for
+ kern in self.kernels())):
+ # No distributed memory and thus no halo cells but all kernels
+ # only operate on halo cells => nothing to do.
+ self.detach()
+ return None
+
+ # Get the list of calls (to kernels) that need reduction variables
+ if not self.is_openmp_parallel():
+ calls = self.reductions()
+ zero_reduction_variables(calls)
+
+ # Set halo clean/dirty for all fields that are modified
+ if Config.get().distributed_memory:
+ if self._loop_type != "colour":
+ if self.unique_modified_args("gh_field"):
+ self.gen_mark_halos_clean_dirty()
+
if self._loop_type != "null":
# This is not a 'domain' loop (i.e. there is a real loop). First
@@ -148,6 +183,7 @@ def lower_to_language_level(self):
# Finally create the new lowered Loop and replace the domain one
loop = Loop.create(self._variable, start, stop, step, [])
+ loop.preceding_comment = self.preceding_comment
loop.loop_body._symbol_table = \
self.loop_body.symbol_table.shallow_copy()
loop.children[3] = self.loop_body.copy()
@@ -372,14 +408,9 @@ def upper_bound_halo_depth(self):
'''
return self._upper_bound_halo_depth
- def _lower_bound_fortran(self):
+ def lower_bound_psyir(self) -> Node:
'''
- Create the associated Fortran code for the type of lower bound.
-
- TODO: Issue #440. lower_bound_fortran should generate PSyIR.
-
- :returns: the Fortran code for the lower bound.
- :rtype: str
+        :returns: the PSyIR for the lower bound of this loop.
:raises GenerationError: if self._lower_bound_name is not "start"
for sequential code.
@@ -392,7 +423,7 @@ def _lower_bound_fortran(self):
f"The lower bound must be 'start' if we are sequential but "
f"found '{self._upper_bound_name}'")
if self._lower_bound_name == "start":
- return "1"
+ return Literal("1", INTEGER_TYPE)
# the start of our space is the end of the previous space +1
if self._lower_bound_name == "inner":
@@ -418,10 +449,15 @@ def _lower_bound_fortran(self):
f"found")
# Use InvokeSchedule SymbolTable to share the same symbol for all
# Loops in the Invoke.
- mesh_obj_name = self.ancestor(InvokeSchedule).symbol_table.\
- find_or_create_tag("mesh").name
- return mesh_obj_name + "%get_last_" + prev_space_name + "_cell(" \
- + prev_space_index_str + ")+1"
+ mesh_obj = self.ancestor(InvokeSchedule).symbol_table.\
+ find_or_create_tag("mesh")
+ call = Call.create(
+ StructureReference.create(
+ mesh_obj, ["get_last_" + prev_space_name + "_cell"]))
+ if prev_space_index_str:
+ call.addchild(Literal(prev_space_index_str, INTEGER_TYPE))
+ return BinaryOperation.create(BinaryOperation.Operator.ADD,
+ call, Literal("1", INTEGER_TYPE))
@property
def _mesh_name(self):
@@ -449,22 +485,17 @@ def _mesh_name(self):
return self.ancestor(InvokeSchedule).symbol_table.\
lookup_with_tag(tag_name).name
- def _upper_bound_fortran(self):
- ''' Create the Fortran code that gives the appropriate upper bound
- value for this type of loop.
-
- TODO: Issue #440. upper_bound_fortran should generate PSyIR.
-
- :returns: Fortran code for the upper bound of this loop.
- :rtype: str
+ def upper_bound_psyir(self) -> Node:
+ '''
+        :returns: the PSyIR for the upper bound of this loop.
'''
- # pylint: disable=too-many-branches, too-many-return-statements
- # precompute halo_index as a string as we use it in more than
- # one of the if clauses
- halo_index = ""
+ sym_tab = self.ancestor(InvokeSchedule).symbol_table
+
+ # Precompute halo_index as we use it in more than one of the if clauses
+ halo_index = None
if self._upper_bound_halo_depth:
- halo_index = FortranWriter()(self._upper_bound_halo_depth)
+ halo_index = self._upper_bound_halo_depth
if self._upper_bound_name == "ncolours":
# Loop over colours
@@ -479,7 +510,7 @@ def _upper_bound_fortran(self):
raise InternalError(
f"All kernels within a loop over colours must have "
f"been coloured but kernel '{kern.name}' has not")
- return kernels[0].ncolours_var
+ return Reference(sym_tab.lookup(kernels[0].ncolours_var))
if self._upper_bound_name == "ncolour":
# Loop over cells of a particular colour when DM is disabled.
@@ -488,64 +519,91 @@ def _upper_bound_fortran(self):
root_name = "last_edge_cell_all_colours"
if self._kern.is_intergrid:
root_name += "_" + self._field_name
- sym = self.ancestor(
- InvokeSchedule).symbol_table.find_or_create_tag(root_name)
- return f"{sym.name}(colour)"
+ sym = sym_tab.find_or_create_tag(root_name)
+ colour = sym_tab.lookup_with_tag("colours_loop_idx")
+ return ArrayReference.create(sym, [Reference(colour)])
if self._upper_bound_name == "colour_halo":
# Loop over cells of a particular colour when DM is enabled. The
# LFRic API used here allows for colouring with redundant
# computation.
- sym_tab = self.ancestor(InvokeSchedule).symbol_table
if halo_index:
# The colouring API provides a 2D array that holds the last
# halo cell for a given colour and halo depth.
- depth = halo_index
+ depth = halo_index.copy()
else:
# If no depth is specified then we go to the full halo depth
- depth = sym_tab.find_or_create_tag(
- f"max_halo_depth_{self._mesh_name}").name
+ depth = Reference(sym_tab.find_or_create_tag(
+ f"max_halo_depth_{self._mesh_name}"))
root_name = "last_halo_cell_all_colours"
if self._kern.is_intergrid:
root_name += "_" + self._field_name
sym = sym_tab.find_or_create_tag(root_name)
- return f"{sym.name}(colour, {depth})"
+ colour = Reference(sym_tab.lookup_with_tag("colours_loop_idx"))
+ return ArrayReference.create(sym, [colour, depth])
if self._upper_bound_name in ["ndofs", "nannexed"]:
if Config.get().distributed_memory:
if self._upper_bound_name == "ndofs":
- result = (
- f"{self.field.proxy_name_indexed}%"
- f"{self.field.ref_name()}%get_last_dof_owned()")
- else: # nannexed
- result = (
- f"{self.field.proxy_name_indexed}%"
- f"{self.field.ref_name()}%get_last_dof_annexed()")
+ method = "get_last_dof_owned"
+ else:
+ method = "get_last_dof_annexed"
+ result = Call.create(
+ StructureReference.create(
+ sym_tab.lookup(self.field.proxy_name_indexed),
+ [self.field.ref_name(), method]
+ )
+ )
else:
- result = self._kern.undf_name
+ result = Reference(sym_tab.lookup(self._kern.undf_name))
return result
if self._upper_bound_name == "ncells":
if Config.get().distributed_memory:
- result = f"{self._mesh_name}%get_last_edge_cell()"
+ result = Call.create(
+ StructureReference.create(
+ sym_tab.lookup(self._mesh_name),
+ ["get_last_edge_cell"]
+ )
+ )
else:
- result = (f"{self.field.proxy_name_indexed}%"
- f"{self.field.ref_name()}%get_ncell()")
+ result = self.field.generate_method_call("get_ncell")
return result
if self._upper_bound_name == "cell_halo":
if Config.get().distributed_memory:
- return f"{self._mesh_name}%get_last_halo_cell({halo_index})"
+ result = Call.create(
+ StructureReference.create(
+ sym_tab.lookup(self._mesh_name),
+ ["get_last_halo_cell"]
+ )
+ )
+ if halo_index:
+ result.addchild(halo_index.copy())
+ return result
raise GenerationError(
"'cell_halo' is not a valid loop upper bound for "
"sequential/shared-memory code")
if self._upper_bound_name == "dof_halo":
if Config.get().distributed_memory:
- return (f"{self.field.proxy_name_indexed}%"
- f"{self.field.ref_name()}%get_last_dof_halo("
- f"{halo_index})")
+ result = Call.create(
+ StructureReference.create(
+ sym_tab.lookup(self.field.proxy_name_indexed),
+ [self.field.ref_name(), "get_last_dof_halo"]
+ )
+ )
+ if halo_index:
+ result.addchild(halo_index.copy())
+ return result
raise GenerationError(
"'dof_halo' is not a valid loop upper bound for "
"sequential/shared-memory code")
if self._upper_bound_name == "inner":
if Config.get().distributed_memory:
- return f"{self._mesh_name}%get_last_inner_cell({halo_index})"
+ result = Call.create(
+ StructureReference.create(
+ sym_tab.lookup(self._mesh_name),
+ ["get_last_inner_cell"]
+ )
+ )
+ result.addchild(halo_index)
+ return result
raise GenerationError(
"'inner' is not a valid loop upper bound for "
"sequential/shared-memory code")
@@ -798,155 +856,31 @@ def create_halo_exchanges(self):
# or not
self._add_halo_exchange(halo_field)
- @property
- def start_expr(self):
- '''
- :returns: the PSyIR for the lower bound of this loop.
- :rtype: :py:class:`psyclone.psyir.Node`
-
- '''
- inv_sched = self.ancestor(Routine)
- sym_table = inv_sched.symbol_table
- loops = inv_sched.loops()
- posn = None
- for index, loop in enumerate(loops):
- if loop is self:
- posn = index
- break
- root_name = f"loop{posn}_start"
- lbound = sym_table.find_or_create_integer_symbol(root_name,
- tag=root_name)
- self.children[0] = Reference(lbound)
- return self.children[0]
-
- @property
- def stop_expr(self):
- '''
- :returns: the PSyIR for the upper bound of this loop.
- :rtype: :py:class:`psyclone.psyir.Node`
-
- '''
- inv_sched = self.ancestor(Routine)
- sym_table = inv_sched.symbol_table
-
- if self._loop_type == "colour":
- # If this loop is over all cells of a given colour then we must
- # lookup the loop bound as it depends on the current colour.
- parent_loop = self.ancestor(Loop)
- colour_var = parent_loop.variable
-
- asym = self.kernel.last_cell_all_colours_symbol
- const = LFRicConstants()
-
- if self.upper_bound_name in const.HALO_ACCESS_LOOP_BOUNDS:
- if self._upper_bound_halo_depth:
- halo_depth = self._upper_bound_halo_depth.copy()
- else:
- # We need to go to the full depth of the halo.
- root_name = "mesh"
- if self.kernels()[0].is_intergrid:
- root_name += f"_{self._field_name}"
- depth_sym = sym_table.lookup_with_tag(
- f"max_halo_depth_{root_name}")
- halo_depth = Reference(depth_sym)
-
- return ArrayReference.create(asym, [Reference(colour_var),
- halo_depth])
- return ArrayReference.create(asym, [Reference(colour_var)])
-
- # This isn't a 'colour' loop so we have already set-up a
- # variable that holds the upper bound.
- loops = inv_sched.loops()
- posn = None
- for index, loop in enumerate(loops):
- if loop is self:
- posn = index
- break
- root_name = f"loop{posn}_stop"
- ubound = sym_table.find_or_create_integer_symbol(root_name,
- tag=root_name)
- self.children[1] = Reference(ubound)
- return self.children[1]
-
- def gen_code(self, parent):
- ''' Call the base class to generate the code and then add any
- required halo exchanges.
-
- :param parent: an f2pygen object that will be the parent of \
- f2pygen objects created in this method.
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
-
-
- '''
- # pylint: disable=too-many-statements, too-many-branches
- if (not Config.get().distributed_memory and
- all(kern.iterates_over == "halo_cell_column" for
- kern in self.kernels())):
- # No distributed memory and thus no halo cells but all kernels
- # only operate on halo cells => nothing to do.
- return
-
- # Check that we're not within an OpenMP parallel region if
- # we are a loop over colours.
- if self._loop_type == "colours" and self.is_openmp_parallel():
- raise GenerationError("Cannot have a loop over colours within an "
- "OpenMP parallel region.")
-
- super().gen_code(parent)
-
- for psydata_node in self.walk(PSyDataNode):
- psydata_node.fix_gen_code(parent)
-
- # TODO #1010: gen_code of this loop calls the PSyIR lowering version,
- # but that method can not currently provide sibling nodes because the
- # ancestor is not PSyIR, so for now we leave the remainder of the
- # gen_code logic here instead of removing the whole method.
-
- if not (Config.get().distributed_memory and
- self._loop_type != "colour"):
- # No need to add halo exchanges so we are done.
- return
-
- # Set halo clean/dirty for all fields that are modified
- if not self.unique_modified_args("gh_field"):
- return
-
- if self.ancestor((ACCRegionDirective, OMPRegionDirective)):
- # We cannot include calls to set halos dirty/clean within OpenACC
- # or OpenMP regions. This is handled by the appropriate Directive
- # class instead.
- # TODO #1755 can this check be made more general (e.g. to include
- # Extraction regions)?
- return
-
- parent.add(CommentGen(parent, ""))
- if self._loop_type != "null":
- prev_node_name = "loop"
- else:
- prev_node_name = "kernel"
- parent.add(CommentGen(parent, f" Set halos dirty/clean for fields "
- f"modified in the above {prev_node_name}"))
- parent.add(CommentGen(parent, ""))
-
- self.gen_mark_halos_clean_dirty(parent)
-
- parent.add(CommentGen(parent, ""))
-
- def gen_mark_halos_clean_dirty(self, parent):
+ def gen_mark_halos_clean_dirty(self):
'''
Generates the necessary code to mark halo regions for all modified
fields as clean or dirty following execution of this loop.
-
- :param parent: the node in the f2pygen AST to which to add content.
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
-
'''
# Set halo clean/dirty for all fields that are modified
fields = self.unique_modified_args("gh_field")
+ sym_table = self.ancestor(InvokeSchedule).symbol_table
+ insert_loc = self
+        # If there is an enclosing directive, keep walking up the tree
+ while isinstance(insert_loc.parent.parent, Directive):
+ insert_loc = insert_loc.parent.parent
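+        # The set_dirty/set_clean calls must not appear inside an OpenACC or
+        # OpenMP region, so they are inserted after the outermost enclosing
+        # directive (if any) rather than inside it.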
+ cursor = insert_loc.position
+ insert_loc = insert_loc.parent
+ init_cursor = cursor
+
# First set all of the halo dirty unless we are
# subsequently going to set all of the halo clean
for field in fields:
+ field_symbol = sym_table.find_or_create(
+ field.proxy_name,
+ symbol_type=DataSymbol,
+ datatype=UnresolvedType(),
+ interface=UnresolvedInterface())
# Avoid circular import
# pylint: disable=import-outside-toplevel
from psyclone.dynamo0p3 import HaloWriteAccess
@@ -959,28 +893,49 @@ def gen_mark_halos_clean_dirty(self, parent):
# the range function below returns values from 1 to the
# vector size which is what we require in our Fortran code
for index in range(1, field.vector_size+1):
- parent.add(CallGen(parent, name=field.proxy_name +
- f"({index})%set_dirty()"))
+ idx_literal = Literal(str(index), INTEGER_TYPE)
+ call = Call.create(ArrayOfStructuresReference.create(
+ field_symbol, [idx_literal], ["set_dirty"]))
+ cursor += 1
+ insert_loc.addchild(call, cursor)
else:
- parent.add(CallGen(parent, name=field.proxy_name +
- "%set_dirty()"))
+ call = Call.create(StructureReference.create(
+ field_symbol, ["set_dirty"]))
+ cursor += 1
+ insert_loc.addchild(call, cursor)
+
# Now set appropriate parts of the halo clean where redundant
# computation has been performed or a kernel is written to operate
# on halo cells.
clean_depth = hwa.clean_depth
if clean_depth:
- depth_str = FortranWriter()(clean_depth)
if field.vector_size > 1:
# The range function below returns values from 1 to the
# vector size, as required in our Fortran code.
for index in range(1, field.vector_size+1):
- parent.add(CallGen(
- parent, name=f"{field.proxy_name}({index})%"
- f"set_clean({depth_str})"))
+ set_clean = Call.create(
+ ArrayOfStructuresReference.create(
+ field_symbol,
+ [Literal(str(index), INTEGER_TYPE)],
+ ["set_clean"]))
+ set_clean.addchild(clean_depth.copy())
+ cursor += 1
+ insert_loc.addchild(set_clean, cursor)
else:
- parent.add(CallGen(
- parent, name=f"{field.proxy_name}%set_clean("
- f"{depth_str})"))
+ set_clean = Call.create(
+ StructureReference.create(
+ field_symbol, ["set_clean"]))
+ set_clean.addchild(clean_depth.copy())
+ cursor += 1
+ insert_loc.addchild(set_clean, cursor)
+
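+        # If any set_dirty/set_clean calls were added, clear any existing
+        # "Set halos dirty" comments from this point onwards and attach a
+        # single summary comment to the first of the new calls.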
+ if cursor > init_cursor:
+ for child in insert_loc.children[init_cursor:]:
+ if child.preceding_comment.startswith("Set halos dirty"):
+ child.preceding_comment = ""
+ insert_loc[init_cursor + 1].preceding_comment = (
+ "Set halos dirty/clean for fields modified in the above "
+ "loop(s)")
def independent_iterations(self,
test_all_variables=False,
@@ -1030,9 +985,9 @@ def independent_iterations(self,
return True
except (InternalError, KeyError):
# LFRic still has symbols that don't exist in the symbol_table
- # until the gen_code() step, so the dependency analysis raises
+ # until the lowering step, so the dependency analysis raises
# errors in some cases.
- # TODO #1648 - when a transformation colours a loop we must
+ # TODO #2874 - when a transformation colours a loop we must
# ensure "last_[halo]_cell_all_colours" is added to the symbol
# table.
return True
diff --git a/src/psyclone/domain/lfric/lfric_loop_bounds.py b/src/psyclone/domain/lfric/lfric_loop_bounds.py
index df845204e2..dc50848872 100644
--- a/src/psyclone/domain/lfric/lfric_loop_bounds.py
+++ b/src/psyclone/domain/lfric/lfric_loop_bounds.py
@@ -39,9 +39,8 @@
''' This module provides the LFRicLoopBounds Class that handles all variables
required for specifying loop limits within an LFRic PSy-layer routine.'''
-from psyclone.configuration import Config
-from psyclone.domain.lfric import LFRicCollection
-from psyclone.f2pygen import AssignGen, CommentGen, DeclGen
+from psyclone.domain.lfric import LFRicCollection, LFRicLoop
+from psyclone.psyir.nodes import Assignment, Reference
class LFRicLoopBounds(LFRicCollection):
@@ -50,61 +49,65 @@ class LFRicLoopBounds(LFRicCollection):
an LFRic PSy-layer routine.
'''
- def _invoke_declarations(self, parent):
+ def initialise(self, cursor: int) -> int:
'''
- Only needed because method is virtual in parent class.
-
- :param parent: the f2pygen node representing the PSy-layer routine.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
- '''
-
- def initialise(self, parent):
- '''
- Updates the f2pygen AST so that all of the variables holding the lower
+ Updates the PSyIR so that all of the variables holding the lower
and upper bounds of all loops in an Invoke are initialised.
- :param parent: the f2pygen node representing the PSy-layer routine.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+        :param cursor: position at which to add the next initialisation
+            statements.
+        :returns: the updated cursor value.
'''
- loops = self._invoke.schedule.loops()
-
- if not loops:
- return
-
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent, " Set-up all of the loop bounds"))
- parent.add(CommentGen(parent, ""))
-
- sym_table = self._invoke.schedule.symbol_table
- config = Config.get()
- api_config = config.api_conf("lfric")
+ loops = filter(lambda x: isinstance(x, LFRicLoop),
+ self._invoke.schedule.loops())
+ first = True
for idx, loop in enumerate(loops):
if loop.loop_type == "null":
- # 'null' loops don't need any bounds.
+ # Generic or 'null' loops don't need any variables to be set
continue
+ # Set the lower bound
root_name = f"loop{idx}_start"
- lbound = sym_table.find_or_create_integer_symbol(root_name,
- tag=root_name)
- parent.add(AssignGen(parent, lhs=lbound.name,
- rhs=loop._lower_bound_fortran()))
- entities = [lbound.name]
-
+ lbound = self.symtab.find_or_create_integer_symbol(
+ root_name, tag=root_name)
+ assignment = Assignment.create(
+ lhs=Reference(lbound),
+ rhs=loop.lower_bound_psyir())
+ loop.children[0] = Reference(lbound)
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
+ if first:
+ assignment.preceding_comment = (
+ "Set-up all of the loop bounds")
+ first = False
+
+ # Set the upper bound
if loop.loop_type != "colour":
root_name = f"loop{idx}_stop"
- ubound = sym_table.find_or_create_integer_symbol(root_name,
- tag=root_name)
- entities.append(ubound.name)
- parent.add(AssignGen(parent, lhs=ubound.name,
- rhs=loop._upper_bound_fortran()))
+ ubound = self.symtab.find_or_create_integer_symbol(
+ root_name, tag=root_name)
+ self._invoke.schedule.addchild(
+ Assignment.create(
+ lhs=Reference(ubound),
+ rhs=loop.upper_bound_psyir()
+ ), cursor)
+ cursor += 1
+ loop.children[1] = Reference(ubound)
+ else:
+                # If the bound needs a colour look-up it has to be set in place
+ loop.children[1] = loop.upper_bound_psyir()
+                # TODO #898: We need to remove the now-unneeded symbol (because
+                # LFRic is compiled with errors on uninitialised variables) but
+                # SymbolTable.remove() is not yet implemented for DataSymbols.
+ root_name = f"loop{idx}_stop"
+ if root_name in self.symtab:
+ self.symtab._symbols.pop(root_name)
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- entity_decls=entities))
+ return cursor
# ---------- Documentation utils -------------------------------------------- #
diff --git a/src/psyclone/domain/lfric/lfric_psy.py b/src/psyclone/domain/lfric/lfric_psy.py
index f49737c2e7..7ad66bf297 100644
--- a/src/psyclone/domain/lfric/lfric_psy.py
+++ b/src/psyclone/domain/lfric/lfric_psy.py
@@ -41,14 +41,13 @@
that LFRic-specific PSy module code is generated.
'''
-from collections import OrderedDict
from psyclone.configuration import Config
from psyclone.domain.lfric import (LFRicConstants, LFRicSymbolTable,
LFRicInvokes)
-from psyclone.f2pygen import ModuleGen, UseGen, PSyIRGen
-from psyclone.psyGen import PSy, InvokeSchedule
+from psyclone.psyGen import PSy
from psyclone.psyir.nodes import ScopingNode
+from psyclone.psyir.symbols import ContainerSymbol
class LFRicPSy(PSy):
@@ -69,43 +68,16 @@ def __init__(self, invoke_info):
ScopingNode._symbol_table_class = LFRicSymbolTable
Config.get().api = "lfric"
PSy.__init__(self, invoke_info)
- self._invokes = LFRicInvokes(invoke_info.calls, self)
- # Initialise the dictionary that holds the names of the required
- # LFRic constants, data structures and data structure proxies for
- # the "use" statements in modules that contain PSy-layer routines.
+
+ # Add a wildcard "constants_mod" import at the Container level
+        # since kind symbols are often not tied to an explicit import.
const = LFRicConstants()
const_mod = const.UTILITIES_MOD_MAP["constants"]["module"]
- infmod_list = [const_mod]
- # Add all field and operator modules that might be used in the
- # algorithm layer. These do not appear in the code unless a
- # variable is added to the "only" part of the
- # '_infrastructure_modules' map.
- for data_type_info in const.DATA_TYPE_MAP.values():
- infmod_list.append(data_type_info["module"])
-
- # This also removes any duplicates from infmod_list
- self._infrastructure_modules = OrderedDict(
- (k, set()) for k in infmod_list)
+ self.container.symbol_table.add(
+ ContainerSymbol(const_mod, wildcard_import=True))
- kind_names = set()
-
- # The infrastructure declares integer types with default
- # precision so always add this.
- api_config = Config.get().api_conf("lfric")
- kind_names.add(api_config.default_kind["integer"])
-
- # Datatypes declare precision information themselves. However,
- # that is not the case for literals. Therefore deal
- # with these separately here.
- for invoke in self.invokes.invoke_list:
- schedule = invoke.schedule
- for kernel in schedule.kernels():
- for arg in kernel.args:
- if arg.is_literal:
- kind_names.add(arg.precision)
- # Add precision names to the dictionary storing the required
- # LFRic constants.
- self._infrastructure_modules[const_mod] = kind_names
+ # Then initialise the Invokes
+ self._invokes = LFRicInvokes(invoke_info.calls, self)
@property
def name(self):
@@ -127,56 +99,6 @@ def orig_name(self):
'''
return self._name
- @property
- def infrastructure_modules(self):
- '''
- :returns: the dictionary that holds the names of the required
- LFRic infrastructure modules to create "use"
- statements in the PSy-layer modules.
- :rtype: dict of set
-
- '''
- return self._infrastructure_modules
-
- @property
- def gen(self):
- '''
- Generate PSy code for the LFRic API.
-
- :returns: root node of generated Fortran AST.
- :rtype: :py:class:`psyir.nodes.Node`
-
- '''
- # Create an empty PSy layer module
- psy_module = ModuleGen(self.name)
-
- # If the container has a Routine that is not an InvokeSchedule
- # it should also be added to the generated module.
- for routine in self.container.children:
- if not isinstance(routine, InvokeSchedule):
- psy_module.add(PSyIRGen(psy_module, routine))
-
- # Add all invoke-specific information
- self.invokes.gen_code(psy_module)
-
- # Include required constants and infrastructure modules. The sets of
- # required LFRic data structures and their proxies are updated in
- # the relevant field and operator subclasses of LFRicCollection.
- # Here we sort the inputs in reverse order to have "_type" before
- # "_proxy_type" and "operator_" before "columnwise_operator_".
- # We also iterate through the dictionary in reverse order so the
- # "use" statements for field types are before the "use" statements
- # for operator types.
- for infmod in reversed(self._infrastructure_modules):
- if self._infrastructure_modules[infmod]:
- infmod_types = sorted(
- list(self._infrastructure_modules[infmod]), reverse=True)
- psy_module.add(UseGen(psy_module, name=infmod,
- only=True, funcnames=infmod_types))
-
- # Return the root node of the generated code
- return psy_module.root
-
# ---------- Documentation utils -------------------------------------------- #
# The list of module members that we wish AutoAPI to generate
diff --git a/src/psyclone/domain/lfric/lfric_run_time_checks.py b/src/psyclone/domain/lfric/lfric_run_time_checks.py
index 97bdfd68a6..7b1868e57e 100644
--- a/src/psyclone/domain/lfric/lfric_run_time_checks.py
+++ b/src/psyclone/domain/lfric/lfric_run_time_checks.py
@@ -46,7 +46,12 @@
from psyclone.configuration import Config
from psyclone.core import AccessType
from psyclone.domain.lfric import LFRicCollection, LFRicConstants
-from psyclone.f2pygen import CallGen, CommentGen, IfThenGen, UseGen
+from psyclone.psyir.symbols import (
+ CHARACTER_TYPE, ContainerSymbol, RoutineSymbol, ImportInterface,
+ DataSymbol, UnresolvedType, INTEGER_TYPE)
+from psyclone.psyir.nodes import (
+ Call, StructureReference, BinaryOperation, Reference, Literal, IfBlock,
+ ArrayOfStructuresReference)
class LFRicRunTimeChecks(LFRicCollection):
@@ -55,41 +60,41 @@ class LFRicRunTimeChecks(LFRicCollection):
'''
- def _invoke_declarations(self, parent):
+ def invoke_declarations(self):
'''Insert declarations of all data and functions required by the
run-time checks code into the PSy layer.
- :param parent: the node in the f2pygen AST representing the PSy- \
- layer routine.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
'''
+ super().invoke_declarations()
if Config.get().api_conf("lfric").run_time_checks:
# Only add if run-time checks are requested
const = LFRicConstants()
- parent.add(
- UseGen(parent, name=const.
- FUNCTION_SPACE_TYPE_MAP["fs_continuity"]["module"]))
- parent.add(UseGen(parent, name=const.
- UTILITIES_MOD_MAP["logging"]["module"],
- only=True,
- funcnames=["log_event", "LOG_LEVEL_ERROR"]))
-
- def _check_field_fs(self, parent):
+ csym = self.symtab.find_or_create(
+ const.UTILITIES_MOD_MAP["logging"]["module"],
+ symbol_type=ContainerSymbol
+ )
+ self.symtab.find_or_create(
+ "log_event", symbol_type=RoutineSymbol,
+ interface=ImportInterface(csym)
+ )
+ self.symtab.find_or_create(
+ "LOG_LEVEL_ERROR", symbol_type=DataSymbol,
+ datatype=UnresolvedType(),
+ interface=ImportInterface(csym)
+ )
+
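
The UseGen calls are replaced by symbols carrying an ImportInterface that points at a ContainerSymbol. A standalone sketch of the same pattern; the module name "log_mod" is illustrative only (the real name is taken from LFRicConstants.UTILITIES_MOD_MAP):

    from psyclone.psyir.symbols import (ContainerSymbol, DataSymbol,
                                        ImportInterface, RoutineSymbol,
                                        SymbolTable, UnresolvedType)

    symtab = SymbolTable()
    log_mod = symtab.find_or_create("log_mod", symbol_type=ContainerSymbol)
    symtab.find_or_create("log_event", symbol_type=RoutineSymbol,
                          interface=ImportInterface(log_mod))
    symtab.find_or_create("LOG_LEVEL_ERROR", symbol_type=DataSymbol,
                          datatype=UnresolvedType(),
                          interface=ImportInterface(log_mod))
    # When this table belongs to a routine, the Fortran backend renders
    # the two imported symbols roughly as:
    #   use log_mod, only : log_event, LOG_LEVEL_ERROR
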
+ def _check_field_fs(self, cursor: int) -> int:
'''
Internal method that adds run-time checks to make sure that the
field's function space is consistent with the appropriate
kernel metadata function spaces.
- :param parent: the node in the f2pygen AST representing the PSy- \
- layer routine.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+ :param int cursor: position at which to add the next initialisation
+ statements.
+ :returns: the updated cursor value.
+ :rtype: int
'''
- parent.add(CommentGen(
- parent, " Check field function space and kernel metadata "
- "function spaces are compatible"))
-
# When issue #30 is addressed (with issue #79 helping further)
# we may know some or all field function spaces statically. If
# so, we should remove these from the fields to check at run
@@ -97,7 +102,9 @@ def _check_field_fs(self, parent):
# generation time).
const = LFRicConstants()
+ symtab = self._invoke.schedule.symbol_table
existing_checks = []
+ first = True
for kern_call in self._invoke.schedule.kernels():
for arg in kern_call.arguments.args:
if not arg.is_field:
@@ -129,21 +136,55 @@ def _check_field_fs(self, parent):
# We need to check against a specific function space
function_space_names = [fs_name]
- if_condition = " .and. ".join(
- [f"{field_name}%which_function_space() /= {name.upper()}"
- for name in function_space_names])
- if_then = IfThenGen(parent, if_condition)
- call_abort = CallGen(
- if_then, "log_event(\"In alg "
- f"'{self._invoke.invokes.psy.orig_name}' invoke "
- f"'{self._invoke.name}', the field '{arg.name}' is passed "
- f"to kernel '{kern_call.name}' but its function space is "
- f"not compatible with the function space specified in the "
- f"kernel metadata '{fs_name}'.\", LOG_LEVEL_ERROR)")
- if_then.add(call_abort)
- parent.add(if_then)
-
- def _check_field_ro(self, parent):
+ field_symbol = symtab.lookup(arg.name)
+
+ if_condition = None
+ for name in function_space_names:
+ if arg._vector_size > 1:
+ call = Call.create(ArrayOfStructuresReference.create(
+ field_symbol, [Literal('1', INTEGER_TYPE)],
+ ["which_function_space"]))
+ else:
+ call = Call.create(StructureReference.create(
+ field_symbol, ["which_function_space"]))
+ mod_symbol = symtab.find_or_create(
+ "fs_continuity_mod", symbol_type=ContainerSymbol)
+ symbol = symtab.find_or_create(
+ name.upper(),
+ interface=ImportInterface(mod_symbol))
+ cmp = BinaryOperation.create(
+ BinaryOperation.Operator.NE,
+ call, Reference(symbol)
+ )
+ if if_condition is None:
+ if_condition = cmp
+ else:
+ if_condition = BinaryOperation.create(
+ BinaryOperation.Operator.AND, if_condition, cmp
+ )
+
+ if_body = Call.create(
+ symtab.lookup("log_event"),
+ [Literal(f"In alg '{self._invoke.invokes.psy.orig_name}' "
+ f"invoke '{self._invoke.name}', the field "
+ f"'{arg.name}' is passed to kernel "
+ f"'{kern_call.name}' but its function space is "
+ f"not compatible with the function space "
+ f"specified in the kernel metadata '{fs_name}'.",
+ CHARACTER_TYPE),
+ Reference(symtab.lookup("LOG_LEVEL_ERROR"))])
+
+ ifblock = IfBlock.create(if_condition, [if_body])
+ self._invoke.schedule.addchild(ifblock, cursor)
+ cursor += 1
+ if first:
+ ifblock.preceding_comment = (
+ "Check field function space and kernel metadata "
+ "function spaces are compatible")
+ first = False
+ return cursor
+
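
The method above assembles the run-time check entirely from PSyIR nodes. A reduced sketch of the tree it builds for a single function space, with freestanding symbols ("fld", "W0") standing in for the ones looked up in the invoke's symbol table:

    from psyclone.psyir.nodes import (BinaryOperation, Call, IfBlock, Literal,
                                      Reference, StructureReference)
    from psyclone.psyir.symbols import (CHARACTER_TYPE, ContainerSymbol,
                                        DataSymbol, DataTypeSymbol,
                                        ImportInterface, RoutineSymbol,
                                        UnresolvedType)

    fs_mod = ContainerSymbol("fs_continuity_mod")
    w0 = DataSymbol("W0", UnresolvedType(), interface=ImportInterface(fs_mod))
    log_event = RoutineSymbol("log_event")
    log_level_error = DataSymbol("LOG_LEVEL_ERROR", UnresolvedType())
    fld = DataSymbol("fld", DataTypeSymbol("field_type", UnresolvedType()))

    # fld%which_function_space() /= W0
    condition = BinaryOperation.create(
        BinaryOperation.Operator.NE,
        Call.create(StructureReference.create(fld, ["which_function_space"])),
        Reference(w0))
    # call log_event("...", LOG_LEVEL_ERROR)
    abort = Call.create(log_event, [
        Literal("field 'fld' is on the wrong function space", CHARACTER_TYPE),
        Reference(log_level_error)])
    check = IfBlock.create(condition, [abort])
    # Roughly:  if (fld%which_function_space() /= W0) then
    #             call log_event("...", LOG_LEVEL_ERROR)
    #           end if
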
+ def _check_field_ro(self, cursor: int) -> int:
'''
Internal method that adds runtime checks to make sure that if the
field is on a read-only function space then the associated
@@ -160,11 +201,12 @@ def _check_field_ro(self, parent):
not be picked up where the error occurred. Therefore adding
checks here is still useful.
- :param parent: the node in the f2pygen AST representing the PSy- \
- layer routine.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+ :param cursor: position at which to add the next initialisation
+ statements.
+ :returns: the updated cursor value.
'''
+ symtab = self._invoke.schedule.symbol_table
# When issue #30 is addressed (with issue #79 helping further)
# we may know some or all field function spaces statically. If
# so, we should remove these from the fields to check at run
@@ -180,52 +222,61 @@ def _check_field_ro(self, parent):
not [entry for entry in modified_fields if
entry[0].name == arg.name]):
modified_fields.append((arg, call))
- if modified_fields:
- parent.add(CommentGen(
- parent, " Check that read-only fields are not modified"))
+ first = True
for field, call in modified_fields:
- if_then = IfThenGen(
- parent, f"{field.proxy_name_indexed}%vspace%is_readonly()")
- call_abort = CallGen(
- if_then, "log_event(\"In alg "
- f"'{self._invoke.invokes.psy.orig_name}' invoke "
- f"'{self._invoke.name}', field '{field.name}' is on a "
- f"read-only function space but is modified by kernel "
- f"'{call.name}'.\", LOG_LEVEL_ERROR)")
- if_then.add(call_abort)
- parent.add(if_then)
-
- def initialise(self, parent):
+ if_condition = field.generate_method_call("is_readonly")
+ if_body = Call.create(
+ symtab.lookup("log_event"),
+ [Literal(f"In alg '{self._invoke.invokes.psy.orig_name}' "
+ f"invoke '{self._invoke.name}', field '{field.name}' "
+ f"is on a read-only function space but is modified "
+ f"by kernel '{call.name}'.", CHARACTER_TYPE),
+ Reference(symtab.lookup("LOG_LEVEL_ERROR"))])
+
+ ifblock = IfBlock.create(if_condition, [if_body])
+ self._invoke.schedule.addchild(ifblock, cursor)
+ cursor += 1
+ if first:
+ ifblock.preceding_comment = (
+ "Check that read-only fields are not modified")
+ first = False
+ return cursor
+
+ def initialise(self, cursor: int) -> int:
'''Add runtime checks to make sure that the arguments being passed
from the algorithm layer are consistent with the metadata
specified in the associated kernels. Currently checks are
limited to ensuring that field function spaces are consistent
with the associated kernel function-space metadata.
- :param parent: the node in the f2pygen AST representing the PSy- \
- layer routine.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+ :param cursor: position at which to add the next initialisation
+ statements.
+ :returns: the updated cursor value.
'''
if not Config.get().api_conf("lfric").run_time_checks:
# Run-time checks are not requested.
- return
+ return cursor
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent, " Perform run-time checks"))
- parent.add(CommentGen(parent, ""))
+ init_cursor = cursor
# Check that field function spaces are compatible with the
# function spaces specified in the kernel metadata.
- self._check_field_fs(parent)
+ cursor = self._check_field_fs(cursor)
# Check that fields on read-only function spaces are not
# passed into a kernel where the kernel metadata specifies
# that the field will be modified.
- self._check_field_ro(parent)
+ cursor = self._check_field_ro(cursor)
+
+ self._invoke.schedule[init_cursor].preceding_comment = (
+ "Perform run-time checks\n"
+ + self._invoke.schedule[init_cursor].preceding_comment
+ )
# These checks should be expanded. Issue #768 suggests
# extending function space checks to operators.
+ return cursor
# ---------- Documentation utils -------------------------------------------- #
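
Throughout this patch, f2pygen's CommentGen and parent.add() are replaced by two PSyIR idioms: statements are inserted into the schedule at an integer cursor that each method returns, and comments are attached to the statements themselves via preceding_comment. A minimal sketch of both, on a hypothetical routine:

    from psyclone.psyir.backend.fortran import FortranWriter
    from psyclone.psyir.frontend.fortran import FortranReader
    from psyclone.psyir.nodes import (Assignment, BinaryOperation, Reference,
                                      Routine)

    psyir = FortranReader().psyir_from_source("""
    subroutine invoke_demo()
      integer :: a, b
      a = 1
      b = 2
    end subroutine invoke_demo
    """)
    routine = psyir.walk(Routine)[0]
    sym_a = routine.symbol_table.lookup("a")
    sym_b = routine.symbol_table.lookup("b")

    # Statements are inserted at an integer 'cursor' position and the
    # cursor is advanced, instead of being appended to an f2pygen parent.
    cursor = 2
    check = Assignment.create(
        Reference(sym_a),
        BinaryOperation.create(BinaryOperation.Operator.ADD,
                               Reference(sym_a), Reference(sym_b)))
    routine.addchild(check, cursor)
    cursor += 1

    # Comments are attached to the statement itself rather than emitted
    # through CommentGen; the Fortran backend writes them before the line.
    check.preceding_comment = "Perform run-time checks"
    print(FortranWriter()(psyir))
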
diff --git a/src/psyclone/domain/lfric/lfric_scalar_args.py b/src/psyclone/domain/lfric/lfric_scalar_args.py
index 856c25ae59..8c394a1ac4 100644
--- a/src/psyclone/domain/lfric/lfric_scalar_args.py
+++ b/src/psyclone/domain/lfric/lfric_scalar_args.py
@@ -45,10 +45,11 @@
# Imports
from collections import OrderedDict, Counter
-from psyclone.domain.lfric import LFRicCollection, LFRicConstants
+from psyclone.psyir.frontend.fparser2 import INTENT_MAPPING
+from psyclone.domain.lfric import LFRicCollection, LFRicConstants, LFRicTypes
from psyclone.errors import GenerationError, InternalError
-from psyclone.f2pygen import DeclGen
from psyclone.psyGen import FORTRAN_INTENT_NAMES
+from psyclone.psyir.symbols import DataSymbol, ArgumentInterface
# pylint: disable=too-many-lines
# pylint: disable=too-many-locals
@@ -81,14 +82,11 @@ def __init__(self, node):
self._integer_scalars[intent] = []
self._logical_scalars[intent] = []
- def _invoke_declarations(self, parent):
+ def invoke_declarations(self):
'''
Create argument lists and declarations for all scalar arguments
in an Invoke.
- :param parent: the f2pygen node representing the PSy-layer routine \
- to which to add declarations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
:raises InternalError: for unsupported argument intrinsic types.
:raises GenerationError: if the same scalar argument has different \
@@ -96,6 +94,7 @@ def _invoke_declarations(self, parent):
within the same Invoke.
'''
+ super().invoke_declarations()
# Create dictionary of all scalar arguments for checks
const = LFRicConstants()
self._scalar_args = self._invoke.unique_declns_by_intent(
@@ -142,22 +141,19 @@ def _invoke_declarations(self, parent):
f"different kernels. This is invalid.")
# Create declarations
- self._create_declarations(parent)
+ self._create_declarations()
- def _stub_declarations(self, parent):
+ def stub_declarations(self):
'''
Create and add declarations for all scalar arguments in
a Kernel stub.
- :param parent: node in the f2pygen AST representing the Kernel stub \
- to which to add declarations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
:raises InternalError: for an unsupported argument data type.
'''
+ super().stub_declarations()
# Extract all scalar arguments
- for arg in self._calls[0].arguments.args:
+ for arg in self.kernel_calls[0].arguments.args:
if arg.is_scalar:
self._scalar_args[arg.intent].append(arg)
@@ -179,25 +175,13 @@ def _stub_declarations(self, parent):
f"are {const.VALID_SCALAR_DATA_TYPES}.")
# Create declarations
- self._create_declarations(parent)
-
- def _create_declarations(self, parent):
- '''Add declarations for the scalar arguments.
-
- :param parent: the f2pygen node in which to insert declarations \
- (Invoke or Kernel).
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+ self._create_declarations()
- :raises InternalError: if neither self._invoke nor \
- self._kernel are set.
+ def _create_declarations(self):
+ '''
+ Add declarations for the scalar arguments.
'''
- const = LFRicConstants()
- const_mod = const.UTILITIES_MOD_MAP["constants"]["module"]
- const_mod_uses = None
- if self._invoke:
- const_mod_uses = self._invoke.invokes.psy.infrastructure_modules[
- const_mod]
# Real scalar arguments
for intent in FORTRAN_INTENT_NAMES:
if self._real_scalars[intent]:
@@ -213,67 +197,39 @@ def _create_declarations(self, parent):
real_scalars_precision_map[
real_scalar.precision] = [real_scalar]
# Declare scalars
- for real_scalar_kind, real_scalars_list in \
- real_scalars_precision_map.items():
- real_scalar_type = real_scalars_list[0].intrinsic_type
- real_scalar_names = [arg.declaration_name for arg
- in real_scalars_list]
- parent.add(
- DeclGen(parent, datatype=real_scalar_type,
- kind=real_scalar_kind,
- entity_decls=real_scalar_names,
- intent=intent))
- if self._invoke:
- const_mod_uses.add(real_scalar_kind)
- elif self._kernel:
- self._kernel.argument_kinds.add(real_scalar_kind)
- else:
- raise InternalError(
- "Expected the declaration of real scalar kernel "
- "arguments to be for either an invoke or a "
- "kernel stub, but it is neither.")
+ for real_scalars_list in real_scalars_precision_map.values():
+ for arg in real_scalars_list:
+ symbol = self.symtab.find_or_create(
+ arg.declaration_name,
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicRealScalarDataType")())
+ symbol.interface = ArgumentInterface(
+ INTENT_MAPPING[intent])
+ self.symtab.append_argument(symbol)
# Integer scalar arguments
for intent in FORTRAN_INTENT_NAMES:
if self._integer_scalars[intent]:
- dtype = self._integer_scalars[intent][0].intrinsic_type
- dkind = self._integer_scalars[intent][0].precision
- integer_scalar_names = [arg.declaration_name for arg
- in self._integer_scalars[intent]]
- parent.add(
- DeclGen(parent, datatype=dtype, kind=dkind,
- entity_decls=integer_scalar_names,
- intent=intent))
- if self._invoke:
- const_mod_uses.add(dkind)
- elif self._kernel:
- self._kernel.argument_kinds.add(dkind)
- else:
- raise InternalError(
- "Expected the declaration of integer scalar kernel "
- "arguments to be for either an invoke or a "
- "kernel stub, but it is neither.")
+ for arg in self._integer_scalars[intent]:
+ symbol = self.symtab.find_or_create(
+ arg.declaration_name,
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ symbol.interface = ArgumentInterface(
+ INTENT_MAPPING[intent])
+ self.symtab.append_argument(symbol)
# Logical scalar arguments
for intent in FORTRAN_INTENT_NAMES:
if self._logical_scalars[intent]:
- dtype = self._logical_scalars[intent][0].intrinsic_type
- dkind = self._logical_scalars[intent][0].precision
- logical_scalar_names = [arg.declaration_name for arg
- in self._logical_scalars[intent]]
- parent.add(
- DeclGen(parent, datatype=dtype, kind=dkind,
- entity_decls=logical_scalar_names,
- intent=intent))
- if self._invoke:
- const_mod_uses.add(dkind)
- elif self._kernel:
- self._kernel.argument_kinds.add(dkind)
- else:
- raise InternalError(
- "Expected the declaration of logical scalar kernel "
- "arguments to be for either an invoke or a "
- "kernel stub, but it is neither.")
+ for arg in self._logical_scalars[intent]:
+ symbol = self.symtab.find_or_create(
+ arg.declaration_name,
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicLogicalScalarDataType")())
+ symbol.interface = ArgumentInterface(
+ INTENT_MAPPING[intent])
+ self.symtab.append_argument(symbol)
# ---------- Documentation utils -------------------------------------------- #
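
Scalar arguments are now declared by giving a DataSymbol an LFRic scalar datatype and an ArgumentInterface and appending it to the routine's argument list, instead of emitting a DeclGen with an intent. A sketch of the pattern for a single, hypothetical real scalar named "rdt":

    from psyclone.domain.lfric import LFRicTypes
    from psyclone.psyir.frontend.fparser2 import INTENT_MAPPING
    from psyclone.psyir.symbols import (ArgumentInterface, DataSymbol,
                                        SymbolTable)

    symtab = SymbolTable()
    sym = symtab.find_or_create(
        "rdt", symbol_type=DataSymbol,
        datatype=LFRicTypes("LFRicRealScalarDataType")())
    sym.interface = ArgumentInterface(INTENT_MAPPING["in"])
    symtab.append_argument(sym)
    # Once the table belongs to the PSy-layer routine, the Fortran
    # backend declares this roughly as:
    #   real(kind=r_def), intent(in) :: rdt
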
diff --git a/src/psyclone/domain/lfric/lfric_stencils.py b/src/psyclone/domain/lfric/lfric_stencils.py
index fd2283a21a..b45438f2e4 100644
--- a/src/psyclone/domain/lfric/lfric_stencils.py
+++ b/src/psyclone/domain/lfric/lfric_stencils.py
@@ -39,13 +39,17 @@
''' This module implements the stencil information and code generation
associated with a PSy-layer routine or Kernel stub in the LFRic API. '''
-from psyclone.configuration import Config
+from psyclone.domain.lfric import LFRicTypes
from psyclone.domain.lfric.lfric_collection import LFRicCollection
from psyclone.domain.lfric.lfric_constants import LFRicConstants
from psyclone.errors import GenerationError, InternalError
-from psyclone.f2pygen import (AssignGen, CommentGen, DeclGen,
- IfThenGen, TypeDeclGen, UseGen)
-from psyclone.psyir.symbols import ScalarType
+from psyclone.psyir.nodes import (
+ Assignment, Reference, Call, StructureReference, IfBlock, BinaryOperation,
+ Literal, DataNode)
+from psyclone.psyir.symbols import (
+ DataSymbol, UnsupportedFortranType, INTEGER_TYPE,
+ ArgumentInterface, UnresolvedType, ContainerSymbol,
+ ImportInterface, ArrayType, DataTypeSymbol)
class LFRicStencils(LFRicCollection):
@@ -71,7 +75,7 @@ def __init__(self, node):
self._unique_extent_args = []
extent_names = []
# pylint: disable=too-many-nested-blocks
- for call in self._calls:
+ for call in self.kernel_calls:
for arg in call.arguments.args:
if arg.stencil:
# Check for the existence of arg.extent here as in
@@ -92,7 +96,7 @@ def __init__(self, node):
# argument names are removed.
self._unique_direction_args = []
direction_names = []
- for call in self._calls:
+ for call in self.kernel_calls:
for idx, arg in enumerate(call.arguments.args):
if arg.stencil and arg.stencil.direction_arg:
if arg.stencil.direction_arg.is_literal():
@@ -111,14 +115,13 @@ def __init__(self, node):
# List of stencil args with an extent variable passed in. The same
# field name may occur more than once here from different kernels.
self._kern_args = []
- for call in self._calls:
+ for call in self.kernel_calls:
for arg in call.arguments.args:
if arg.stencil:
if not arg.stencil.extent:
self._kern_args.append(arg)
- @staticmethod
- def extent_value(arg):
+ def extent_value(self, arg) -> DataNode:
'''
Returns the content of the stencil extent which may be a literal
value (a number) or a variable name. This function simplifies this
@@ -128,12 +131,12 @@ def extent_value(arg):
:type arg: :py:class:`psyclone.dynamo0p3.DynKernelArgument`
:returns: the content of the stencil extent.
- :rtype: str
'''
- if arg.stencil.extent_arg.is_literal():
- return arg.stencil.extent_arg.text
- return arg.stencil.extent_arg.varname
+ extent_arg = arg.stencil.extent_arg
+ if extent_arg.is_literal():
+ return Literal(extent_arg.text, INTEGER_TYPE)
+ return Reference(self.symtab.lookup(extent_arg.varname))
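
extent_value() now returns a PSyIR DataNode (a Literal or a Reference) rather than a string, so the result can be spliced directly into the expressions built later in this file. A sketch of the shape of the helper, detached from the LFRic argument classes:

    from psyclone.psyir.nodes import DataNode, Literal, Reference
    from psyclone.psyir.symbols import INTEGER_TYPE, SymbolTable

    def extent_node(extent_text: str, symtab: SymbolTable) -> DataNode:
        """Return the stencil extent as PSyIR: a Literal for a numeric
        extent, otherwise a Reference to the named variable (sketch of
        the pattern, not the LFRic implementation)."""
        if extent_text.isdigit():
            return Literal(extent_text, INTEGER_TYPE)
        return Reference(symtab.lookup(extent_text))
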
@staticmethod
def stencil_unique_str(arg, context):
@@ -179,9 +182,8 @@ def map_name(self, arg):
:rtype: str
'''
- root_name = arg.name + "_stencil_map"
unique = LFRicStencils.stencil_unique_str(arg, "map")
- return self._symbol_table.find_or_create_tag(unique, root_name).name
+ return self.symtab.lookup_with_tag(unique).name
@staticmethod
def dofmap_symbol(symtab, arg):
@@ -201,13 +203,11 @@ def dofmap_symbol(symtab, arg):
'''
root_name = arg.name + "_stencil_dofmap"
unique = LFRicStencils.stencil_unique_str(arg, "dofmap")
- if arg.descriptor.stencil['type'] == "cross2d":
- num_dimensions = 4
- else:
- num_dimensions = 3
- return symtab.find_or_create_array(root_name, num_dimensions,
- ScalarType.Intrinsic.INTEGER,
- tag=unique)
+ # The dofmap symbol type depends on whether it is in a stub or an
+ # invoke, so don't commit to any type yet.
+ return symtab.find_or_create_tag(
+ unique, root_name=root_name, symbol_type=DataSymbol,
+ datatype=UnresolvedType())
@staticmethod
def dofmap_size_symbol(symtab, arg):
@@ -227,16 +227,14 @@ def dofmap_size_symbol(symtab, arg):
'''
root_name = arg.name + "_stencil_size"
unique = LFRicStencils.stencil_unique_str(arg, "size")
- if arg.descriptor.stencil['type'] == "cross2d":
- num_dimensions = 2
- else:
- num_dimensions = 1
- return symtab.find_or_create_array(root_name, num_dimensions,
- ScalarType.Intrinsic.INTEGER,
- tag=unique)
+ return symtab.find_or_create_tag(
+ unique, root_name=root_name, symbol_type=DataSymbol,
+ # We don't commit to a type because it differs between
+ # Invokes and Stubs
+ datatype=UnresolvedType())
@staticmethod
- def max_branch_length_name(symtab, arg):
+ def max_branch_length(symtab, arg) -> DataSymbol:
'''
Create a valid unique name for the maximum length of a stencil branch
(in cells) of a 2D stencil dofmap in the PSy layer. This is required
@@ -249,47 +247,14 @@ def max_branch_length_name(symtab, arg):
:param arg: the kernel argument with which the stencil is associated.
:type arg: :py:class:`psyclone.dynamo0p3.DynKernelArgument`
- :returns: a Fortran variable name for the max stencil branch length.
- :rtype: str
+ :returns: the symbol representing the max stencil branch length.
'''
root_name = arg.name + "_max_branch_length"
unique = LFRicStencils.stencil_unique_str(arg, "length")
- return symtab.find_or_create_integer_symbol(root_name, tag=unique).name
-
- def _unique_max_branch_length_vars(self):
- '''
- :returns: list of all the unique max stencil extent argument names in
- this kernel call for CROSS2D stencils.
- :rtype: list of str
-
- '''
- names = []
- for arg in self._kern_args:
- if arg.descriptor.stencil['type'] == "cross2d":
- names.append(arg.name + "_max_branch_length")
-
- return names
-
- def _declare_unique_max_branch_length_vars(self, parent):
- '''
- Declare all unique max branch length arguments as integers with intent
- 'in' and add the declaration as a child of the parent argument passed
- in.
-
- :param parent: the node in the f2pygen AST to which to add the
- declarations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
- '''
- api_config = Config.get().api_conf("lfric")
-
- if self._unique_max_branch_length_vars():
- parent.add(DeclGen(
- parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- entity_decls=self._unique_max_branch_length_vars(), intent="in"
- ))
+ return symtab.find_or_create_tag(
+ unique, root_name=root_name, symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
@staticmethod
def direction_name(symtab, arg):
@@ -309,7 +274,9 @@ def direction_name(symtab, arg):
'''
root_name = arg.name+"_direction"
unique = LFRicStencils.stencil_unique_str(arg, "direction")
- return symtab.find_or_create_integer_symbol(root_name, tag=unique).name
+ return symtab.find_or_create_tag(
+ unique, root_name=root_name, symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
@property
def _unique_extent_vars(self):
@@ -326,7 +293,7 @@ def _unique_extent_vars(self):
names = [arg.stencil.extent_arg.varname for arg in
self._unique_extent_args]
elif self._kernel:
- names = [self.dofmap_size_symbol(self._symbol_table, arg).name
+ names = [LFRicStencils.dofmap_size_symbol(self.symtab, arg).name
for arg in self._unique_extent_args]
else:
raise InternalError("LFRicStencils._unique_extent_vars: have "
@@ -334,41 +301,40 @@ def _unique_extent_vars(self):
"impossible.")
return names
- def _declare_unique_extent_vars(self, parent):
+ def _declare_unique_extent_vars(self):
'''
- Declare all unique extent arguments as integers with intent 'in' and
- add the declaration as a child of the parent argument passed
- in.
-
- :param parent: the node in the f2pygen AST to which to add the
- declarations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+ Declare all unique extent arguments as integers with intent 'in'.
'''
- api_config = Config.get().api_conf("lfric")
-
if self._unique_extent_vars:
if self._kernel:
for arg in self._kern_args:
if arg.descriptor.stencil['type'] == "cross2d":
- parent.add(DeclGen(
- parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- dimension="4",
- entity_decls=self._unique_extent_vars, intent="in"
- ))
+ for var in self._unique_extent_vars:
+ symbol = self.symtab.lookup(var)
+ symbol.datatype = ArrayType(
+ LFRicTypes(
+ "LFRicIntegerScalarDataType")(),
+ [Literal("4", INTEGER_TYPE)])
+ symbol.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ self.symtab.append_argument(symbol)
else:
- parent.add(DeclGen(
- parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- entity_decls=self._unique_extent_vars,
- intent="in"))
+ for var in self._unique_extent_vars:
+ symbol = self.symtab.lookup(var)
+ symbol.datatype = LFRicTypes(
+ "LFRicIntegerScalarDataType")()
+ symbol.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ self.symtab.append_argument(symbol)
elif self._invoke:
- parent.add(DeclGen(
- parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- entity_decls=self._unique_extent_vars, intent="in"
- ))
+ for var in self._unique_extent_vars:
+ symbol = self.symtab.find_or_create(
+ var, symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ symbol.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ self.symtab.append_argument(symbol)
@property
def _unique_direction_vars(self):
@@ -383,27 +349,23 @@ def _unique_direction_vars(self):
if arg.stencil.direction_arg.varname:
names.append(arg.stencil.direction_arg.varname)
else:
- names.append(arg.name+"_direction")
+ names.append(self.direction_name(self.symtab, arg).name)
return names
- def _declare_unique_direction_vars(self, parent):
+ def _declare_unique_direction_vars(self):
'''
- Declare all unique direction arguments as integers with intent 'in'
- and add the declaration as a child of the parent argument
- passed in.
-
- :param parent: the node in the f2pygen AST to which to add the
- declarations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+ Declare all unique direction arguments as integers with intent 'in'.
'''
- api_config = Config.get().api_conf("lfric")
+ for var in self._unique_direction_vars:
+ symbol = self.symtab.find_or_create(
+ var, symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
- if self._unique_direction_vars:
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- entity_decls=self._unique_direction_vars,
- intent="in"))
+ if symbol not in self.symtab.argument_list:
+ symbol.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ self.symtab.append_argument(symbol)
@property
def unique_alg_vars(self):
@@ -415,91 +377,95 @@ def unique_alg_vars(self):
'''
return self._unique_extent_vars + self._unique_direction_vars
- def _invoke_declarations(self, parent):
+ def invoke_declarations(self):
'''
Declares all stencil maps, extent and direction arguments passed into
the PSy layer.
- :param parent: node in the f2pygen AST to which to add declarations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
'''
- self._declare_unique_extent_vars(parent)
- self._declare_unique_direction_vars(parent)
- self._declare_maps_invoke(parent)
+ super().invoke_declarations()
+ self._declare_unique_extent_vars()
+ self._declare_unique_direction_vars()
+ self._declare_maps_invoke()
- def _stub_declarations(self, parent):
+ def stub_declarations(self):
'''
Declares all stencil-related quantities for a Kernel stub.
- :param parent: node in the f2pygen AST to which to add declarations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
'''
- self._declare_unique_extent_vars(parent)
- self._declare_unique_direction_vars(parent)
- self._declare_unique_max_branch_length_vars(parent)
- self._declare_maps_stub(parent)
+ super().stub_declarations()
+ self._declare_unique_extent_vars()
+ self._declare_unique_direction_vars()
+ self._declare_maps_stub()
- def initialise(self, parent):
+ def initialise(self, cursor: int) -> int:
'''
Adds in the code to initialise stencil dofmaps to the PSy layer.
- :param parent: the node in the f2pygen AST to which to add the
- initialisations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+ :param cursor: position at which to add the next initialisation
+ statements.
+ :returns: the updated cursor value.
:raises GenerationError: if an unsupported stencil type is encountered.
'''
if not self._kern_args:
- return
+ return cursor
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent, " Initialise stencil dofmaps"))
- parent.add(CommentGen(parent, ""))
- api_config = Config.get().api_conf("lfric")
stencil_map_names = []
const = LFRicConstants()
+ init_cursor = cursor
for arg in self._kern_args:
map_name = self.map_name(arg)
if map_name not in stencil_map_names:
# Only initialise maps once.
stencil_map_names.append(map_name)
stencil_type = arg.descriptor.stencil['type']
- symtab = self._symbol_table
+ symtab = self.symtab
if stencil_type == "xory1d":
direction_name = arg.stencil.direction_arg.varname
for direction in ["x", "y"]:
- if_then = IfThenGen(parent, direction_name +
- " .eq. " + direction +
- "_direction")
- if_then.add(
- AssignGen(
- if_then, pointer=True, lhs=map_name,
- rhs=arg.proxy_name_indexed +
- "%vspace%get_stencil_dofmap("
- "STENCIL_1D" + direction.upper() +
- ","+self.extent_value(arg)+")"))
- parent.add(if_then)
+ condition = BinaryOperation.create(
+ BinaryOperation.Operator.EQ,
+ Reference(symtab.lookup(direction_name)),
+ Reference(symtab.lookup(direction + "_direction")))
+ lhs = Reference(symtab.lookup(map_name))
+ rhs = arg.generate_method_call("get_stencil_dofmap")
+ rhs.addchild(Reference(
+ symtab.lookup("STENCIL_1D" + direction.upper())))
+ rhs.addchild(self.extent_value(arg))
+ stmt = Assignment.create(lhs=lhs, rhs=rhs,
+ is_pointer=True)
+ ifblock = IfBlock.create(condition, [stmt])
+ self._invoke.schedule.addchild(ifblock, cursor)
+ cursor += 1
+
elif stencil_type == "cross2d":
- parent.add(
- AssignGen(parent, pointer=True, lhs=map_name,
- rhs=arg.proxy_name_indexed +
- "%vspace%get_stencil_2D_dofmap(" +
- "STENCIL_2D_CROSS" + "," +
- self.extent_value(arg) + ")"))
+ lhs = Reference(symtab.lookup(map_name))
+ rhs = arg.generate_method_call("get_stencil_2D_dofmap")
+ rhs.addchild(Reference(
+ symtab.lookup("STENCIL_2D_CROSS")))
+ rhs.addchild(self.extent_value(arg))
+ stmt = Assignment.create(lhs=lhs, rhs=rhs,
+ is_pointer=True)
+ self._invoke.schedule.addchild(stmt, cursor)
+ cursor += 1
+
# Max branch length in the CROSS2D stencil is used when
# defining the stencil_dofmap dimensions at declaration of
# the dummy argument in the kernel. This value is 1
# greater than the stencil extent as the central cell
# is included as part of the stencil_dofmap.
- parent.add(
- AssignGen(parent,
- lhs=self.max_branch_length_name(symtab,
- arg),
- rhs=self.extent_value(arg) + " + 1_" +
- api_config.default_kind["integer"]))
+ stmt = Assignment.create(
+ lhs=Reference(
+ self.max_branch_length(symtab, arg)),
+ rhs=BinaryOperation.create(
+ BinaryOperation.Operator.ADD,
+ self.extent_value(arg),
+ Literal("1", INTEGER_TYPE))
+ )
+ self._invoke.schedule.addchild(stmt, cursor)
+ cursor += 1
else:
try:
stencil_name = const.STENCIL_MAPPING[stencil_type]
@@ -509,90 +475,136 @@ def initialise(self, parent):
f"'{arg.descriptor.stencil['type']}' supplied. "
f"Supported mappings are "
f"{str(const.STENCIL_MAPPING)}") from err
- parent.add(
- AssignGen(parent, pointer=True, lhs=map_name,
- rhs=arg.proxy_name_indexed +
- "%vspace%get_stencil_dofmap(" +
- stencil_name + "," +
- self.extent_value(arg) + ")"))
- parent.add(AssignGen(parent, pointer=True,
- lhs=self.dofmap_symbol(symtab, arg).name,
- rhs=map_name + "%get_whole_dofmap()"))
-
- # Add declaration and look-up of stencil size
- dofmap_size_name = self.dofmap_size_symbol(symtab, arg).name
- parent.add(AssignGen(parent, pointer=True,
- lhs=dofmap_size_name,
- rhs=map_name + "%get_stencil_sizes()"))
-
- def _declare_maps_invoke(self, parent):
+ rhs = arg.generate_method_call("get_stencil_dofmap")
+ rhs.addchild(Reference(symtab.lookup(stencil_name)))
+ rhs.addchild(self.extent_value(arg))
+ self._invoke.schedule.addchild(
+ Assignment.create(
+ lhs=Reference(symtab.lookup(map_name)),
+ rhs=rhs,
+ is_pointer=True),
+ cursor)
+ cursor += 1
+
+ self._invoke.schedule.addchild(
+ Assignment.create(
+ lhs=Reference(self.dofmap_symbol(symtab, arg)),
+ rhs=Call.create(
+ StructureReference.create(
+ symtab.lookup(map_name),
+ ["get_whole_dofmap"])),
+ is_pointer=True),
+ cursor)
+ cursor += 1
+
+ # Add look-up of stencil size
+ size_symbol = self.dofmap_size_symbol(self.symtab, arg)
+ if arg.descriptor.stencil['type'] == "cross2d":
+ num_dimensions = 2
+ else:
+ num_dimensions = 1
+ dim_string = (":," * num_dimensions)[:-1]
+ size_symbol.datatype = UnsupportedFortranType(
+ f"integer(kind=i_def), pointer, "
+ f"dimension({dim_string}) :: {size_symbol.name}"
+ f" => null()")
+ self._invoke.schedule.addchild(
+ Assignment.create(
+ lhs=Reference(size_symbol),
+ rhs=Call.create(
+ StructureReference.create(
+ symtab.lookup(map_name),
+ ["get_stencil_sizes"])),
+ is_pointer=True),
+ cursor)
+ cursor += 1
+ if cursor > init_cursor:
+ self._invoke.schedule[init_cursor].preceding_comment = (
+ "Initialise stencil dofmaps")
+ return cursor
+
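
The dofmap symbols above are created with UnresolvedType when first referenced and are only given a concrete declaration once the invoke (or stub) path knows what it must be, using UnsupportedFortranType for the pointer form. A sketch of that two-step pattern with a hypothetical field name:

    from psyclone.psyir.symbols import (DataSymbol, SymbolTable,
                                        UnresolvedType, UnsupportedFortranType)

    symtab = SymbolTable()
    # First touch: record the symbol without committing to a type, since
    # the eventual declaration differs between the invoke and the stub.
    dofmap = symtab.find_or_create_tag(
        "f1_stencil_dofmap", root_name="f1_stencil_dofmap",
        symbol_type=DataSymbol, datatype=UnresolvedType())

    # Invoke path: commit to a pointer declaration that core PSyIR cannot
    # yet express natively, via UnsupportedFortranType.
    dofmap.datatype = UnsupportedFortranType(
        f"integer(kind=i_def), pointer, dimension(:,:,:) "
        f":: {dofmap.name} => null()")
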
+ def _declare_maps_invoke(self):
'''
Declare all stencil maps in the PSy layer.
- :param parent: the node in the f2pygen AST to which to add
- declarations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
:raises GenerationError: if an unsupported stencil type is encountered.
'''
- api_config = Config.get().api_conf("lfric")
-
if not self._kern_args:
return
- symtab = self._symbol_table
+ symtab = self.symtab
stencil_map_names = []
const = LFRicConstants()
for arg in self._kern_args:
- map_name = self.map_name(arg)
+ unique_tag = LFRicStencils.stencil_unique_str(arg, "map")
- if map_name in stencil_map_names:
+ if unique_tag in stencil_map_names:
continue
+ stencil_map_names.append(unique_tag)
- stencil_map_names.append(map_name)
+ symbol = self.symtab.new_symbol(
+ root_name=f"{arg.name}_stencil_map", tag=unique_tag)
+ name = symbol.name
stencil_type = arg.descriptor.stencil['type']
if stencil_type == "cross2d":
- smap_type = const.STENCIL_TYPE_MAP["stencil_2D_dofmap"]["type"]
- smap_mod = const.STENCIL_TYPE_MAP[
- "stencil_2D_dofmap"]["module"]
- parent.add(UseGen(parent, name=smap_mod, only=True,
- funcnames=[smap_type, "STENCIL_2D_CROSS"]))
- parent.add(TypeDeclGen(parent, pointer=True,
- datatype=smap_type,
- entity_decls=[map_name +
- " => null()"]))
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- pointer=True,
- entity_decls=[self.dofmap_symbol(symtab,
- arg).name +
- "(:,:,:,:) => null()"]))
- dofmap_size_name = self.dofmap_size_symbol(symtab, arg).name
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- pointer=True,
- entity_decls=[f"{dofmap_size_name}(:,:) "
- f"=> null()"]))
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- entity_decls=[self.max_branch_length_name(
- symtab, arg)]))
+ smap_mod = self.symtab.find_or_create(
+ const.STENCIL_TYPE_MAP["stencil_2D_dofmap"]["module"],
+ symbol_type=ContainerSymbol)
+ smap_type = self.symtab.find_or_create(
+ const.STENCIL_TYPE_MAP["stencil_2D_dofmap"]["type"],
+ symbol_type=DataTypeSymbol,
+ datatype=UnresolvedType(),
+ interface=ImportInterface(smap_mod))
+ self.symtab.find_or_create(
+ "STENCIL_2D_CROSS",
+ symbol_type=DataSymbol,
+ datatype=UnresolvedType(),
+ interface=ImportInterface(smap_mod))
+
+ dtype = UnsupportedFortranType(
+ f"type({smap_type.name}), pointer :: {name} => null()")
+ symbol.specialise(subclass=DataSymbol, datatype=dtype)
+
+ dofmap_symbol = self.dofmap_symbol(symtab, arg)
+ dofmap_symbol.datatype = UnsupportedFortranType(
+ f"integer(kind=i_def), pointer, dimension(:,:,:,:) "
+ f":: {dofmap_symbol.name} => null()")
else:
- smap_type = const.STENCIL_TYPE_MAP["stencil_dofmap"]["type"]
- smap_mod = const.STENCIL_TYPE_MAP["stencil_dofmap"]["module"]
- parent.add(UseGen(parent, name=smap_mod,
- only=True, funcnames=[smap_type]))
+ smap_mod = self.symtab.find_or_create(
+ const.STENCIL_TYPE_MAP["stencil_dofmap"]["module"],
+ symbol_type=ContainerSymbol)
+ smap_type = self.symtab.find_or_create(
+ const.STENCIL_TYPE_MAP["stencil_dofmap"]["type"],
+ symbol_type=DataTypeSymbol,
+ datatype=UnresolvedType(),
+ interface=ImportInterface(smap_mod))
if stencil_type == 'xory1d':
- drct_mod = const.STENCIL_TYPE_MAP["direction"]["module"]
- parent.add(UseGen(parent, name=drct_mod,
- only=True, funcnames=["x_direction",
- "y_direction"]))
- parent.add(UseGen(parent, name=smap_mod,
- only=True, funcnames=["STENCIL_1DX",
- "STENCIL_1DY"]))
+ drct_mod = self.symtab.find_or_create(
+ const.STENCIL_TYPE_MAP["direction"]["module"],
+ symbol_type=ContainerSymbol)
+ self.symtab.find_or_create(
+ "x_direction",
+ symbol_type=DataSymbol,
+ datatype=UnresolvedType(),
+ interface=ImportInterface(drct_mod))
+ self.symtab.find_or_create(
+ "y_direction",
+ symbol_type=DataSymbol,
+ datatype=UnresolvedType(),
+ interface=ImportInterface(drct_mod))
+ self.symtab.find_or_create(
+ "STENCIL_1DX",
+ symbol_type=DataSymbol,
+ datatype=UnresolvedType(),
+ interface=ImportInterface(smap_mod))
+ self.symtab.find_or_create(
+ "STENCIL_1DY",
+ symbol_type=DataSymbol,
+ datatype=UnresolvedType(),
+ interface=ImportInterface(smap_mod))
else:
try:
stencil_name = const.STENCIL_MAPPING[stencil_type]
@@ -602,54 +614,55 @@ def _declare_maps_invoke(self, parent):
f"'{arg.descriptor.stencil['type']}' supplied. "
f"Supported mappings are "
f"{const.STENCIL_MAPPING}") from err
- parent.add(UseGen(parent, name=smap_mod,
- only=True, funcnames=[stencil_name]))
-
- parent.add(TypeDeclGen(parent, pointer=True,
- datatype=smap_type,
- entity_decls=[map_name+" => null()"]))
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- pointer=True,
- entity_decls=[self.dofmap_symbol(symtab,
- arg).name +
- "(:,:,:) => null()"]))
- dofmap_size_name = self.dofmap_size_symbol(symtab, arg).name
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- pointer=True,
- entity_decls=[f"{dofmap_size_name}(:) "
- f"=> null()"]))
+ self.symtab.find_or_create(
+ stencil_name,
+ symbol_type=DataSymbol,
+ datatype=UnresolvedType(),
+ interface=ImportInterface(smap_mod))
- def _declare_maps_stub(self, parent):
- '''
- Add declarations for all stencil maps to a kernel stub.
+ dtype = UnsupportedFortranType(
+ f"type({smap_type.name}), pointer :: {name} => null()")
+ symbol.specialise(subclass=DataSymbol, datatype=dtype)
- :param parent: the node in the f2pygen AST representing the kernel
- stub routine.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+ dofmap_symbol = self.dofmap_symbol(symtab, arg)
+ dofmap_symbol.datatype = UnsupportedFortranType(
+ f"integer(kind=i_def), pointer, dimension(:,:,:) "
+ f":: {dofmap_symbol.name} => null()")
+ def _declare_maps_stub(self):
'''
- api_config = Config.get().api_conf("lfric")
+ Add declarations for all stencil maps to a kernel stub. (Note that
+ the order of the arguments is redefined later by ArgOrdering.)
- symtab = self._symbol_table
+ '''
+ symtab = self.symtab
for arg in self._kern_args:
+ symbol = self.dofmap_symbol(symtab, arg)
if arg.descriptor.stencil['type'] == "cross2d":
- parent.add(DeclGen(
- parent, datatype="integer",
- kind=api_config.default_kind["integer"], intent="in",
- dimension=",".join([arg.function_space.ndf_name,
- self.max_branch_length_name(
- symtab, arg), "4"]),
- entity_decls=[self.dofmap_symbol(symtab, arg).name]))
+ max_length = self.max_branch_length(symtab, arg)
+ if max_length not in symtab.argument_list:
+ max_length.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ symtab.append_argument(max_length)
+ symbol.datatype = ArrayType(
+ LFRicTypes("LFRicIntegerScalarDataType")(),
+ [Reference(symtab.lookup(arg.function_space.ndf_name)),
+ Reference(max_length),
+ Literal("4", INTEGER_TYPE)])
else:
- dofmap_size_name = self.dofmap_size_symbol(symtab, arg).name
- parent.add(DeclGen(
- parent, datatype="integer",
- kind=api_config.default_kind["integer"], intent="in",
- dimension=",".join([arg.function_space.ndf_name,
- dofmap_size_name]),
- entity_decls=[self.dofmap_symbol(symtab, arg).name]))
+ size_symbol = self.dofmap_size_symbol(self.symtab, arg)
+ size_symbol.datatype = \
+ LFRicTypes("LFRicIntegerScalarDataType")()
+ size_symbol.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ symtab.append_argument(size_symbol)
+ symbol.datatype = ArrayType(
+ LFRicTypes("LFRicIntegerScalarDataType")(),
+ [Reference(symtab.lookup(arg.function_space.ndf_name)),
+ Reference(size_symbol)])
+ symbol.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ symtab.append_argument(symbol)
# ---------- Documentation utils -------------------------------------------- #
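
In the stub path the stencil dofmap becomes an explicit-shape dummy argument whose bounds are References to other dummy arguments. A sketch of that pattern, with hypothetical names ("ndf_w1", "f1_stencil_size", "f1_stencil_dofmap"):

    from psyclone.domain.lfric import LFRicTypes
    from psyclone.psyir.nodes import Reference
    from psyclone.psyir.symbols import (ArgumentInterface, ArrayType,
                                        DataSymbol, SymbolTable)

    symtab = SymbolTable()
    int_type = LFRicTypes("LFRicIntegerScalarDataType")()
    read = ArgumentInterface.Access.READ

    ndf = symtab.new_symbol("ndf_w1", symbol_type=DataSymbol,
                            datatype=int_type,
                            interface=ArgumentInterface(read))
    size = symtab.new_symbol("f1_stencil_size", symbol_type=DataSymbol,
                             datatype=int_type,
                             interface=ArgumentInterface(read))
    dofmap = symtab.new_symbol(
        "f1_stencil_dofmap", symbol_type=DataSymbol,
        datatype=ArrayType(int_type, [Reference(ndf), Reference(size)]),
        interface=ArgumentInterface(read))
    for sym in (ndf, size, dofmap):
        symtab.append_argument(sym)
    # Declared in the stub roughly as:
    #   integer(kind=i_def), intent(in) ::
    #       f1_stencil_dofmap(ndf_w1,f1_stencil_size)
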
diff --git a/src/psyclone/dynamo0p3.py b/src/psyclone/dynamo0p3.py
index d94e1cb897..15c563462f 100644
--- a/src/psyclone/dynamo0p3.py
+++ b/src/psyclone/dynamo0p3.py
@@ -59,9 +59,6 @@
LFRicTypes, LFRicLoop)
from psyclone.domain.lfric.lfric_invoke_schedule import LFRicInvokeSchedule
from psyclone.errors import GenerationError, InternalError, FieldNotFoundError
-from psyclone.f2pygen import (AllocateGen, AssignGen, CallGen, CommentGen,
- DeallocateGen, DeclGen, DoGen, PSyIRGen,
- TypeDeclGen, UseGen)
from psyclone.parse.kernel import getkerneldescriptors
from psyclone.parse.utils import ParseError
from psyclone.psyGen import (InvokeSchedule, Arguments,
@@ -69,13 +66,13 @@
DataAccess)
from psyclone.psyir.frontend.fortran import FortranReader
from psyclone.psyir.nodes import (
- Assignment, ACCEnterDataDirective, ArrayOfStructuresReference,
- Reference, Schedule, StructureReference, Literal, IfBlock, Call,
- BinaryOperation, IntrinsicCall, Container)
-from psyclone.psyir.symbols import (INTEGER_TYPE, DataSymbol, ScalarType,
- UnresolvedType, DataTypeSymbol,
- ContainerSymbol, ImportInterface,
- ArrayType, UnsupportedFortranType)
+ Reference, ACCEnterDataDirective, ArrayOfStructuresReference,
+ StructureReference, Literal, IfBlock, Call, BinaryOperation, IntrinsicCall,
+ Assignment, ArrayReference, Loop, Container, Schedule, Node)
+from psyclone.psyir.symbols import (
+ INTEGER_TYPE, DataSymbol, ScalarType, UnresolvedType, DataTypeSymbol,
+ ContainerSymbol, ImportInterface, StructureType,
+ ArrayType, UnsupportedFortranType, ArgumentInterface)
# pylint: disable=too-many-lines
@@ -389,7 +386,7 @@ def __init__(self, node):
# kernel stub.
self._properties = []
- for call in self._calls:
+ for call in self.kernel_calls:
if call.mesh:
self._properties += [prop for prop in call.mesh.properties
if prop not in self._properties]
@@ -407,15 +404,24 @@ def __init__(self, node):
# Store properties in symbol table
for prop in self._properties:
name_lower = prop.name.lower()
- if prop.name in ["NCELL_2D", "NCELL_2D_NO_HALOS"]:
- # This is an integer:
- self._symbol_table.find_or_create_integer_symbol(
- name_lower, tag=name_lower)
- else:
- # E.g.: adjacent_face
- self._symbol_table.find_or_create_array(
- name_lower, 2, ScalarType.Intrinsic.INTEGER,
+ if prop == MeshProperty.ADJACENT_FACE:
+ # If it's adjacent face, make it a pointer array
+ self.symtab.find_or_create(
+ name_lower, symbol_type=DataSymbol,
+ datatype=UnsupportedFortranType(
+ "integer(kind=i_def), pointer :: adjacent_face(:,:) "
+ "=> null()",
+ partial_datatype=ArrayType(
+ LFRicTypes("LFRicIntegerScalarDataType")(),
+ [ArrayType.Extent.DEFERRED]*2)
+ ),
tag=name_lower)
+ else:
+ # Everything else is an integer
+ self.symtab.find_or_create(
+ name_lower, tag=name_lower,
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
def kern_args(self, stub=False, var_accesses=None,
kern_call_arg_list=None):
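
adjacent_face cannot be fully described by core PSyIR (a pointer array with an initialiser), so the hunk above declares it with UnsupportedFortranType and supplies a partial_datatype so that type and rank queries keep working. A standalone sketch of the same declaration:

    from psyclone.domain.lfric import LFRicTypes
    from psyclone.psyir.symbols import (ArrayType, DataSymbol, SymbolTable,
                                        UnsupportedFortranType)

    symtab = SymbolTable()
    adj_face = symtab.new_symbol(
        "adjacent_face", symbol_type=DataSymbol,
        datatype=UnsupportedFortranType(
            "integer(kind=i_def), pointer :: adjacent_face(:,:) => null()",
            partial_datatype=ArrayType(
                LFRicTypes("LFRicIntegerScalarDataType")(),
                [ArrayType.Extent.DEFERRED] * 2)))
    # The declaration string is reproduced verbatim by the Fortran
    # backend, while partial_datatype still exposes the intrinsic type
    # and rank to PSyclone's analysis.
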
@@ -468,9 +474,13 @@ def kern_args(self, stub=False, var_accesses=None,
append_integer_reference("nfaces_re_h")
name = sym.name
else:
- name = self._symbol_table.\
- find_or_create_integer_symbol(
- "nfaces_re_h", tag="nfaces_re_h").name
+ lisdt = LFRicTypes("LFRicIntegerScalarDataType")()
+ name = self.symtab.\
+ find_or_create(
+ "nfaces_re_h", tag="nfaces_re_h",
+ symbol_type=DataSymbol,
+ datatype=lisdt
+ ).name
arg_list.append(name)
if var_accesses is not None:
var_accesses.add_access(Signature(name),
@@ -484,8 +494,7 @@ def kern_args(self, stub=False, var_accesses=None,
kern_call_arg_list.cell_ref_name(var_accesses)
adj_face_sym = kern_call_arg_list. \
append_array_reference(adj_face,
- [":", cell_ref],
- ScalarType.Intrinsic.INTEGER)
+ [":", cell_ref])
# Update the name in case there was a clash
adj_face = adj_face_sym.name
if var_accesses:
@@ -494,12 +503,12 @@ def kern_args(self, stub=False, var_accesses=None,
[":", cell_ref])
if not stub:
- adj_face = self._symbol_table.find_or_create_tag(
+ adj_face = self.symtab.find_or_create_tag(
"adjacent_face").name
cell_name = "cell"
if self._kernel.is_coloured():
colour_name = "colour"
- cmap_name = self._symbol_table.find_or_create_tag(
+ cmap_name = self.symtab.find_or_create_tag(
"cmap", root_name="cmap").name
adj_face += (f"(:,{cmap_name}({colour_name},"
f"{cell_name}))")
@@ -523,112 +532,77 @@ def kern_args(self, stub=False, var_accesses=None,
return arg_list
- def _invoke_declarations(self, parent):
+ def invoke_declarations(self):
'''
Creates the necessary declarations for variables needed in order to
provide mesh properties to a kernel call.
- :param parent: node in the f2pygen AST to which to add declarations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
- :raises InternalError: if this class has been instantiated for a \
- kernel instead of an invoke.
:raises InternalError: if an unsupported mesh property is found.
'''
- api_config = Config.get().api_conf("lfric")
-
- if not self._invoke:
- raise InternalError(
- "_invoke_declarations() cannot be called because "
- "LFRicMeshProperties has been instantiated for a kernel and "
- "not an invoke.")
-
+ super().invoke_declarations()
for prop in self._properties:
# The DynMeshes class will have created a mesh object so we
# don't need to do that here.
if prop == MeshProperty.ADJACENT_FACE:
- adj_face = self._symbol_table.find_or_create_tag(
- "adjacent_face").name + "(:,:) => null()"
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- pointer=True, entity_decls=[adj_face]))
+ self.symtab.find_or_create_tag("adjacent_face")
elif prop == MeshProperty.NCELL_2D_NO_HALOS:
- name = self._symbol_table.find_or_create_integer_symbol(
+ self.symtab.find_or_create(
"ncell_2d_no_halos",
- tag="ncell_2d_no_halos").name
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- entity_decls=[name]))
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")(),
+ tag="ncell_2d_no_halos")
elif prop == MeshProperty.NCELL_2D:
- name = self._symbol_table.find_or_create_integer_symbol(
- "ncell_2d", tag="ncell_2d").name
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- entity_decls=[name]))
+ self.symtab.find_or_create(
+ "ncell_2d", tag="ncell_2d",
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")()
+ )
else:
raise InternalError(
f"Found unsupported mesh property '{prop}' when generating"
f" invoke declarations. Only members of the MeshProperty "
f"Enum are permitted ({list(MeshProperty)}).")
- def _stub_declarations(self, parent):
+ def stub_declarations(self):
'''
Creates the necessary declarations for the variables needed in order
to provide properties of the mesh in a kernel stub.
+ Note that argument order is redefined later by ArgOrdering.
- :param parent: node in the f2pygen AST to which to add declarations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
- :raises InternalError: if the class has been instantiated for an \
- invoke and not a kernel.
:raises InternalError: if an unsupported mesh property is encountered.
'''
- api_config = Config.get().api_conf("lfric")
-
- if not self._kernel:
- raise InternalError(
- "_stub_declarations() cannot be called because "
- "LFRicMeshProperties has been instantiated for an invoke and "
- "not a kernel.")
-
+ super().stub_declarations()
for prop in self._properties:
if prop == MeshProperty.ADJACENT_FACE:
- adj_face = self._symbol_table.find_or_create_array(
- "adjacent_face", 2, ScalarType.Intrinsic.INTEGER,
- tag="adjacent_face").name
- # 'nfaces_re_h' will have been declared by the
- # DynReferenceElement class.
- dimension = self._symbol_table.\
- find_or_create_integer_symbol("nfaces_re_h",
- tag="nfaces_re_h").name
- parent.add(
- DeclGen(
- parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- dimension=dimension,
- intent="in", entity_decls=[adj_face]))
+ adj_face = self.symtab.lookup("adjacent_face")
+ dimension = self.symtab.lookup("nfaces_re_h")
+ adj_face.datatype = ArrayType(
+ LFRicTypes("LFRicIntegerScalarDataType")(),
+ [Reference(dimension)])
+ adj_face.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ self.symtab.append_argument(adj_face)
elif prop == MeshProperty.NCELL_2D:
- ncell_2d = self._symbol_table.find_or_create_integer_symbol(
- "ncell_2d", tag="ncell_2d")
- parent.add(
- DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- intent="in", entity_decls=[ncell_2d.name]))
+ ncell_2d = self.symtab.lookup("ncell_2d")
+ ncell_2d.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ self.symtab.append_argument(ncell_2d)
else:
raise InternalError(
f"Found unsupported mesh property '{prop}' when generating"
f" declarations for kernel stub. Only members of the "
f"MeshProperty Enum are permitted ({list(MeshProperty)})")
- def initialise(self, parent):
+ def initialise(self, cursor: int) -> int:
'''
- Creates the f2pygen nodes for the initialisation of properties of
+ Creates the PSyIR nodes for the initialisation of properties of
the mesh.
- :param parent: node in the f2pygen tree to which to add statements.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+ :param cursor: position at which to add the next initialisation
+ statements.
+ :returns: the updated cursor value.
:raises InternalError: if an unsupported mesh property is encountered.
@@ -638,7 +612,7 @@ def initialise(self, parent):
# it now, rather than when this class was first constructed.
need_colour_limits = False
need_colour_halo_limits = False
- for call in self._calls:
+ for call in self.kernel_calls:
if call.is_coloured() and not call.is_intergrid:
loop = call.parent.parent
# Record whether or not this coloured loop accesses the halo.
@@ -652,48 +626,75 @@ def initialise(self, parent):
# If no mesh properties are required and there's no colouring
# (which requires a mesh object to lookup loop bounds) then we
# need do nothing.
- return
+ return cursor
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent, " Initialise mesh properties"))
- parent.add(CommentGen(parent, ""))
-
- mesh = self._symbol_table.find_or_create_tag("mesh").name
+ mesh = self.symtab.find_or_create_tag("mesh")
+ init_cursor = cursor
for prop in self._properties:
if prop == MeshProperty.ADJACENT_FACE:
- adj_face = self._symbol_table.find_or_create_tag(
- "adjacent_face").name
- parent.add(AssignGen(parent, pointer=True, lhs=adj_face,
- rhs=mesh+"%get_adjacent_face()"))
+ adj_face = self.symtab.find_or_create_tag(
+ "adjacent_face")
+ assignment = Assignment.create(
+ lhs=Reference(adj_face),
+ rhs=Call.create(StructureReference.create(
+ mesh, ["get_adjacent_face"])),
+ is_pointer=True)
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
elif prop == MeshProperty.NCELL_2D_NO_HALOS:
- name = self._symbol_table.find_or_create_integer_symbol(
- "ncell_2d_no_halos", tag="ncell_2d_no_halos").name
- parent.add(AssignGen(parent, lhs=name,
- rhs=mesh+"%get_last_edge_cell()"))
+ symbol = self.symtab.find_or_create(
+ "ncell_2d_no_halos", tag="ncell_2d_no_halos",
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")()
+ )
+ assignment = Assignment.create(
+ lhs=Reference(symbol),
+ rhs=Call.create(StructureReference.create(
+ mesh, ["get_last_edge_cell"])),)
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
elif prop == MeshProperty.NCELL_2D:
- name = self._symbol_table.find_or_create_integer_symbol(
- "ncell_2d", tag="ncell_2d").name
- parent.add(AssignGen(parent, lhs=name,
- rhs=mesh+"%get_ncells_2d()"))
+ symbol = self.symtab.find_or_create(
+ "ncell_2d", tag="ncell_2d",
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")()
+ )
+ assignment = Assignment.create(
+ lhs=Reference(symbol),
+ rhs=Call.create(StructureReference.create(
+ mesh, ["get_ncells_2d"])),)
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
else:
raise InternalError(
f"Found unsupported mesh property '{str(prop)}' when "
f"generating initialisation code. Only members of the "
f"MeshProperty Enum are permitted ({list(MeshProperty)})")
+ self._invoke.schedule[init_cursor].append_preceding_comment(
+ "Initialise mesh properties")
if need_colour_halo_limits:
- lhs = self._symbol_table.find_or_create_tag(
- "last_halo_cell_all_colours").name
- rhs = f"{mesh}%get_last_halo_cell_all_colours()"
- parent.add(AssignGen(parent, lhs=lhs, rhs=rhs))
+ lhs = self.symtab.find_or_create_tag(
+ "last_halo_cell_all_colours")
+ assignment = Assignment.create(
+ lhs=Reference(lhs),
+ rhs=Call.create(StructureReference.create(
+ mesh, ["get_last_halo_cell_all_colours"])))
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
if need_colour_limits:
- lhs = self._symbol_table.find_or_create_tag(
- "last_edge_cell_all_colours").name
- rhs = f"{mesh}%get_last_edge_cell_all_colours()"
- parent.add(AssignGen(parent, lhs=lhs, rhs=rhs))
+ lhs = self.symtab.find_or_create_tag(
+ "last_edge_cell_all_colours")
+ assignment = Assignment.create(
+ lhs=Reference(lhs),
+ rhs=Call.create(StructureReference.create(
+ mesh, ["get_last_edge_cell_all_colours"])))
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
+ return cursor
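
The mesh-property look-ups above are now built as PSyIR pointer assignments whose right-hand side is a Call on a StructureReference. A sketch of one such assignment, with freestanding symbols in place of the ones found in the invoke's symbol table:

    from psyclone.psyir.nodes import (Assignment, Call, Reference,
                                      StructureReference)
    from psyclone.psyir.symbols import (DataSymbol, DataTypeSymbol,
                                        UnresolvedType)

    mesh = DataSymbol("mesh", DataTypeSymbol("mesh_type", UnresolvedType()))
    adjacent_face = DataSymbol("adjacent_face", UnresolvedType())

    assignment = Assignment.create(
        lhs=Reference(adjacent_face),
        rhs=Call.create(StructureReference.create(mesh,
                                                  ["get_adjacent_face"])),
        is_pointer=True)
    # Corresponds to:  adjacent_face => mesh%get_adjacent_face()
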
class DynReferenceElement(LFRicCollection):
@@ -722,7 +723,7 @@ def __init__(self, node):
self._properties = []
self._nfaces_h_required = False
- for call in self._calls:
+ for call in self.kernel_calls:
if call.reference_element:
self._properties.extend(call.reference_element.properties)
if call.mesh and call.mesh.properties:
@@ -737,11 +738,10 @@ def __init__(self, node):
if self._properties:
self._properties = list(OrderedDict.fromkeys(self._properties))
- symtab = self._symbol_table
+ symtab = self.symtab
- # Create and store a name for the reference element object
- self._ref_elem_name = \
- symtab.find_or_create_tag("reference_element").name
+ # Create and store symbol for the reference element object
+ self._ref_elem_symbol = None
# Initialise names for the properties of the reference element object:
# Number of horizontal/vertical/all faces,
@@ -769,22 +769,25 @@ def __init__(self, node):
RefElementMetaData.Property.OUTWARD_NORMALS_TO_HORIZONTAL_FACES
in self._properties or
self._nfaces_h_required):
- self._nfaces_h_symbol = symtab.find_or_create_integer_symbol(
- "nfaces_re_h", tag="nfaces_re_h")
+ self._nfaces_h_symbol = symtab.find_or_create(
+ "nfaces_re_h", tag="nfaces_re_h", symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
# Provide no. of vertical faces if required
if (RefElementMetaData.Property.NORMALS_TO_VERTICAL_FACES
in self._properties or
RefElementMetaData.Property.OUTWARD_NORMALS_TO_VERTICAL_FACES
in self._properties):
- self._nfaces_v_symbol = symtab.find_or_create_integer_symbol(
- "nfaces_re_v", tag="nfaces_re_v")
+ self._nfaces_v_symbol = symtab.find_or_create(
+ "nfaces_re_v", tag="nfaces_re_v", symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
# Provide no. of all faces if required
if (RefElementMetaData.Property.NORMALS_TO_FACES
in self._properties or
RefElementMetaData.Property.OUTWARD_NORMALS_TO_FACES
in self._properties):
- self._nfaces_symbol = symtab.find_or_create_integer_symbol(
- "nfaces_re", tag="nfaces_re")
+ self._nfaces_symbol = symtab.find_or_create(
+ "nfaces_re", tag="nfaces_re", symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
# Now the arrays themselves, in the order specified in the
# kernel metadata (in the case of a kernel stub)
@@ -793,9 +796,12 @@ def __init__(self, node):
if prop == RefElementMetaData.Property.NORMALS_TO_HORIZONTAL_FACES:
name = "normals_to_horiz_faces"
self._horiz_face_normals_symbol = \
- symtab.find_or_create_array(name, 2,
- ScalarType.Intrinsic.REAL,
- tag=name)
+ symtab.find_or_create(
+ name, symbol_type=DataSymbol,
+ datatype=ArrayType(
+ LFRicTypes("LFRicRealScalarDataType")(),
+ [ArrayType.Extent.DEFERRED]*2),
+ tag=name)
if self._horiz_face_normals_symbol not in self._arg_properties:
self._arg_properties[self._horiz_face_normals_symbol] = \
self._nfaces_h_symbol
@@ -804,9 +810,12 @@ def __init__(self, node):
OUTWARD_NORMALS_TO_HORIZONTAL_FACES):
name = "out_normals_to_horiz_faces"
self._horiz_face_out_normals_symbol = \
- symtab.find_or_create_array(name, 2,
- ScalarType.Intrinsic.REAL,
- tag=name)
+ symtab.find_or_create(
+ name, symbol_type=DataSymbol,
+ datatype=ArrayType(
+ LFRicTypes("LFRicRealScalarDataType")(),
+ [ArrayType.Extent.DEFERRED]*2),
+ tag=name)
if self._horiz_face_out_normals_symbol not in \
self._arg_properties:
self._arg_properties[self._horiz_face_out_normals_symbol] \
@@ -815,9 +824,12 @@ def __init__(self, node):
NORMALS_TO_VERTICAL_FACES):
name = "normals_to_vert_faces"
self._vert_face_normals_symbol = \
- symtab.find_or_create_array(name, 2,
- ScalarType.Intrinsic.REAL,
- tag=name)
+ symtab.find_or_create(
+ name, symbol_type=DataSymbol,
+ datatype=ArrayType(
+ LFRicTypes("LFRicRealScalarDataType")(),
+ [ArrayType.Extent.DEFERRED]*2),
+ tag=name)
if self._vert_face_normals_symbol not in self._arg_properties:
self._arg_properties[self._vert_face_normals_symbol] = \
self._nfaces_v_symbol
@@ -826,9 +838,12 @@ def __init__(self, node):
OUTWARD_NORMALS_TO_VERTICAL_FACES):
name = "out_normals_to_vert_faces"
self._vert_face_out_normals_symbol = \
- symtab.find_or_create_array(name, 2,
- ScalarType.Intrinsic.REAL,
- tag=name)
+ symtab.find_or_create(
+ name, symbol_type=DataSymbol,
+ datatype=ArrayType(
+ LFRicTypes("LFRicRealScalarDataType")(),
+ [ArrayType.Extent.DEFERRED]*2),
+ tag=name)
if self._vert_face_out_normals_symbol not in \
self._arg_properties:
self._arg_properties[self._vert_face_out_normals_symbol] \
@@ -837,9 +852,12 @@ def __init__(self, node):
elif prop == RefElementMetaData.Property.NORMALS_TO_FACES:
name = "normals_to_faces"
self._face_normals_symbol = \
- symtab.find_or_create_array(name, 2,
- ScalarType.Intrinsic.REAL,
- tag=name)
+ symtab.find_or_create(
+ name, symbol_type=DataSymbol,
+ datatype=ArrayType(
+ LFRicTypes("LFRicRealScalarDataType")(),
+ [ArrayType.Extent.DEFERRED]*2),
+ tag=name)
if self._face_normals_symbol not in self._arg_properties:
self._arg_properties[self._face_normals_symbol] = \
self._nfaces_symbol
@@ -847,9 +865,12 @@ def __init__(self, node):
elif prop == RefElementMetaData.Property.OUTWARD_NORMALS_TO_FACES:
name = "out_normals_to_faces"
self._face_out_normals_symbol = \
- symtab.find_or_create_array(name, 2,
- ScalarType.Intrinsic.REAL,
- tag=name)
+ symtab.find_or_create(
+ name, symbol_type=DataSymbol,
+ datatype=ArrayType(
+ LFRicTypes("LFRicRealScalarDataType")(),
+ [ArrayType.Extent.DEFERRED]*2),
+ tag=name)
if self._face_out_normals_symbol not in \
self._arg_properties:
self._arg_properties[self._face_out_normals_symbol] = \
@@ -886,173 +907,181 @@ def kern_args_symbols(self):
nfaces = list(OrderedDict.fromkeys(argdict.values()))
return nfaces + list(argdict.keys())
- def _invoke_declarations(self, parent):
+ def invoke_declarations(self):
'''
Create the necessary declarations for the variables needed in order
to provide properties of the reference element in a Kernel call.
- :param parent: node in the f2pygen AST to which to add declarations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
'''
- # Get the list of the required scalars
- if self._properties:
- # remove duplicates with an OrderedDict
- nface_vars = list(OrderedDict.fromkeys(
- self._arg_properties.values()))
- elif self._nfaces_h_required:
- # We only need the number of 'horizontal' faces
- nface_vars = [self._nfaces_h_symbol]
- else:
+ super().invoke_declarations()
+ if not self._properties and not self._nfaces_h_required:
# No reference-element properties required
return
- api_config = Config.get().api_conf("lfric")
const = LFRicConstants()
refelem_type = const.REFELEMENT_TYPE_MAP["refelement"]["type"]
refelem_mod = const.REFELEMENT_TYPE_MAP["refelement"]["module"]
- parent.add(UseGen(parent, name=refelem_mod, only=True,
- funcnames=[refelem_type]))
- parent.add(
- TypeDeclGen(parent, pointer=True, is_class=True,
- datatype=refelem_type,
- entity_decls=[self._ref_elem_name + " => null()"]))
-
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- entity_decls=[var.name for var in nface_vars]))
-
- if not self._properties:
- # We only need the number of horizontal faces so we're done
- return
-
- # Declare the necessary arrays
- array_decls = [f"{sym.name}(:,:)"
- for sym in self._arg_properties.keys()]
- my_kind = api_config.default_kind["real"]
- parent.add(DeclGen(parent, datatype="real", kind=my_kind,
- allocatable=True, entity_decls=array_decls))
- # Ensure the necessary kind parameter is imported.
- const_mod = const.UTILITIES_MOD_MAP["constants"]["module"]
- const_mod_uses = self._invoke.invokes.psy.infrastructure_modules[
- const_mod]
- const_mod_uses.add(my_kind)
-
- def _stub_declarations(self, parent):
+ mod = ContainerSymbol(refelem_mod)
+ self.symtab.add(mod)
+ self.symtab.add(
+ DataTypeSymbol(refelem_type, datatype=StructureType(),
+ interface=ImportInterface(mod)))
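+ # The reference-element object is a polymorphic ('class') pointer so
+ # its declaration is stored as an UnsupportedFortranType.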
+ self._ref_elem_symbol = self.symtab.find_or_create_tag(
+ "reference_element")
+ self._ref_elem_symbol.specialise(
+ DataSymbol,
+ datatype=UnsupportedFortranType(
+ f"class({refelem_type}), pointer :: "
+ f"{self._ref_elem_symbol.name} => null()"))
+
+ def stub_declarations(self):
'''
Create the necessary declarations for the variables needed in order
to provide properties of the reference element in a Kernel stub.
-
- :param parent: node in the f2pygen AST to which to add declarations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+ Note that argument order is redefined later by ArgOrdering.
'''
- api_config = Config.get().api_conf("lfric")
-
+ super().stub_declarations()
if not (self._properties or self._nfaces_h_required):
return
- # Declare the necessary scalars (duplicates are ignored by parent.add)
+ # Declare the necessary scalars (duplicates are ignored)
scalars = list(self._arg_properties.values())
- nfaces_h = self._symbol_table.find_or_create_integer_symbol(
- "nfaces_re_h", tag="nfaces_re_h")
+ nfaces_h = self.symtab.find_or_create(
+ "nfaces_re_h", tag="nfaces_re_h",
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")()
+ )
if self._nfaces_h_required and nfaces_h not in scalars:
scalars.append(nfaces_h)
for nface in scalars:
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- intent="in", entity_decls=[nface.name]))
+ sym = self.symtab.find_or_create(
+ nface.name,
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")()
+ )
+ sym.interface = ArgumentInterface(ArgumentInterface.Access.READ)
+ self.symtab.append_argument(sym)
# Declare the necessary arrays
for arr, sym in self._arg_properties.items():
- dimension = f"3,{sym.name}"
- parent.add(DeclGen(parent, datatype="real",
- kind=api_config.default_kind["real"],
- intent="in", dimension=dimension,
- entity_decls=[arr.name]))
+ arrsym = self.symtab.lookup(arr.name)
+ arrsym.datatype = ArrayType(
+ LFRicTypes("LFRicRealScalarDataType")(),
+ [Literal("3", INTEGER_TYPE), Reference(sym)])
+ arrsym.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ self.symtab.append_argument(arrsym)
- def initialise(self, parent):
+ def initialise(self, cursor: int) -> int:
'''
- Creates the f2pygen nodes representing the necessary initialisation
+ Creates the PSyIR nodes representing the necessary initialisation
code for properties of the reference element.
- :param parent: node in the f2pygen tree to which to add statements.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+ :param cursor: position at which to add the next initialisation
+ statements.
+ :returns: the updated cursor value.
'''
if not (self._properties or self._nfaces_h_required):
- return
-
- parent.add(CommentGen(parent, ""))
- parent.add(
- CommentGen(parent,
- " Get the reference element and query its properties"))
- parent.add(CommentGen(parent, ""))
-
- mesh_obj_name = self._symbol_table.find_or_create_tag("mesh").name
- parent.add(AssignGen(parent, pointer=True, lhs=self._ref_elem_name,
- rhs=mesh_obj_name+"%get_reference_element()"))
+ return cursor
+
+ mesh_obj = self.symtab.find_or_create_tag("mesh")
+ ref_element = self._ref_elem_symbol
+ stmt = Assignment.create(
+ lhs=Reference(ref_element),
+ rhs=Call.create(
+ StructureReference.create(
+ mesh_obj, ["get_reference_element"])),
+ is_pointer=True)
+ stmt.preceding_comment = (
+ "Get the reference element and query its properties"
+ )
+ self._invoke.schedule.addchild(stmt, cursor)
+ cursor += 1
if self._nfaces_h_symbol:
- parent.add(
- AssignGen(parent, lhs=self._nfaces_h_symbol.name,
- rhs=self._ref_elem_name +
- "%get_number_horizontal_faces()"))
+ stmt = Assignment.create(
+ lhs=Reference(self._nfaces_h_symbol),
+ rhs=Call.create(
+ StructureReference.create(
+ ref_element, ["get_number_horizontal_faces"])))
+ self._invoke.schedule.addchild(stmt, cursor)
+ cursor += 1
if self._nfaces_v_symbol:
- parent.add(
- AssignGen(
- parent, lhs=self._nfaces_v_symbol.name,
- rhs=self._ref_elem_name + "%get_number_vertical_faces()"))
+ stmt = Assignment.create(
+ lhs=Reference(self._nfaces_v_symbol),
+ rhs=Call.create(
+ StructureReference.create(
+ ref_element, ["get_number_vertical_faces"])))
+ self._invoke.schedule.addchild(stmt, cursor)
+ cursor += 1
if self._nfaces_symbol:
- parent.add(
- AssignGen(
- parent, lhs=self._nfaces_symbol.name,
- rhs=self._ref_elem_name + "%get_number_faces()"))
+ stmt = Assignment.create(
+ lhs=Reference(self._nfaces_symbol),
+ rhs=Call.create(
+ StructureReference.create(
+ ref_element, ["get_number_faces"])))
+ self._invoke.schedule.addchild(stmt, cursor)
+ cursor += 1
if self._horiz_face_normals_symbol:
- parent.add(
- CallGen(parent,
- name=f"{self._ref_elem_name}%get_normals_to_"
- f"horizontal_faces("
- f"{self._horiz_face_normals_symbol.name})"))
+ stmt = Call.create(
+ StructureReference.create(
+ ref_element, ["get_normals_to_horizontal_faces"]))
+ stmt.addchild(Reference(self._horiz_face_normals_symbol))
+ self._invoke.schedule.addchild(stmt, cursor)
+ cursor += 1
if self._horiz_face_out_normals_symbol:
- parent.add(
- CallGen(
- parent,
- name=f"{self._ref_elem_name}%get_outward_normals_to_"
- f"horizontal_faces("
- f"{self._horiz_face_out_normals_symbol.name})"))
+ stmt = Call.create(
+ StructureReference.create(
+ ref_element,
+ ["get_outward_normals_to_horizontal_faces"]))
+ stmt.addchild(Reference(self._horiz_face_out_normals_symbol))
+ self._invoke.schedule.addchild(stmt, cursor)
+ cursor += 1
if self._vert_face_normals_symbol:
- parent.add(
- CallGen(parent,
- name=f"{self._ref_elem_name}%get_normals_to_vertical_"
- f"faces({self._vert_face_normals_symbol.name})"))
+ stmt = Call.create(
+ StructureReference.create(
+ ref_element,
+ ["get_normals_to_vertical_faces"]))
+ stmt.addchild(Reference(self._vert_face_normals_symbol))
+ self._invoke.schedule.addchild(stmt, cursor)
+ cursor += 1
if self._vert_face_out_normals_symbol:
- parent.add(
- CallGen(
- parent,
- name=f"{self._ref_elem_name}%get_outward_normals_to_"
- f"vertical_faces"
- f"({self._vert_face_out_normals_symbol.name})"))
+ stmt = Call.create(
+ StructureReference.create(
+ ref_element,
+ ["get_outward_normals_to_vertical_faces"]))
+ stmt.addchild(Reference(self._vert_face_out_normals_symbol))
+ self._invoke.schedule.addchild(stmt, cursor)
+ cursor += 1
if self._face_normals_symbol:
- parent.add(
- CallGen(parent,
- name=f"{self._ref_elem_name}%get_normals_to_faces"
- f"({self._face_normals_symbol.name})"))
+ stmt = Call.create(
+ StructureReference.create(
+ ref_element,
+ ["get_normals_to_faces"]))
+ stmt.addchild(Reference(self._face_normals_symbol))
+ self._invoke.schedule.addchild(stmt, cursor)
+ cursor += 1
if self._face_out_normals_symbol:
- parent.add(
- CallGen(
- parent,
- name=f"{self._ref_elem_name}%get_outward_normals_to_"
- f"faces({self._face_out_normals_symbol.name})"))
+ stmt = Call.create(
+ StructureReference.create(
+ ref_element,
+ ["get_outward_normals_to_faces"]))
+ stmt.addchild(Reference(self._face_out_normals_symbol))
+ self._invoke.schedule.addchild(stmt, cursor)
+ cursor += 1
+ return cursor
class DynFunctionSpaces(LFRicCollection):
@@ -1068,7 +1097,7 @@ def __init__(self, kern_or_invoke):
if self._invoke:
self._function_spaces = self._invoke.unique_fss()[:]
else:
- self._function_spaces = self._calls[0].arguments.unique_fss
+ self._function_spaces = self.kernel_calls[0].arguments.unique_fss
self._var_list = []
@@ -1094,47 +1123,39 @@ def __init__(self, kern_or_invoke):
function_space.field_on_space(self._kernel.arguments):
self._var_list.append(function_space.undf_name)
- def _stub_declarations(self, parent):
+ def stub_declarations(self):
'''
Add function-space-related declarations to a Kernel stub.
-
- :param parent: the node in the f2pygen AST representing the kernel \
- stub to which to add declarations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+ Note that argument order is redefined later by ArgOrdering.
'''
- api_config = Config.get().api_conf("lfric")
-
- if self._var_list:
- # Declare ndf and undf for all function spaces
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- intent="in", entity_decls=self._var_list))
+ super().stub_declarations()
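+ # Declare ndf and undf for all function spaces as read-only arguments.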
+ for var in self._var_list:
+ arg = self.symtab.find_or_create(
+ var, symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ arg.interface = ArgumentInterface(ArgumentInterface.Access.READ)
+ self.symtab.append_argument(arg)
- def _invoke_declarations(self, parent):
+ def invoke_declarations(self):
'''
Add function-space-related declarations to a PSy-layer routine.
- :param parent: the node in the f2pygen AST to which to add \
- declarations.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
'''
- api_config = Config.get().api_conf("lfric")
-
- if self._var_list:
- # Declare ndf and undf for all function spaces
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- entity_decls=self._var_list))
+ super().invoke_declarations()
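+ # Declare ndf and undf for all function spaces.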
+ for var in self._var_list:
+ self.symtab.new_symbol(
+ var,
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
- def initialise(self, parent):
+ def initialise(self, cursor: int) -> int:
'''
Create the code that initialises function-space quantities.
- :param parent: the node in the f2pygen AST representing the PSy-layer \
- routine.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+ :param cursor: position at which to add the next initialisation
+ statements.
+ :returns: the updated cursor value.
'''
# Loop over all unique function spaces used by the kernels in
@@ -1145,23 +1166,21 @@ def initialise(self, parent):
# will need ndf and undf. If we don't then we only need undf
# (for the upper bound of the loop over dofs) if we're not
# doing DM.
- if not (self._dofs_only and Config.get().distributed_memory):
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent,
- " Initialise number of DoFs for " +
- function_space.mangled_name))
- parent.add(CommentGen(parent, ""))
# Find argument proxy name used to dereference the argument
arg = self._invoke.arg_for_funcspace(function_space)
- name = arg.proxy_name_indexed
# Initialise ndf for this function space.
if not self._dofs_only:
ndf_name = function_space.ndf_name
- parent.add(AssignGen(parent, lhs=ndf_name,
- rhs=name +
- "%" + arg.ref_name(function_space) +
- "%get_ndf()"))
+ assignment = Assignment.create(
+ lhs=Reference(self.symtab.lookup(ndf_name)),
+ rhs=arg.generate_method_call(
+ "get_ndf", function_space=function_space))
+ assignment.preceding_comment = (
+ f"Initialise number of DoFs for "
+ f"{function_space.mangled_name}")
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
# If there is a field on this space then initialise undf
# for this function space. However, if the invoke contains
# only kernels that operate on dofs and distributed
@@ -1170,10 +1189,13 @@ def initialise(self, parent):
if not (self._dofs_only and Config.get().distributed_memory):
if self._invoke.field_on_space(function_space):
undf_name = function_space.undf_name
- parent.add(AssignGen(parent, lhs=undf_name,
- rhs=name + "%" +
- arg.ref_name(function_space) +
- "%get_undf()"))
+ self._invoke.schedule.addchild(
+ Assignment.create(
+ lhs=Reference(self.symtab.lookup(undf_name)),
+ rhs=arg.generate_method_call("get_undf")),
+ cursor)
+ cursor += 1
+ return cursor
class DynProxies(LFRicCollection):
@@ -1227,7 +1249,7 @@ def __init__(self, node):
for idx in range(1, arg.vector_size+1):
# Make sure we're going to create a Symbol with a unique
# name.
- new_name = self._symbol_table.next_available_name(
+ new_name = self.symtab.next_available_name(
f"{arg.name}_{idx}_{suffix}")
tag = f"{arg.name}_{idx}:{suffix}"
# The data for a field lives in a rank-1 array.
@@ -1236,9 +1258,9 @@ def __init__(self, node):
# Make sure we're going to create a Symbol with a unique
# name (since this is hardwired into the
# UnsupportedFortranType).
- new_name = self._symbol_table.next_available_name(
- f"{arg.name}_{suffix}")
tag = f"{arg.name}:{suffix}"
+ new_name = self.symtab.next_available_name(
+ f"{arg.name}_{suffix}")
# The data for an operator lives in a rank-3 array.
rank = 1 if arg not in op_args else 3
self._add_symbol(new_name, tag, intrinsic_type, arg, rank)
@@ -1282,10 +1304,10 @@ def _add_symbol(self, name, tag, intrinsic_type, arg, rank):
f"dimension({index_str}) :: {name} => null()",
partial_datatype=array_type)
try:
- self._symbol_table.new_symbol(name,
- symbol_type=DataSymbol,
- datatype=dtype,
- tag=tag)
+ self.symtab.new_symbol(name,
+ symbol_type=DataSymbol,
+ datatype=dtype,
+ tag=tag)
except KeyError:
# The tag already exists and therefore we don't need to do
# anything. This can happen if the Symbol Table has already
@@ -1297,18 +1319,14 @@ def _add_symbol(self, name, tag, intrinsic_type, arg, rank):
# existing tag may occur which we can safely ignore.
pass
- def _invoke_declarations(self, parent):
+ def invoke_declarations(self):
'''
Insert declarations of all proxy-related quantities into the PSy layer.
- :param parent: the node in the f2pygen AST representing the PSy- \
- layer routine.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
'''
+ super().invoke_declarations()
const = LFRicConstants()
- const_mod = const.UTILITIES_MOD_MAP["constants"]["module"]
- table = self._symbol_table
+ table = self.symtab
# Declarations of real and integer field proxies
@@ -1337,36 +1355,20 @@ def _invoke_declarations(self, parent):
# Add the Invoke subroutine declarations for the different
# field-type proxies
for (fld_type, fld_mod), args in field_datatype_map.items():
- arg_list = [arg.proxy_declaration_name for arg in args]
- parent.add(TypeDeclGen(parent, datatype=fld_type,
- entity_decls=arg_list))
- (self._invoke.invokes.psy.
- infrastructure_modules[fld_mod].add(fld_type))
-
- # Create declarations for the pointers to the internal
- # data arrays.
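+ # Import the proxy datatype from its module into the symbol table of
+ # the enclosing Container (the PSy-layer module).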
+ fld_mod_symbol = table.node.parent.symbol_table.lookup(fld_mod)
+ fld_type_sym = table.node.parent.symbol_table.new_symbol(
+ fld_type,
+ symbol_type=DataTypeSymbol,
+ datatype=UnresolvedType(),
+ interface=ImportInterface(fld_mod_symbol))
for arg in args:
- (self._invoke.invokes.psy.infrastructure_modules[const_mod].
- add(arg.precision))
- suffix = const.ARG_TYPE_SUFFIX_MAPPING[arg.argument_type]
- if arg.vector_size > 1:
- entity_names = []
- for idx in range(1, arg.vector_size+1):
- ttext = f"{arg.name}_{idx}:{suffix}"
- vsym = table.lookup_with_tag(ttext)
- entity_names.append(vsym.name)
+ if arg.vector_size > 1:
+ decl_type = ArrayType(fld_type_sym, [arg.vector_size])
else:
- ttext = f"{arg.name}:{suffix}"
- sym = table.lookup_with_tag(ttext)
- entity_names = [sym.name]
- if entity_names:
- parent.add(
- DeclGen(
- parent, datatype=arg.intrinsic_type,
- kind=arg.precision, dimension=":",
- entity_decls=[f"{name} => null()" for
- name in entity_names],
- pointer=True))
+ decl_type = fld_type_sym
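+ # Declare the proxy itself (an array of proxies for a field vector).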
+ table.new_symbol(arg.proxy_name,
+ symbol_type=DataSymbol,
+ datatype=decl_type)
# Declarations of LMA operator proxies
op_args = self._invoke.unique_declarations(
@@ -1383,59 +1385,50 @@ def _invoke_declarations(self, parent):
# Declare the operator proxies
for operator_datatype, operators_list in \
operators_datatype_map.items():
- operators_names = [arg.proxy_declaration_name for
- arg in operators_list]
- parent.add(TypeDeclGen(parent, datatype=operator_datatype,
- entity_decls=operators_names))
- for arg in operators_list:
- name = arg.name
- suffix = const.ARG_TYPE_SUFFIX_MAPPING[arg.argument_type]
- ttext = f"{name}:{suffix}"
- sym = table.lookup_with_tag(ttext)
- # Declare the pointer to the stencil array.
- parent.add(DeclGen(parent, datatype="real",
- kind=arg.precision,
- dimension=":,:,:",
- entity_decls=[f"{sym.name} => null()"],
- pointer=True))
- op_mod = operators_list[0].module_name
- # Ensure the appropriate derived datatype will be imported.
- (self._invoke.invokes.psy.infrastructure_modules[op_mod].
- add(operator_datatype))
- # Ensure the appropriate kind parameter will be imported.
- (self._invoke.invokes.psy.infrastructure_modules[const_mod].
- add(arg.precision))
+ mod_name = operators_list[0].module_name
+ mod_st = table.node.parent.symbol_table
+ op_mod_symbol = mod_st.lookup(mod_name)
+ op_datatype_symbol = mod_st.find_or_create(
+ operator_datatype,
+ symbol_type=DataTypeSymbol,
+ datatype=UnresolvedType(),
+ interface=ImportInterface(op_mod_symbol))
+ for op in operators_list:
+ table.new_symbol(op.proxy_declaration_name,
+ symbol_type=DataSymbol,
+ datatype=op_datatype_symbol)
# Declarations of CMA operator proxies
cma_op_args = self._invoke.unique_declarations(
argument_types=["gh_columnwise_operator"])
- cma_op_proxy_decs = [arg.proxy_declaration_name for
- arg in cma_op_args]
- if cma_op_proxy_decs:
+ if cma_op_args:
op_type = cma_op_args[0].proxy_data_type
- op_mod = cma_op_args[0].module_name
- parent.add(TypeDeclGen(parent,
- datatype=op_type,
- entity_decls=cma_op_proxy_decs))
- (self._invoke.invokes.psy.infrastructure_modules[op_mod].
- add(op_type))
+ mod_name = cma_op_args[0].module_name
+ mod_st = table.node.parent.symbol_table
+ op_mod_symbol = mod_st.lookup(mod_name)
+ op_datatype_symbol = mod_st.find_or_create(
+ op_type,
+ symbol_type=DataTypeSymbol,
+ datatype=UnresolvedType(),
+ interface=ImportInterface(op_mod_symbol))
+ for arg in cma_op_args:
+ table.new_symbol(arg.proxy_declaration_name,
+ symbol_type=DataSymbol,
+ datatype=op_datatype_symbol)
- def initialise(self, parent):
+ def initialise(self, cursor: int) -> int:
'''
Insert code into the PSy layer to initialise all necessary proxies.
- :param parent: node in the f2pygen AST representing the PSy-layer
- routine.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+ :param cursor: position at which to add the next initialisation
+ statements.
+ :returns: the updated cursor value.
:raises InternalError: if a kernel argument of an unrecognised type
is encountered.
'''
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent,
- " Initialise field and/or operator proxies"))
- parent.add(CommentGen(parent, ""))
+ init_cursor = cursor
for arg in self._invoke.psy_unique_vars:
# We don't have proxies for scalars
if arg.is_scalar:
@@ -1449,40 +1442,65 @@ def initialise(self, parent):
# 1 to the vector size which is what we
# require in our Fortran code
for idx in range(1, arg.vector_size+1):
- parent.add(
- AssignGen(parent,
- lhs=arg.proxy_name+"("+str(idx)+")",
- rhs=arg.name+"("+str(idx)+")%get_proxy()"))
- name = self._symbol_table.lookup_with_tag(
- f"{arg.name}_{idx}:{suffix}").name
- parent.add(
- AssignGen(parent,
- lhs=name,
- rhs=f"{arg.proxy_name}({idx})%data",
- pointer=True))
+ self._invoke.schedule.addchild(
+ Assignment.create(
+ lhs=ArrayReference.create(
+ self.symtab.lookup(arg.proxy_name),
+ [Literal(str(idx), INTEGER_TYPE)]),
+ rhs=Call.create(ArrayOfStructuresReference.create(
+ self.symtab.lookup(arg.name),
+ [Literal(str(idx), INTEGER_TYPE)],
+ ["get_proxy"]))),
+ cursor)
+ cursor += 1
+ symbol = self.symtab.lookup_with_tag(
+ f"{arg.name}_{idx}:{suffix}")
+ self._invoke.schedule.addchild(
+ Assignment.create(
+ lhs=Reference(symbol),
+ rhs=ArrayOfStructuresReference.create(
+ self.symtab.lookup(arg.proxy_name),
+ [Literal(str(idx), INTEGER_TYPE)],
+ ["data"]),
+ is_pointer=True),
+ cursor)
+ cursor += 1
else:
- parent.add(AssignGen(parent, lhs=arg.proxy_name,
- rhs=arg.name+"%get_proxy()"))
+ self._invoke.schedule.addchild(
+ Assignment.create(
+ lhs=Reference(
+ self.symtab.find_or_create(arg.proxy_name)),
+ rhs=Call.create(StructureReference.create(
+ self.symtab.lookup(arg.name), ["get_proxy"]))),
+ cursor)
+ cursor += 1
if arg.is_field:
- name = self._symbol_table.lookup_with_tag(
- f"{arg.name}:{suffix}").name
- parent.add(
- AssignGen(parent,
- lhs=name,
- rhs=f"{arg.proxy_name}%data",
- pointer=True))
+ symbol = self.symtab.lookup_with_tag(
+ f"{arg.name}:{suffix}")
+ self._invoke.schedule.addchild(
+ Assignment.create(
+ lhs=Reference(symbol),
+ rhs=StructureReference.create(
+ self.symtab.lookup(arg.proxy_name), ["data"]),
+ is_pointer=True),
+ cursor)
+ cursor += 1
elif arg.is_operator:
if arg.argument_type == "gh_columnwise_operator":
# CMA operator arguments are handled in DynCMAOperators
pass
elif arg.argument_type == "gh_operator":
- name = self._symbol_table.lookup_with_tag(
- f"{arg.name}:{suffix}").name
- parent.add(
- AssignGen(parent,
- lhs=name,
- rhs=f"{arg.proxy_name}%local_stencil",
- pointer=True))
+ symbol = self.symtab.lookup_with_tag(
+ f"{arg.name}:{suffix}")
+ self._invoke.schedule.addchild(
+ Assignment.create(
+ lhs=Reference(symbol),
+ rhs=StructureReference.create(
+ self.symtab.lookup(arg.proxy_name),
+ ["local_stencil"]),
+ is_pointer=True),
+ cursor)
+ cursor += 1
else:
raise InternalError(
f"Kernel argument '{arg.name}' is a recognised "
@@ -1493,44 +1511,77 @@ def initialise(self, parent):
f"Kernel argument '{arg.name}' of type "
f"'{arg.argument_type}' not "
f"handled in DynProxies.initialise()")
+ if cursor > init_cursor:
+ self._invoke.schedule[init_cursor].preceding_comment = (
+ "Initialise field and/or operator proxies")
+
+ return cursor
class DynLMAOperators(LFRicCollection):
'''
Handles all entities associated with Local-Matrix-Assembly Operators.
'''
- def _stub_declarations(self, parent):
+ def stub_declarations(self):
'''
- Declare all LMA-related quantities in a Kernel stub.
-
- :param parent: the f2pygen node representing the Kernel stub.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+ Declare all LMA-related quantities in a Kernel stub. Note that argument
+ order will be defined later by ArgOrdering.
'''
- api_config = Config.get().api_conf("lfric")
-
+ super().stub_declarations()
lma_args = psyGen.args_filter(
self._kernel.arguments.args, arg_types=["gh_operator"])
if lma_args:
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- intent="in", entity_decls=["cell"]))
+ arg = self.symtab.find_or_create(
+ "cell", symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ arg.interface = ArgumentInterface(ArgumentInterface.Access.READ)
+ self.symtab.append_argument(arg)
for arg in lma_args:
size = arg.name+"_ncell_3d"
op_dtype = arg.intrinsic_type
op_kind = arg.precision
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- intent="in", entity_decls=[size]))
- ndf_name_to = arg.function_space_to.ndf_name
- ndf_name_from = arg.function_space_from.ndf_name
- parent.add(DeclGen(parent, datatype=op_dtype, kind=op_kind,
- dimension=",".join([size, ndf_name_to,
- ndf_name_from]),
- intent=arg.intent,
- entity_decls=[arg.name]))
-
- def _invoke_declarations(self, parent):
+ size_sym = self.symtab.find_or_create(
+ size, symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ size_sym.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ self.symtab.append_argument(size_sym)
+ ndf_name_to = self.symtab.lookup(
+ arg.function_space_to.ndf_name)
+ ndf_name_from = self.symtab.lookup(
+ arg.function_space_from.ndf_name)
+
+ # Create the PSyIR intrinsic DataType
+ kind_sym = self.symtab.find_or_create(
+ op_kind, symbol_type=DataSymbol, datatype=UnresolvedType(),
+ interface=ImportInterface(
+ self.symtab.lookup("constants_mod")))
+ if op_dtype == "real":
+ intr_type = ScalarType(ScalarType.Intrinsic.REAL, kind_sym)
+ elif op_dtype == "integer":
+ intr_type = ScalarType(ScalarType.Intrinsic.INTEGER, kind_sym)
+ else:
+ raise NotImplementedError(
+ f"Only REAL and INTEGER LMA Operator types are supported, "
+ f"but found '{op_dtype}'")
+ if arg.intent == "in":
+ intent = ArgumentInterface.Access.READ
+ elif arg.intent == "inout":
+ intent = ArgumentInterface.Access.READWRITE
+ # No 'else' needed: arg.intent returns only 'in' or 'inout' (or raises).
+
+ arg_sym = self.symtab.find_or_create(
+ arg.name, symbol_type=DataSymbol,
+ datatype=ArrayType(intr_type, [
+ Reference(size_sym),
+ Reference(ndf_name_to),
+ Reference(ndf_name_from),
+ ]))
+ arg_sym.interface = ArgumentInterface(intent)
+ self.symtab.append_argument(arg_sym)
+
+ def invoke_declarations(self):
'''
Declare all LMA-related quantities in a PSy-layer routine.
Note: PSy layer in LFRic does not modify the LMA operator objects.
@@ -1538,32 +1589,15 @@ def _invoke_declarations(self, parent):
kernels is only pointed to from the LMA operator object and is thus
not a part of the object).
- :param parent: the f2pygen node representing the PSy-layer routine.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
'''
+ super().invoke_declarations()
# Add the Invoke subroutine argument declarations for operators
op_args = self._invoke.unique_declarations(
- argument_types=["gh_operator"])
- # Filter operators by their datatype
- operators_datatype_map = OrderedDict()
- for op_arg in op_args:
- try:
- operators_datatype_map[op_arg.data_type].append(op_arg)
- except KeyError:
- # This datatype has not been seen before so create new entry
- operators_datatype_map[op_arg.data_type] = [op_arg]
- # Declare the operators
- for op_datatype, op_list in operators_datatype_map.items():
- operators_names = [arg.declaration_name for arg in op_list]
- parent.add(TypeDeclGen(
- parent, datatype=op_datatype,
- entity_decls=operators_names, intent="in"))
- op_mod = op_list[0].module_name
- # Record that we will need to import this operator
- # datatype from the appropriate infrastructure module
- (self._invoke.invokes.psy.infrastructure_modules[op_mod].
- add(op_datatype))
+ argument_types=["gh_operator"])
+ # The PSy layer does not modify LMA operators so mark them read-only.
+ for arg in op_args:
+ symbol = self.symtab.lookup(arg.declaration_name)
+ symbol.interface = ArgumentInterface(ArgumentInterface.Access.READ)
class DynCMAOperators(LFRicCollection):
@@ -1601,7 +1635,7 @@ def __init__(self, node):
# You can't index into an OrderedDict so we keep a separate ref
# to the first CMA argument we find.
self._first_cma_arg = None
- for call in self._calls:
+ for call in self.kernel_calls:
if call.cma_operation:
# Get a list of all of the CMA arguments to this call
cma_args = psyGen.args_filter(
@@ -1628,73 +1662,65 @@ def __init__(self, node):
if not self._first_cma_arg:
self._first_cma_arg = arg
- # Create all the necessary Symbols here so that they are available
- # without the need to do a 'gen'.
- symtab = self._symbol_table
- const = LFRicConstants()
- suffix = const.ARG_TYPE_SUFFIX_MAPPING["gh_columnwise_operator"]
- for op_name in self._cma_ops:
- new_name = self._symbol_table.next_available_name(
- f"{op_name}_{suffix}")
- tag = f"{op_name}:{suffix}"
- arg = self._cma_ops[op_name]["arg"]
- precision = LFRicConstants().precision_for_type(arg.data_type)
- array_type = ArrayType(
- LFRicTypes("LFRicRealScalarDataType")(precision),
- [ArrayType.Extent.DEFERRED]*3)
- index_str = ",".join(3*[":"])
- dtype = UnsupportedFortranType(
- f"real(kind={arg.precision}), pointer, "
- f"dimension({index_str}) :: {new_name} => null()",
- partial_datatype=array_type)
- symtab.new_symbol(new_name,
- symbol_type=DataSymbol,
- datatype=dtype,
- tag=tag)
- # Now the various integer parameters of the operator.
- for param in self._cma_ops[op_name]["params"]:
- symtab.find_or_create_integer_symbol(
- f"{op_name}_{param}", tag=f"{op_name}:{param}:{suffix}")
-
- def initialise(self, parent):
+ def initialise(self, cursor: int) -> int:
'''
- Generates the calls to the LFRic infrastructure that look-up
- the various components of each CMA operator. Adds these as
- children of the supplied parent node.
+ Generates the calls to the LFRic infrastructure that look up
+ the various components of each CMA operator and adds them to
+ the Invoke's Schedule.
- :param parent: f2pygen node representing the PSy-layer routine.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+ :param cursor: position at which to add the next initialisation
+ statements.
+ :returns: the updated cursor value.
'''
# If we have no CMA operators then we do nothing
if not self._cma_ops:
- return
-
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent,
- " Look-up information for each CMA operator"))
- parent.add(CommentGen(parent, ""))
+ return cursor
const = LFRicConstants()
suffix = const.ARG_TYPE_SUFFIX_MAPPING["gh_columnwise_operator"]
+ first = True
for op_name in self._cma_ops:
# First, assign a pointer to the array containing the actual
# matrix.
- cma_name = self._symbol_table.lookup_with_tag(
- f"{op_name}:{suffix}").name
- parent.add(AssignGen(parent, lhs=cma_name, pointer=True,
- rhs=self._cma_ops[op_name]["arg"].
- proxy_name_indexed+"%columnwise_matrix"))
+ cma_name = self.symtab.find_or_create_tag(
+ f"{op_name}:{suffix}", op_name,
+ symbol_type=DataSymbol, datatype=UnresolvedType())
+ stmt = Assignment.create(
+ lhs=Reference(cma_name),
+ rhs=StructureReference.create(
+ self.symtab.lookup(
+ self._cma_ops[op_name]["arg"].proxy_name),
+ ["columnwise_matrix"]),
+ is_pointer=True)
+ if first:
+ stmt.preceding_comment = (
+ "Look-up information for each CMA operator"
+ )
+ first = False
+ self._invoke.schedule.addchild(stmt, cursor)
+ cursor += 1
# Then make copies of the related integer parameters
for param in self._cma_ops[op_name]["params"]:
- param_name = self._symbol_table.find_or_create_tag(
- f"{op_name}:{param}:{suffix}").name
- parent.add(AssignGen(parent, lhs=param_name,
- rhs=self._cma_ops[op_name]["arg"].
- proxy_name_indexed+"%"+param))
-
- def _invoke_declarations(self, parent):
+ param_name = self.symtab.find_or_create_tag(
+ f"{op_name}:{param}:{suffix}",
+ root_name=f"{op_name}_{param}",
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")()
+ )
+ stmt = Assignment.create(
+ lhs=Reference(param_name),
+ rhs=StructureReference.create(
+ self.symtab.lookup(
+ self._cma_ops[op_name]["arg"].proxy_name),
+ [param]),
+ )
+ self._invoke.schedule.addchild(stmt, cursor)
+ cursor += 1
+ return cursor
+
+ def invoke_declarations(self):
'''
Generate the necessary PSy-layer declarations for all column-wise
operators and their associated parameters.
@@ -1703,87 +1729,69 @@ def _invoke_declarations(self, parent):
kernels is only pointed to from the column-wise operator object and is
thus not a part of the object).
- :param parent: the f2pygen node representing the PSy-layer routine.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
'''
- api_config = Config.get().api_conf("lfric")
-
+ super().invoke_declarations()
# If we have no CMA operators then we do nothing
if not self._cma_ops:
return
- # Add the Invoke subroutine argument declarations for column-wise
- # operators
- cma_op_args = self._invoke.unique_declarations(
- argument_types=["gh_columnwise_operator"])
- # Create a list of column-wise operator names
- cma_op_arg_list = [arg.declaration_name for arg in cma_op_args]
- if cma_op_arg_list:
- op_type = cma_op_args[0].data_type
- op_mod = cma_op_args[0].module_name
- parent.add(TypeDeclGen(parent,
- datatype=op_type,
- entity_decls=cma_op_arg_list,
- intent="in"))
- (self._invoke.invokes.psy.infrastructure_modules[op_mod].
- add(op_type))
-
const = LFRicConstants()
suffix = const.ARG_TYPE_SUFFIX_MAPPING["gh_columnwise_operator"]
for op_name in self._cma_ops:
- # Declare the operator matrix itself.
- tag_name = f"{op_name}:{suffix}"
- cma_name = self._symbol_table.lookup_with_tag(tag_name).name
- cma_dtype = self._cma_ops[op_name]["datatype"]
- cma_kind = self._cma_ops[op_name]["kind"]
- parent.add(DeclGen(parent, datatype=cma_dtype,
- kind=cma_kind, pointer=True,
- dimension=":,:,:",
- entity_decls=[f"{cma_name} => null()"]))
- const = LFRicConstants()
- const_mod = const.UTILITIES_MOD_MAP["constants"]["module"]
- const_mod_uses = self._invoke.invokes.psy. \
- infrastructure_modules[const_mod]
- # Record that we will need to import the kind of this
- # cma operator from the appropriate infrastructure
- # module
- const_mod_uses.add(cma_kind)
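+ # Declare the operator matrix itself. It is a pointer to a rank-3
+ # array so its declaration is stored as an UnsupportedFortranType,
+ # with the known array information kept as the partial datatype.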
+ new_name = self.symtab.next_available_name(
+ f"{op_name}_{suffix}")
+ tag = f"{op_name}:{suffix}"
+ arg = self._cma_ops[op_name]["arg"]
+ precision = LFRicConstants().precision_for_type(arg.data_type)
+ array_type = ArrayType(
+ LFRicTypes("LFRicRealScalarDataType")(precision),
+ [ArrayType.Extent.DEFERRED]*3)
+ index_str = ",".join(3*[":"])
+ dtype = UnsupportedFortranType(
+ f"real(kind={arg.precision}), pointer, "
+ f"dimension({index_str}) :: {new_name} => null()",
+ partial_datatype=array_type)
+ self.symtab.new_symbol(new_name,
+ symbol_type=DataSymbol,
+ datatype=dtype,
+ tag=tag)
# Declare the associated integer parameters
- param_names = []
for param in self._cma_ops[op_name]["params"]:
name = f"{op_name}_{param}"
tag = f"{op_name}:{param}:{suffix}"
- sym = self._symbol_table.find_or_create_integer_symbol(
- name, tag=tag)
- param_names.append(sym.name)
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- entity_decls=param_names))
+ self.symtab.find_or_create(
+ name, tag=tag,
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")()
+ )
- def _stub_declarations(self, parent):
+ def stub_declarations(self):
'''
Generate all necessary declarations for CMA operators being passed to
a Kernel stub.
-
- :param parent: f2pygen node representing the Kernel stub.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+ Note that argument order is redefined later by ArgOrdering.
'''
- api_config = Config.get().api_conf("lfric")
-
+ super().stub_declarations()
# If we have no CMA operators then we do nothing
if not self._cma_ops:
return
- symtab = self._symbol_table
+ symtab = self.symtab
# CMA operators always need the current cell index and the number
# of columns in the mesh
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- intent="in", entity_decls=["cell", "ncell_2d"]))
+ symbol = symtab.find_or_create(
+ "cell", symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ symbol.interface = ArgumentInterface(ArgumentInterface.Access.READ)
+ symtab.append_argument(symbol)
+ symbol = symtab.find_or_create(
+ "ncell_2d", symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ symbol.interface = ArgumentInterface(ArgumentInterface.Access.READ)
+ symtab.append_argument(symbol)
const = LFRicConstants()
suffix = const.ARG_TYPE_SUFFIX_MAPPING["gh_columnwise_operator"]
@@ -1792,29 +1800,45 @@ def _stub_declarations(self, parent):
# Declare the associated scalar arguments before the array because
# some of them are used to dimension the latter (and some compilers
# get upset if this ordering is not followed)
- _local_args = []
for param in self._cma_ops[op_name]["params"]:
- param_name = symtab.find_or_create_tag(
+ symbol = symtab.find_or_create_tag(
f"{op_name}:{param}:{suffix}",
- root_name=f"{op_name}_{param}").name
- _local_args.append(param_name)
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- intent="in", entity_decls=_local_args))
+ root_name=f"{op_name}_{param}",
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ symbol.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ symtab.append_argument(symbol)
# Declare the array that holds the CMA operator
bandwidth = symtab.find_or_create_tag(
f"{op_name}:bandwidth:{suffix}",
- root_name=f"{op_name}_bandwidth").name
+ root_name=f"{op_name}_bandwidth",
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ bandwidth.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+
nrow = symtab.find_or_create_tag(
f"{op_name}:nrow:{suffix}",
- root_name=f"{op_name}_nrow").name
- intent = self._cma_ops[op_name]["intent"]
- op_dtype = self._cma_ops[op_name]["datatype"]
- op_kind = self._cma_ops[op_name]["kind"]
- parent.add(DeclGen(parent, datatype=op_dtype, kind=op_kind,
- dimension=",".join([bandwidth,
- nrow, "ncell_2d"]),
- intent=intent, entity_decls=[op_name]))
+ root_name=f"{op_name}_nrow",
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ nrow.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+
+ op = symtab.find_or_create(
+ op_name, symbol_type=DataSymbol,
+ datatype=ArrayType(
+ LFRicTypes("LFRicRealScalarDataType")(),
+ [Reference(bandwidth), Reference(nrow),
+ Reference(symtab.lookup("ncell_2d"))]))
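+ # An 'assembly' kernel updates the CMA operator; all other CMA
+ # kernel types only read it.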
+ if self._kernel.cma_operation == "assembly":
+ op.interface = ArgumentInterface(
+ ArgumentInterface.Access.READWRITE)
+ else:
+ op.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ symtab.append_argument(op)
class DynMeshes():
@@ -1845,10 +1869,8 @@ def __init__(self, invoke, unique_psy_vars):
# Whether or not the associated Invoke requires colourmap information
self._needs_colourmap = False
self._needs_colourmap_halo = False
- # Keep a reference to the InvokeSchedule so we can check for colouring
- # later
- self._schedule = invoke.schedule
- self._symbol_table = self._schedule.symbol_table
+ # Keep a reference to the Invoke so we can check its properties later
+ self._invoke = invoke
# Set used to generate a list of the unique mesh objects
_name_set = set()
@@ -1866,7 +1888,7 @@ def __init__(self, invoke, unique_psy_vars):
# message if necessary.
non_intergrid_kernels = []
has_intergrid = False
- for call in self._schedule.coded_kernels():
+ for call in self._invoke.schedule.coded_kernels():
if (call.reference_element.properties or call.mesh.properties or
call.iterates_over == "domain" or call.cma_operation):
@@ -1906,6 +1928,14 @@ def __init__(self, invoke, unique_psy_vars):
self._add_mesh_symbols(list(_name_set))
+ @property
+ def symtab(self):
+ '''
+ :returns: associated symbol table.
+ :rtype: :py:class:`psyclone.psyir.symbols.SymbolTable`
+ '''
+ return self._invoke.schedule.symbol_table
+
def _add_mesh_symbols(self, mesh_tags):
'''
Add DataSymbols for the supplied list of mesh names and store the
@@ -1930,26 +1960,31 @@ def _add_mesh_symbols(self, mesh_tags):
mmod = const.MESH_TYPE_MAP["mesh"]["module"]
mtype = const.MESH_TYPE_MAP["mesh"]["type"]
# Create a Container symbol for the module
- csym = self._symbol_table.find_or_create_tag(
+ csym = self.symtab.find_or_create_tag(
mmod, symbol_type=ContainerSymbol)
# Create a TypeSymbol for the mesh type
- mtype_sym = self._symbol_table.find_or_create_tag(
+ mtype_sym = self.symtab.find_or_create_tag(
mtype, symbol_type=DataTypeSymbol,
datatype=UnresolvedType(),
interface=ImportInterface(csym))
name_list = []
for name in mesh_tags:
- name_list.append(self._symbol_table.find_or_create_tag(
- name, symbol_type=DataSymbol, datatype=mtype_sym).name)
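+ # Each mesh object is a pointer so its declaration is stored as an
+ # UnsupportedFortranType.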
+ dt = UnsupportedFortranType(
+ f"type({mtype_sym.name}), pointer :: {name} => null()")
+ name_list.append(self.symtab.find_or_create_tag(
+ name, symbol_type=DataSymbol, datatype=dt).name)
if Config.get().distributed_memory:
# If distributed memory is enabled then we require a variable
# holding the maximum halo depth for each mesh.
for name in mesh_tags:
var_name = f"max_halo_depth_{name}"
- self._symbol_table.find_or_create_integer_symbol(
- var_name, tag=var_name)
+ self.symtab.find_or_create(
+ var_name, tag=var_name,
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")()
+ )
def colourmap_init(self):
'''
@@ -1961,9 +1996,8 @@ def colourmap_init(self):
# pylint: disable=too-many-locals
const = LFRicConstants()
non_intergrid_kern = None
- sym_tab = self._schedule.symbol_table
- for call in [call for call in self._schedule.coded_kernels() if
+ for call in [call for call in self._invoke.schedule.coded_kernels() if
call.is_coloured()]:
# Keep a record of whether or not any kernels (loops) in this
# invoke have been coloured and, if so, whether the associated loop
@@ -1984,27 +2018,48 @@ def colourmap_init(self):
carg_name = call._intergrid_ref.coarse.name
# Colour map
base_name = "cmap_" + carg_name
- colour_map = sym_tab.find_or_create_array(
- base_name, 2, ScalarType.Intrinsic.INTEGER,
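+ # The colourmap is a pointer to a rank-2 integer array so its
+ # declaration is stored as an UnsupportedFortranType.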
+ array_type = ArrayType(
+ LFRicTypes("LFRicRealScalarDataType")(),
+ [ArrayType.Extent.DEFERRED]*2)
+ colour_map = self.symtab.find_or_create(
+ base_name,
+ symbol_type=DataSymbol,
+ datatype=UnsupportedFortranType(
+ f"integer(kind=i_def), pointer, dimension(:,:) :: "
+ f"{base_name} => null()",
+ partial_datatype=array_type),
tag=base_name)
# No. of colours
base_name = "ncolour_" + carg_name
- ncolours = sym_tab.find_or_create_integer_symbol(
- base_name, tag=base_name)
+ ncolours = self.symtab.find_or_create(
+ base_name, tag=base_name,
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")()
+ )
# Array holding the last cell of a given colour.
if (Config.get().distributed_memory and
not call.all_updates_are_writes):
# This will require a loop into the halo and so the array is
# 2D (indexed by colour *and* halo depth).
base_name = "last_halo_cell_all_colours_" + carg_name
- last_cell = self._schedule.symbol_table.find_or_create_array(
- base_name, 2, ScalarType.Intrinsic.INTEGER, tag=base_name)
+ last_cell = self.symtab.find_or_create(
+ base_name,
+ symbol_type=DataSymbol,
+ datatype=ArrayType(
+ LFRicTypes("LFRicIntegerScalarDataType")(),
+ [ArrayType.Extent.DEFERRED]*2),
+ tag=base_name)
else:
# Array holding the last edge cell of a given colour. Just 1D
# as indexed by colour only.
base_name = "last_edge_cell_all_colours_" + carg_name
- last_cell = self._schedule.symbol_table.find_or_create_array(
- base_name, 1, ScalarType.Intrinsic.INTEGER, tag=base_name)
+ last_cell = self.symtab.find_or_create(
+ base_name,
+ symbol_type=DataSymbol,
+ datatype=ArrayType(
+ LFRicTypes("LFRicIntegerScalarDataType")(),
+ [ArrayType.Extent.DEFERRED]*1),
+ tag=base_name)
# Add these symbols into the DynInterGrid entry for this kernel
call._intergrid_ref.set_colour_info(colour_map, ncolours,
last_cell)
@@ -2018,295 +2073,275 @@ def colourmap_init(self):
# don't already have one.
colour_map = non_intergrid_kern.colourmap
# No. of colours
- ncolours = sym_tab.find_or_create_integer_symbol(
- "ncolour", tag="ncolour").name
+ ncolours = self.symtab.find_or_create(
+ "ncolour", tag="ncolour",
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")()
+ ).name
if self._needs_colourmap_halo:
- sym_tab.find_or_create_array(
- "last_halo_cell_all_colours", 2,
- ScalarType.Intrinsic.INTEGER,
+ self.symtab.find_or_create(
+ "last_halo_cell_all_colours",
+ symbol_type=DataSymbol,
+ datatype=ArrayType(
+ LFRicTypes("LFRicIntegerScalarDataType")(),
+ [ArrayType.Extent.DEFERRED]*2),
tag="last_halo_cell_all_colours")
if self._needs_colourmap:
- sym_tab.find_or_create_array(
- "last_edge_cell_all_colours", 1,
- ScalarType.Intrinsic.INTEGER,
+ self.symtab.find_or_create(
+ "last_edge_cell_all_colours",
+ symbol_type=DataSymbol,
+ datatype=ArrayType(
+ LFRicTypes("LFRicIntegerScalarDataType")(),
+ [ArrayType.Extent.DEFERRED]*1),
tag="last_edge_cell_all_colours")
- def declarations(self, parent):
+ def invoke_declarations(self):
'''
Declare variables specific to mesh objects.
- :param parent: the parent node to which to add the declarations
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
-
'''
# pylint: disable=too-many-locals, too-many-statements
- api_config = Config.get().api_conf("lfric")
const = LFRicConstants()
- # We'll need various typedefs from the mesh module
- mtype = const.MESH_TYPE_MAP["mesh"]["type"]
- mmod = const.MESH_TYPE_MAP["mesh"]["module"]
- mmap_type = const.MESH_TYPE_MAP["mesh_map"]["type"]
- mmap_mod = const.MESH_TYPE_MAP["mesh_map"]["module"]
- if self._mesh_tag_names:
- name = self._symbol_table.lookup_with_tag(mtype).name
- parent.add(UseGen(parent, name=mmod, only=True,
- funcnames=[name]))
if self.intergrid_kernels:
- parent.add(UseGen(parent, name=mmap_mod, only=True,
- funcnames=[mmap_type]))
- # Declare the mesh object(s) and associated halo depths
- for tag_name in self._mesh_tag_names:
- name = self._symbol_table.lookup_with_tag(tag_name).name
- parent.add(TypeDeclGen(parent, pointer=True, datatype=mtype,
- entity_decls=[name + " => null()"]))
- # For each mesh we also need the maximum halo depth.
- if Config.get().distributed_memory:
- name = self._symbol_table.lookup_with_tag(
- f"max_halo_depth_{tag_name}").name
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- entity_decls=[name]))
-
- # Declare the inter-mesh map(s) and cell map(s)
- for kern in self.intergrid_kernels:
- parent.add(TypeDeclGen(parent, pointer=True,
- datatype=mmap_type,
- entity_decls=[kern.mmap + " => null()"]))
- parent.add(
- DeclGen(parent, pointer=True, datatype="integer",
- kind=api_config.default_kind["integer"],
- entity_decls=[kern.cell_map + "(:,:,:) => null()"]))
-
- # Declare the number of cells in the fine mesh and how many fine
- # cells there are per coarse cell
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- entity_decls=[kern.ncell_fine,
- kern.ncellpercellx,
- kern.ncellpercelly]))
- # Declare variables to hold the colourmap information if required
- if kern.colourmap_symbol:
- parent.add(
- DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- pointer=True,
- entity_decls=[kern.colourmap_symbol.name+"(:,:)"]))
- parent.add(
- DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- entity_decls=[kern.ncolours_var_symbol.name]))
- # The cell-count array is 2D if we go into the halo and 1D
- # otherwise (i.e. no DM or this kernel is GH_WRITE only and
- # does not access the halo).
- dim_list = len(kern.last_cell_var_symbol.datatype.shape)*":"
- decln = (f"{kern.last_cell_var_symbol.name}("
- f"{','.join(dim_list)})")
- parent.add(
- DeclGen(parent, datatype="integer", allocatable=True,
- kind=api_config.default_kind["integer"],
- entity_decls=[decln]))
+ mmap_type = const.MESH_TYPE_MAP["mesh_map"]["type"]
+ mmap_mod = const.MESH_TYPE_MAP["mesh_map"]["module"]
+ # Create a Container symbol for the module
+ csym = self.symtab.find_or_create_tag(
+ mmap_mod, symbol_type=ContainerSymbol)
+ # Create a TypeSymbol for the mesh-map type
+ self.symtab.find_or_create_tag(
+ mmap_type, symbol_type=DataTypeSymbol,
+ datatype=UnresolvedType(),
+ interface=ImportInterface(csym))
if not self.intergrid_kernels and (self._needs_colourmap or
self._needs_colourmap_halo):
# There aren't any inter-grid kernels but we do need
# colourmap information
- base_name = "cmap"
- csym = self._schedule.symbol_table.lookup_with_tag("cmap")
- colour_map = csym.name
- # No. of colours
- base_name = "ncolour"
- ncolours = \
- self._schedule.symbol_table.find_or_create_tag(base_name).name
+ csym = self.symtab.lookup_with_tag("cmap")
# Add declarations for these variables
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- pointer=True,
- entity_decls=[colour_map+"(:,:)"]))
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- entity_decls=[ncolours]))
if self._needs_colourmap_halo:
- last_cell = self._symbol_table.find_or_create_tag(
+ self.symtab.find_or_create_tag(
"last_halo_cell_all_colours")
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- allocatable=True,
- entity_decls=[last_cell.name+"(:,:)"]))
if self._needs_colourmap:
- last_cell = self._symbol_table.find_or_create_tag(
+ self.symtab.find_or_create_tag(
"last_edge_cell_all_colours")
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- allocatable=True,
- entity_decls=[last_cell.name+"(:)"]))
- def initialise(self, parent):
+ def initialise(self, cursor: int) -> int:
'''
Initialise parameters specific to inter-grid kernels.
- :param parent: the parent node to which to add the initialisations.
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
+ :param cursor: position at which to add the next initialisation
+ statements.
+ :returns: the updated cursor value.
'''
# pylint: disable=too-many-branches
# If we haven't got any need for a mesh in this invoke then we
# don't do anything
if not self._mesh_tag_names:
- return
+ return cursor
- parent.add(CommentGen(parent, ""))
+ symtab = self._invoke.schedule.symbol_table
if len(self._mesh_tag_names) == 1:
# We only require one mesh object which means that this invoke
# contains no inter-grid kernels (which would require at least 2)
- parent.add(CommentGen(parent, " Create a mesh object"))
- parent.add(CommentGen(parent, ""))
- rhs = "%".join([self._first_var.proxy_name_indexed,
- self._first_var.ref_name(), "get_mesh()"])
- mesh_name = self._symbol_table.lookup_with_tag(
- self._mesh_tag_names[0]).name
- parent.add(AssignGen(parent, pointer=True, lhs=mesh_name, rhs=rhs))
+ mesh_sym = symtab.lookup_with_tag(self._mesh_tag_names[0])
+ assignment = Assignment.create(
+ lhs=Reference(mesh_sym),
+ rhs=self._first_var.generate_method_call("get_mesh"),
+ is_pointer=True)
+ assignment.preceding_comment = "Create a mesh object"
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
if Config.get().distributed_memory:
# If distributed memory is enabled then we need the maximum
# halo depth.
- depth_name = self._symbol_table.lookup_with_tag(
- f"max_halo_depth_{self._mesh_tag_names[0]}").name
- parent.add(AssignGen(parent, lhs=depth_name,
- rhs=f"{mesh_name}%get_halo_depth()"))
+ depth_sym = self.symtab.lookup_with_tag(
+ f"max_halo_depth_{self._mesh_tag_names[0]}")
+ self._invoke.schedule.addchild(Assignment.create(
+ lhs=Reference(depth_sym),
+ rhs=Call.create(StructureReference.create(
+ mesh_sym, ["get_halo_depth"]))),
+ cursor)
+ cursor += 1
if self._needs_colourmap or self._needs_colourmap_halo:
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent, " Get the colourmap"))
- parent.add(CommentGen(parent, ""))
# Look-up variable names for colourmap and number of colours
- colour_map = self._schedule.symbol_table.find_or_create_tag(
- "cmap").name
- ncolour = \
- self._schedule.symbol_table.find_or_create_tag("ncolour")\
- .name
+ cmap = self.symtab.find_or_create_tag("cmap")
+ ncolour = self.symtab.find_or_create_tag("ncolour")
# Get the number of colours
- parent.add(AssignGen(
- parent, lhs=ncolour, rhs=f"{mesh_name}%get_ncolours()"))
+ assignment = Assignment.create(
+ lhs=Reference(ncolour),
+ rhs=Call.create(StructureReference.create(
+ mesh_sym, ["get_ncolours"])))
+ assignment.preceding_comment = "Get the colourmap"
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
# Get the colour map
- parent.add(AssignGen(parent, pointer=True, lhs=colour_map,
- rhs=f"{mesh_name}%get_colour_map()"))
- return
-
- parent.add(CommentGen(
- parent,
- " Look-up mesh objects and loop limits for inter-grid kernels"))
- parent.add(CommentGen(parent, ""))
+ assignment = Assignment.create(
+ lhs=Reference(cmap),
+ rhs=Call.create(StructureReference.create(
+ mesh_sym, ["get_colour_map"])),
+ is_pointer=True)
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
# Keep a list of quantities that we've already initialised so
# that we don't generate duplicate assignments
initialised = []
+ comment_cursor = cursor
# Loop over the DynInterGrid objects
for dig in self.intergrid_kernels:
# We need pointers to both the coarse and the fine mesh as well
# as the maximum halo depth for each.
- fine_mesh = self._schedule.symbol_table.find_or_create_tag(
- f"mesh_{dig.fine.name}").name
- coarse_mesh = self._schedule.symbol_table.find_or_create_tag(
- f"mesh_{dig.coarse.name}").name
+ fine_mesh = self.symtab.find_or_create_tag(f"mesh_{dig.fine.name}")
+ coarse_mesh = self.symtab.find_or_create_tag(
+ f"mesh_{dig.coarse.name}")
if fine_mesh not in initialised:
initialised.append(fine_mesh)
- parent.add(
- AssignGen(parent, pointer=True,
- lhs=fine_mesh,
- rhs="%".join([dig.fine.proxy_name_indexed,
- dig.fine.ref_name(),
- "get_mesh()"])))
+ assignment = Assignment.create(
+ lhs=Reference(fine_mesh),
+ rhs=dig.fine.generate_method_call("get_mesh"),
+ is_pointer=True)
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
if Config.get().distributed_memory:
max_halo_f_mesh = (
- self._schedule.symbol_table.find_or_create_tag(
- f"max_halo_depth_mesh_{dig.fine.name}").name)
-
- parent.add(AssignGen(parent, lhs=max_halo_f_mesh,
- rhs=f"{fine_mesh}%get_halo_depth()"))
+ self.symtab.find_or_create_tag(
+ f"max_halo_depth_mesh_{dig.fine.name}"))
+ assignment = Assignment.create(
+ lhs=Reference(max_halo_f_mesh),
+ rhs=Call.create(StructureReference.create(
+ fine_mesh, ["get_halo_depth"])))
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
if coarse_mesh not in initialised:
initialised.append(coarse_mesh)
- parent.add(
- AssignGen(parent, pointer=True,
- lhs=coarse_mesh,
- rhs="%".join([dig.coarse.proxy_name_indexed,
- dig.coarse.ref_name(),
- "get_mesh()"])))
+ assignment = Assignment.create(
+ lhs=Reference(coarse_mesh),
+ rhs=dig.coarse.generate_method_call("get_mesh"),
+ is_pointer=True)
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
if Config.get().distributed_memory:
max_halo_c_mesh = (
- self._schedule.symbol_table.find_or_create_tag(
- f"max_halo_depth_mesh_{dig.coarse.name}").name)
- parent.add(AssignGen(
- parent, lhs=max_halo_c_mesh,
- rhs=f"{coarse_mesh}%get_halo_depth()"))
+ self.symtab.find_or_create_tag(
+ f"max_halo_depth_mesh_{dig.coarse.name}"))
+ assignment = Assignment.create(
+ lhs=Reference(max_halo_c_mesh),
+ rhs=Call.create(StructureReference.create(
+ coarse_mesh, ["get_halo_depth"])))
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
# We also need a pointer to the mesh map which we get from
# the coarse mesh
if dig.mmap not in initialised:
initialised.append(dig.mmap)
- parent.add(
- AssignGen(parent, pointer=True,
- lhs=dig.mmap,
- rhs=f"{coarse_mesh}%get_mesh_map({fine_mesh})"))
+ digmmap = self.symtab.lookup(dig.mmap)
+ assignment = Assignment.create(
+ lhs=Reference(digmmap),
+ rhs=Call.create(StructureReference.create(
+ coarse_mesh, ["get_mesh_map"]),
+ arguments=[Reference(fine_mesh)]),
+ is_pointer=True)
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
# Cell map. This is obtained from the mesh map.
if dig.cell_map not in initialised:
initialised.append(dig.cell_map)
- parent.add(
- AssignGen(parent, pointer=True, lhs=dig.cell_map,
- rhs=dig.mmap+"%get_whole_cell_map()"))
+ digcellmap = self.symtab.lookup(dig.cell_map)
+ assignment = Assignment.create(
+ lhs=Reference(digcellmap),
+ rhs=Call.create(StructureReference.create(
+ digmmap, ["get_whole_cell_map"])),
+ is_pointer=True)
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
# Number of cells in the fine mesh
if dig.ncell_fine not in initialised:
initialised.append(dig.ncell_fine)
+ digncellfine = self.symtab.lookup(dig.ncell_fine)
if Config.get().distributed_memory:
# TODO this hardwired depth of 2 will need changing in
# order to support redundant computation
- parent.add(
- AssignGen(parent, lhs=dig.ncell_fine,
- rhs=(fine_mesh+"%get_last_halo_cell"
- "(depth=2)")))
+ assignment = Assignment.create(
+ lhs=Reference(digncellfine),
+ rhs=Call.create(StructureReference.create(
+ fine_mesh, ["get_last_halo_cell"])))
+ assignment.rhs.append_named_arg("depth",
+ Literal("2", INTEGER_TYPE))
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
else:
- parent.add(
- AssignGen(parent, lhs=dig.ncell_fine,
- rhs="%".join([dig.fine.proxy_name,
- dig.fine.ref_name(),
- "get_ncell()"])))
+ assignment = Assignment.create(
+ lhs=Reference(digncellfine),
+ rhs=dig.fine.generate_method_call("get_ncell"))
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
# Number of fine cells per coarse cell in x.
if dig.ncellpercellx not in initialised:
initialised.append(dig.ncellpercellx)
- parent.add(
- AssignGen(parent, lhs=dig.ncellpercellx,
- rhs=dig.mmap +
- "%get_ntarget_cells_per_source_x()"))
+ digncellpercellx = self.symtab.lookup(dig.ncellpercellx)
+ assignment = Assignment.create(
+ lhs=Reference(digncellpercellx),
+ rhs=Call.create(StructureReference.create(
+ digmmap, ["get_ntarget_cells_per_source_x"])))
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
# Number of fine cells per coarse cell in y.
if dig.ncellpercelly not in initialised:
initialised.append(dig.ncellpercelly)
- parent.add(
- AssignGen(parent, lhs=dig.ncellpercelly,
- rhs=dig.mmap +
- "%get_ntarget_cells_per_source_y()"))
+ digncellpercelly = self.symtab.lookup(dig.ncellpercelly)
+ assignment = Assignment.create(
+ lhs=Reference(digncellpercelly),
+ rhs=Call.create(StructureReference.create(
+ digmmap, ["get_ntarget_cells_per_source_y"])))
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
# Colour map for the coarse mesh (if required)
if dig.colourmap_symbol:
# Number of colours
- parent.add(AssignGen(parent, lhs=dig.ncolours_var_symbol.name,
- rhs=coarse_mesh + "%get_ncolours()"))
+ assignment = Assignment.create(
+ lhs=Reference(dig.ncolours_var_symbol),
+ rhs=Call.create(StructureReference.create(
+ coarse_mesh, ["get_ncolours"])))
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
# Colour map itself
- parent.add(AssignGen(parent, lhs=dig.colourmap_symbol.name,
- pointer=True,
- rhs=coarse_mesh + "%get_colour_map()"))
+ assignment = Assignment.create(
+ lhs=Reference(dig.colourmap_symbol),
+ rhs=Call.create(StructureReference.create(
+ coarse_mesh, ["get_colour_map"])),
+ is_pointer=True)
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
# Last halo/edge cell per colour.
sym = dig.last_cell_var_symbol
if len(sym.datatype.shape) == 2:
# Array is 2D so is a halo access.
- name = "%get_last_halo_cell_all_colours()"
+ name = "get_last_halo_cell_all_colours"
else:
# Array is just 1D so go to the last edge cell.
- name = "%get_last_edge_cell_all_colours()"
- parent.add(AssignGen(parent, lhs=sym.name,
- rhs=coarse_mesh + name))
+ name = "get_last_edge_cell_all_colours"
+ assignment = Assignment.create(
+ lhs=Reference(sym),
+ rhs=Call.create(StructureReference.create(
+ coarse_mesh, [name])))
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
+ if cursor != comment_cursor:
+ self._invoke.schedule[comment_cursor].preceding_comment = (
+ "Look-up mesh objects and loop limits for inter-grid kernels")
+
+ return cursor
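
The cursor protocol used by the reworked ``initialise`` methods can be
illustrated with a minimal, self-contained sketch (plain Python; the list and
names below are invented stand-ins for the real InvokeSchedule, not part of
the patch)::

    def add_statements(schedule, cursor, statements):
        """Insert statements at 'cursor' and return the updated cursor."""
        for stmt in statements:
            schedule.insert(cursor, stmt)
            cursor += 1
        return cursor

    schedule = ["stmt_a", "stmt_d"]
    cursor = add_statements(schedule, 1, ["stmt_b", "stmt_c"])
    assert schedule == ["stmt_a", "stmt_b", "stmt_c", "stmt_d"]
    assert cursor == 3  # the next statement would be inserted before "stmt_d"
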
@property
def intergrid_kernels(self):
@@ -2316,7 +2351,7 @@ def intergrid_kernels(self):
:rtype: list[:py:class:`psyclone.dynamo3p0.DynInterGrid`]
'''
intergrids = []
- for call in self._schedule.coded_kernels():
+ for call in self._invoke.schedule.coded_kernels():
if call.is_intergrid:
intergrids.append(call._intergrid_ref)
return intergrids
@@ -2343,25 +2378,51 @@ def __init__(self, fine_arg, coarse_arg):
# Generate name for inter-mesh map
base_mmap_name = f"mmap_{fine_arg.name}_{coarse_arg.name}"
- self.mmap = symtab.find_or_create_tag(base_mmap_name).name
+ self.mmap = symtab.find_or_create(
+ base_mmap_name, tag=base_mmap_name,
+ symbol_type=DataSymbol,
+ datatype=UnsupportedFortranType(
+ f"type(mesh_map_type), pointer :: {base_mmap_name}"
+ f" => null()")
+ ).name
# Generate name for ncell variables
name = f"ncell_{fine_arg.name}"
- self.ncell_fine = symtab.find_or_create_integer_symbol(
- name, tag=name).name
+ self.ncell_fine = symtab.find_or_create(
+ name, tag=name,
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")()
+ ).name
# No. of fine cells per coarse cell in x
name = f"ncpc_{fine_arg.name}_{coarse_arg.name}_x"
- self.ncellpercellx = symtab.find_or_create_integer_symbol(
- name, tag=name).name
+ self.ncellpercellx = symtab.find_or_create(
+ name, tag=name,
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")()
+ ).name
# No. of fine cells per coarse cell in y
name = f"ncpc_{fine_arg.name}_{coarse_arg.name}_y"
- self.ncellpercelly = symtab.find_or_create_integer_symbol(
- name, tag=name).name
+ self.ncellpercelly = symtab.find_or_create(
+ name, tag=name,
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")()
+ ).name
# Name for cell map
base_name = "cell_map_" + coarse_arg.name
- sym = symtab.find_or_create_array(base_name, 3,
- ScalarType.Intrinsic.INTEGER,
- tag=base_name)
+        sym = symtab.find_or_create(
+            base_name,
+            symbol_type=DataSymbol,
+            datatype=UnsupportedFortranType(
+                f"integer(kind=i_def), pointer :: {base_name}"
+                f"(:,:,:) => null()",
+                partial_datatype=ArrayType(
+                    LFRicTypes("LFRicIntegerScalarDataType")(),
+                    [ArrayType.Extent.DEFERRED]*3)
+            ))
+
self.cell_map = sym.name
# We have no colourmap information when first created
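
A pointer declaration with a ``=> null()`` initialiser cannot currently be
expressed directly in the PSyIR type system, hence the
``UnsupportedFortranType`` used above. A standalone sketch of the same idiom
(the variable name is illustrative)::

    from psyclone.psyir.symbols import (ArrayType, DataSymbol, INTEGER_TYPE,
                                        UnsupportedFortranType)

    # The declaration text is kept verbatim; the partial_datatype still lets
    # PSyclone reason about the rank and intrinsic type of the array.
    decl = UnsupportedFortranType(
        "integer(kind=i_def), pointer :: cell_map_c(:,:,:) => null()",
        partial_datatype=ArrayType(INTEGER_TYPE,
                                   [ArrayType.Extent.DEFERRED]*3))
    sym = DataSymbol("cell_map_c", datatype=decl)
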
@@ -2454,7 +2515,7 @@ def __init__(self, node):
# DynKernelArgument) tuples.
self._eval_targets = OrderedDict()
- for call in self._calls:
+ for call in self.kernel_calls:
if isinstance(call, LFRicBuiltIn) or not call.eval_shapes:
# Skip this kernel if it doesn't require basis/diff basis fns
@@ -2655,19 +2716,16 @@ def _setup_basis_fns_for_call(self, call):
diff_entry["type"] = "diff-basis"
self._basis_fns.append(diff_entry)
- def _stub_declarations(self, parent):
+ def stub_declarations(self):
'''
Insert the variable declarations required by the basis functions into
the Kernel stub.
-
- :param parent: the f2pygen node representing the Kernel stub.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+ Note that argument order is redefined later by ArgOrdering.
:raises InternalError: if an unsupported quadrature shape is found.
'''
- api_config = Config.get().api_conf("lfric")
-
+ super().stub_declarations()
if not self._qr_vars and not self._eval_targets:
return
@@ -2678,94 +2736,155 @@ def _stub_declarations(self, parent):
# Get the lists of dimensioning variables and basis arrays
var_dims, basis_arrays = self._basis_fn_declns()
- if var_dims:
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- intent="in", entity_decls=var_dims))
+ for var in var_dims:
+ arg = self.symtab.find_or_create(
+ var, symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ if arg not in self.symtab.argument_list:
+ arg.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ self.symtab.append_argument(arg)
+
for basis in basis_arrays:
- parent.add(DeclGen(parent, datatype="real",
- kind=api_config.default_kind["real"],
- intent="in",
- dimension=",".join(basis_arrays[basis]),
- entity_decls=[basis]))
+ dims = []
+ for value in basis_arrays[basis]:
+ try:
+ dims.append(Literal(value, INTEGER_TYPE))
+ except ValueError:
+ dims.append(Reference(self.symtab.find_or_create(value)))
+ arg = self.symtab.find_or_create(
+ basis, symbol_type=DataSymbol,
+ datatype=ArrayType(LFRicTypes("LFRicRealScalarDataType")(),
+ dims))
+ arg.interface = ArgumentInterface(ArgumentInterface.Access.READ)
+ self.symtab.append_argument(arg)
const = LFRicConstants()
for shape in self._qr_vars:
qr_name = "_qr_" + shape.split("_")[-1]
- if shape == "gh_quadrature_xyoz":
- datatype = const.QUADRATURE_TYPE_MAP[shape]["intrinsic"]
- kind = const.QUADRATURE_TYPE_MAP[shape]["kind"]
- parent.add(DeclGen(
- parent, datatype=datatype, kind=kind,
- intent="in", dimension="np_xy"+qr_name,
- entity_decls=["weights_xy"+qr_name]))
- parent.add(DeclGen(
- parent, datatype=datatype, kind=kind,
- intent="in", dimension="np_z"+qr_name,
- entity_decls=["weights_z"+qr_name]))
- elif shape == "gh_quadrature_face":
- parent.add(DeclGen(
- parent,
- datatype=const.QUADRATURE_TYPE_MAP[shape]["intrinsic"],
- kind=const.QUADRATURE_TYPE_MAP[shape]["kind"], intent="in",
- dimension=",".join(["np_xyz"+qr_name, "nfaces"+qr_name]),
- entity_decls=["weights_xyz"+qr_name]))
- elif shape == "gh_quadrature_edge":
- parent.add(DeclGen(
- parent,
- datatype=const.QUADRATURE_TYPE_MAP[shape]["intrinsic"],
- kind=const.QUADRATURE_TYPE_MAP[shape]["kind"], intent="in",
- dimension=",".join(["np_xyz"+qr_name, "nedges"+qr_name]),
- entity_decls=["weights_xyz"+qr_name]))
- else:
+ # Create the PSyIR intrinsic DataType
+ if shape not in const.QUADRATURE_TYPE_MAP:
raise InternalError(
f"Quadrature shapes other than {supported_shapes} are not "
f"yet supported - got: '{shape}'")
+ kind_sym = self.symtab.find_or_create(
+ const.QUADRATURE_TYPE_MAP[shape]["kind"],
+ symbol_type=DataSymbol, datatype=UnresolvedType(),
+ interface=ImportInterface(
+ self.symtab.lookup("constants_mod")))
+
+            # All quadratures are REAL
+ intr_type = ScalarType(ScalarType.Intrinsic.REAL, kind_sym)
- def _invoke_declarations(self, parent):
+ if shape == "gh_quadrature_xyoz":
+ dim = self.symtab.find_or_create(
+ "np_xy"+qr_name, symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ sym = self.symtab.find_or_create(
+ "weights_xy"+qr_name, symbol_type=DataSymbol,
+ datatype=ArrayType(intr_type, [Reference(dim)]))
+ sym.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ self.symtab.append_argument(sym)
+ dim = self.symtab.find_or_create(
+ "np_z"+qr_name, symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ sym = self.symtab.find_or_create(
+ "weights_z"+qr_name, symbol_type=DataSymbol,
+ datatype=ArrayType(intr_type, [Reference(dim)]))
+ sym.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ self.symtab.append_argument(sym)
+ elif shape == "gh_quadrature_face":
+ dim1 = self.symtab.find_or_create(
+ "np_xyz"+qr_name, symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ dim2 = self.symtab.find_or_create(
+ "nfaces"+qr_name, symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ sym = self.symtab.find_or_create(
+ "weights_xyz"+qr_name, symbol_type=DataSymbol,
+ datatype=ArrayType(intr_type, [Reference(dim1),
+ Reference(dim2)]))
+ sym.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ self.symtab.append_argument(sym)
+ elif shape == "gh_quadrature_edge":
+ dim1 = self.symtab.find_or_create(
+ "np_xyz"+qr_name, symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ dim2 = self.symtab.find_or_create(
+ "nedges"+qr_name, symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ sym = self.symtab.find_or_create(
+ "weights_xyz"+qr_name, symbol_type=DataSymbol,
+ datatype=ArrayType(intr_type, [Reference(dim1),
+ Reference(dim2)]))
+ sym.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ self.symtab.append_argument(sym)
+
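
How a kernel-stub argument is declared through the symbol table can be seen in
isolation in the following sketch (table and symbol names are invented; the
patch itself uses ``append_argument`` on the collection's ``symtab``)::

    from psyclone.psyir.symbols import (ArgumentInterface, DataSymbol,
                                        INTEGER_TYPE, SymbolTable)

    table = SymbolTable()
    ndf = table.new_symbol("ndf_w3", symbol_type=DataSymbol,
                           datatype=INTEGER_TYPE,
                           interface=ArgumentInterface(
                               ArgumentInterface.Access.READ))
    # Make the symbol part of the routine's dummy-argument list.
    table.specify_argument_list([ndf])
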
+ def invoke_declarations(self):
'''
Add basis-function declarations to the PSy layer.
- :param parent: f2pygen node represening the PSy-layer routine.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
'''
- # Create a single declaration for each quadrature type
+ super().invoke_declarations()
const = LFRicConstants()
- for shape in const.VALID_QUADRATURE_SHAPES:
- if shape in self._qr_vars and self._qr_vars[shape]:
- # The PSy-layer routine is passed objects of
- # quadrature_* type
- parent.add(
- TypeDeclGen(parent,
- datatype=const.
- QUADRATURE_TYPE_MAP[shape]["type"],
- entity_decls=self._qr_vars[shape],
- intent="in"))
- # For each of these we'll need a corresponding proxy, use
- # the symbol_table to avoid clashes...
- var_names = []
- for var in self._qr_vars[shape]:
- var_names.append(
- self._symbol_table.find_or_create_tag(var+"_proxy")
- .name)
- parent.add(
- TypeDeclGen(
- parent,
- datatype=const.
- QUADRATURE_TYPE_MAP[shape]["proxy_type"],
- entity_decls=var_names))
-
- def initialise(self, parent):
+
+ # We need BASIS and/or DIFF_BASIS if any kernel requires quadrature
+ # or an evaluator
+ if self._qr_vars or self._eval_targets:
+ module = self.symtab.find_or_create(
+ const.FUNCTION_SPACE_TYPE_MAP["function_space"]["module"],
+ symbol_type=ContainerSymbol)
+ self.symtab.find_or_create(
+ "BASIS", symbol_type=DataSymbol, datatype=UnresolvedType(),
+ interface=ImportInterface(module))
+ self.symtab.find_or_create(
+ "DIFF_BASIS", symbol_type=DataSymbol,
+ datatype=UnresolvedType(), interface=ImportInterface(module))
+
+ if self._qr_vars:
+ # Look-up the module- and type-names from the QUADRATURE_TYPE_MAP
+ for shp in self._qr_vars:
+ quad_map = const.QUADRATURE_TYPE_MAP[shp]
+ module = self.symtab.find_or_create(
+ quad_map["module"],
+ symbol_type=ContainerSymbol)
+ self.symtab.find_or_create(
+ quad_map["type"], symbol_type=DataTypeSymbol,
+ datatype=UnresolvedType(),
+ interface=ImportInterface(module))
+ self.symtab.find_or_create(
+ quad_map["proxy_type"], symbol_type=DataTypeSymbol,
+ datatype=UnresolvedType(),
+ interface=ImportInterface(module))
+
+ for shape in self._qr_vars.keys():
+ # The PSy-layer routine is passed objects of
+ # quadrature_* type
+ dt_symbol = self.symtab.lookup(
+ const.QUADRATURE_TYPE_MAP[shape]["type"])
+ for name in self._qr_vars[shape]:
+ new_arg = self.symtab.find_or_create(
+ name, symbol_type=DataSymbol, datatype=dt_symbol,
+ )
+ new_arg.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ self.symtab.append_argument(new_arg)
+
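
The module imports that were previously produced with ``UseGen`` are now
expressed purely through symbols, as in this standalone sketch (module and
symbol names are illustrative)::

    from psyclone.psyir.symbols import (ContainerSymbol, DataSymbol,
                                        ImportInterface, SymbolTable,
                                        UnresolvedType)

    table = SymbolTable()
    fs_mod = table.new_symbol("function_space_mod",
                              symbol_type=ContainerSymbol)
    table.new_symbol("BASIS", symbol_type=DataSymbol,
                     datatype=UnresolvedType(),
                     interface=ImportInterface(fs_mod))
    # When the enclosing routine is written out, the Fortran backend should
    # emit something like:  use function_space_mod, only : BASIS
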
+ def initialise(self, cursor):
'''
Create the declarations and assignments required for the
-        basis-functions required by an invoke. These are added as children
-        of the supplied parent node in the AST.
+        basis-functions required by an invoke. These are added to the
+        InvokeSchedule at the supplied cursor position.
- :param parent: the node in the f2pygen AST that will be the
- parent of all of the declarations and assignments.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+        :param int cursor: position at which to add the next initialisation
+ statements.
+ :returns: Updated cursor value.
+ :rtype: int
:raises InternalError: if an invalid entry is encountered in the \
self._basis_fns list.
@@ -2773,79 +2892,82 @@ def initialise(self, parent):
# pylint: disable=too-many-branches, too-many-locals
api_config = Config.get().api_conf("lfric")
const = LFRicConstants()
- basis_declarations = []
# We need BASIS and/or DIFF_BASIS if any kernel requires quadrature
# or an evaluator
if self._qr_vars or self._eval_targets:
- parent.add(
- UseGen(parent, name=const.
- FUNCTION_SPACE_TYPE_MAP["function_space"]["module"],
- only=True, funcnames=["BASIS", "DIFF_BASIS"]))
+ module = self.symtab.find_or_create(
+ const.FUNCTION_SPACE_TYPE_MAP["function_space"]["module"],
+ symbol_type=ContainerSymbol)
+ self.symtab.find_or_create(
+ "BASIS", symbol_type=DataSymbol, datatype=UnresolvedType(),
+ interface=ImportInterface(module))
+ self.symtab.find_or_create(
+ "DIFF_BASIS", symbol_type=DataSymbol,
+ datatype=UnresolvedType(), interface=ImportInterface(module))
if self._qr_vars:
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent, " Look-up quadrature variables"))
- parent.add(CommentGen(parent, ""))
-
+ init_cursor = cursor
# Look-up the module- and type-names from the QUADRATURE_TYPE_MAP
for shp in self._qr_vars:
quad_map = const.QUADRATURE_TYPE_MAP[shp]
- parent.add(UseGen(parent,
- name=quad_map["module"],
- only=True,
- funcnames=[quad_map["type"],
- quad_map["proxy_type"]]))
- self._initialise_xyz_qr(parent)
- self._initialise_xyoz_qr(parent)
- self._initialise_xoyoz_qr(parent)
- self._initialise_face_or_edge_qr(parent, "face")
- self._initialise_face_or_edge_qr(parent, "edge")
-
- if self._eval_targets:
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent,
- " Initialise evaluator-related quantities "
- "for the target function spaces"))
- parent.add(CommentGen(parent, ""))
-
+ module = self.symtab.find_or_create(
+ quad_map["module"],
+ symbol_type=ContainerSymbol)
+ symbol = self.symtab.lookup(quad_map["type"])
+ symbol.interface = ImportInterface(module)
+ symbol = self.symtab.lookup(quad_map["proxy_type"])
+ symbol.interface = ImportInterface(module)
+
+ cursor = self._initialise_xyz_qr(cursor)
+ cursor = self._initialise_xyoz_qr(cursor)
+ cursor = self._initialise_xoyoz_qr(cursor)
+ cursor = self._initialise_face_or_edge_qr(cursor, "face")
+ cursor = self._initialise_face_or_edge_qr(cursor, "edge")
+
+ if init_cursor < cursor:
+ self._invoke.schedule[init_cursor].preceding_comment = (
+ "Look-up quadrature variables")
+
+ first = True
for (fspace, arg) in self._eval_targets.values():
# We need the list of nodes for each unique FS upon which we need
# to evaluate basis/diff-basis functions
nodes_name = "nodes_" + fspace.mangled_name
- parent.add(AssignGen(
- parent, lhs=nodes_name,
- rhs="%".join([arg.proxy_name_indexed, arg.ref_name(fspace),
- "get_nodes()"]),
- pointer=True))
- my_kind = api_config.default_kind["real"]
- parent.add(DeclGen(parent, datatype="real",
- kind=my_kind,
- pointer=True,
- entity_decls=[nodes_name+"(:,:) => null()"]))
- const_mod = const.UTILITIES_MOD_MAP["constants"]["module"]
- const_mod_uses = self._invoke.invokes.psy. \
- infrastructure_modules[const_mod]
- # Record that we will need to import the kind for a
- # pointer declaration (associated with a function
- # space) from the appropriate infrastructure module
- const_mod_uses.add(my_kind)
-
- if self._basis_fns:
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent, " Allocate basis/diff-basis arrays"))
- parent.add(CommentGen(parent, ""))
-
+ kind = api_config.default_kind["real"]
+ symbol = self.symtab.new_symbol(
+ nodes_name, symbol_type=DataSymbol,
+ datatype=UnsupportedFortranType(
+ f"real(kind={kind}), pointer :: {nodes_name}"
+ f"(:,:) => null()",
+ partial_datatype=ArrayType(
+ LFRicTypes("LFRicRealScalarDataType")(),
+ [ArrayType.Extent.DEFERRED]*2)
+ ))
+ assignment = Assignment.create(
+ lhs=Reference(symbol),
+ rhs=arg.generate_method_call(
+ "get_nodes", function_space=fspace),
+ is_pointer=True)
+ if first:
+ assignment.preceding_comment = (
+ "Initialise evaluator-related quantities for the target "
+ "function spaces")
+ first = False
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
+
+ init_cursor = cursor
var_dim_list = []
for basis_fn in self._basis_fns:
# Get the extent of the first dimension of the basis array.
if basis_fn['type'] == "basis":
first_dim = self.basis_first_dim_name(basis_fn["fspace"])
- dim_space = "get_dim_space()"
+ dim_space = "get_dim_space"
elif basis_fn['type'] == "diff-basis":
first_dim = self.diff_basis_first_dim_name(
basis_fn["fspace"])
- dim_space = "get_dim_space_diff()"
+ dim_space = "get_dim_space_diff"
else:
raise InternalError(
f"Unrecognised type of basis function: "
@@ -2854,40 +2976,44 @@ def initialise(self, parent):
if first_dim not in var_dim_list:
var_dim_list.append(first_dim)
- rhs = "%".join(
- [basis_fn["arg"].proxy_name_indexed,
- basis_fn["arg"].ref_name(basis_fn["fspace"]),
- dim_space])
- parent.add(AssignGen(parent, lhs=first_dim, rhs=rhs))
+ symbol = self.symtab.find_or_create(
+ first_dim, symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
- var_dims, basis_arrays = self._basis_fn_declns()
+ assignment = Assignment.create(
+ lhs=Reference(symbol),
+ rhs=basis_fn["arg"].generate_method_call(
+ dim_space, function_space=basis_fn["fspace"]))
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
- if var_dims:
- # declare dim and diff_dim for all function spaces
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- entity_decls=var_dims))
+ _, basis_arrays = self._basis_fn_declns()
- basis_declarations = []
+ # Allocate basis arrays
for basis in basis_arrays:
- parent.add(
- AllocateGen(parent,
- basis+"("+", ".join(basis_arrays[basis])+")"))
- basis_declarations.append(
- basis+"("+",".join([":"]*len(basis_arrays[basis]))+")")
-
- # declare the basis function arrays
- if basis_declarations:
- my_kind = api_config.default_kind["real"]
- parent.add(DeclGen(parent, datatype="real", kind=my_kind,
- allocatable=True,
- entity_decls=basis_declarations))
- # Default kind (r_def) will always already exist due to
- # arrays associated with gh_shape, so there is no need to
- # declare it here.
+ dims = "("+",".join([":"]*len(basis_arrays[basis]))+")"
+ symbol = self.symtab.find_or_create(
+ basis, symbol_type=DataSymbol, datatype=UnsupportedFortranType(
+ f"real(kind=r_def), allocatable :: {basis}{dims}"
+ ))
+ alloc = IntrinsicCall.create(
+ IntrinsicCall.Intrinsic.ALLOCATE,
+ [ArrayReference.create(
+ symbol,
+ [Reference(self.symtab.find_or_create(
+ bn, symbol_type=DataSymbol,
+ datatype=UnresolvedType()))
+ for bn in basis_arrays[basis]])]
+ )
+ self._invoke.schedule.addchild(alloc, cursor)
+ cursor += 1
# Compute the values for any basis arrays
- self._compute_basis_fns(parent)
+ cursor = self._compute_basis_fns(cursor)
+ if init_cursor < cursor:
+ self._invoke.schedule[init_cursor].preceding_comment = (
+ "Allocate basis/diff-basis arrays")
+ return cursor
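
Block comments that used to be separate ``CommentGen`` entries are now
attached to the first statement of the block via ``preceding_comment``, as in
this sketch (the symbol name is invented)::

    from psyclone.psyir.backend.fortran import FortranWriter
    from psyclone.psyir.nodes import Assignment, Literal, Reference
    from psyclone.psyir.symbols import DataSymbol, INTEGER_TYPE

    dim = DataSymbol("dim_w3", INTEGER_TYPE)
    assign = Assignment.create(Reference(dim), Literal("3", INTEGER_TYPE))
    assign.preceding_comment = "Allocate basis/diff-basis arrays"
    # Expected to render roughly as:
    #   ! Allocate basis/diff-basis arrays
    #   dim_w3 = 3
    print(FortranWriter()(assign))
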
def _basis_fn_declns(self):
'''
@@ -3001,104 +3127,123 @@ def _basis_fn_declns(self):
return (var_dim_list, basis_arrays)
- def _initialise_xyz_qr(self, parent):
+ def _initialise_xyz_qr(self, cursor):
'''
Add in the initialisation of variables needed for XYZ
quadrature
- :param parent: the node in the AST representing the PSy subroutine
- in which to insert the initialisation
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+        :param int cursor: position at which to add the next initialisation
+ statements.
+ :returns: Updated cursor value.
+ :rtype: int
'''
# pylint: disable=unused-argument
# This shape is not yet supported so we do nothing
- return
+ return cursor
- def _initialise_xyoz_qr(self, parent):
+ def _initialise_xyoz_qr(self, cursor):
'''
Add in the initialisation of variables needed for XYoZ
quadrature
- :param parent: the node in the AST representing the PSy subroutine
- in which to insert the initialisation
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+        :param int cursor: position at which to add the next initialisation
+ statements.
+ :returns: Updated cursor value.
+ :rtype: int
'''
- api_config = Config.get().api_conf("lfric")
+ const = LFRicConstants()
if "gh_quadrature_xyoz" not in self._qr_vars:
- return
+ return cursor
for qr_arg_name in self._qr_vars["gh_quadrature_xyoz"]:
# We generate unique names for the integers holding the numbers
# of quadrature points by appending the name of the quadrature
# argument
- parent.add(
- DeclGen(
- parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- entity_decls=[name+"_"+qr_arg_name
- for name in self.qr_dim_vars["xyoz"]]))
- decl_list = [name+"_"+qr_arg_name+"(:) => null()"
- for name in self.qr_weight_vars["xyoz"]]
- const = LFRicConstants()
- datatype = \
+ for name in self.qr_dim_vars["xyoz"]:
+ self.symtab.new_symbol(
+ name+"_"+qr_arg_name, symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ dtype = \
const.QUADRATURE_TYPE_MAP["gh_quadrature_xyoz"]["intrinsic"]
kind = const.QUADRATURE_TYPE_MAP["gh_quadrature_xyoz"]["kind"]
- parent.add(
- DeclGen(parent, datatype=datatype, kind=kind,
- pointer=True, entity_decls=decl_list))
- const_mod = const.UTILITIES_MOD_MAP["constants"]["module"]
- const_mod_uses = self._invoke.invokes.psy. \
- infrastructure_modules[const_mod]
- # Record that we will need to import the kind for a
- # declaration (associated with quadrature) from
- # the appropriate infrastructure module
- const_mod_uses.add(kind)
+ for name in self.qr_weight_vars["xyoz"]:
+ self.symtab.find_or_create(
+ name+"_"+qr_arg_name, symbol_type=DataSymbol,
+ datatype=UnsupportedFortranType(
+ f"{dtype}(kind={kind}), pointer :: "
+ f"{name}_{qr_arg_name}(:) => null()"))
# Get the quadrature proxy
- proxy_name = qr_arg_name + "_proxy"
- parent.add(
- AssignGen(parent, lhs=proxy_name,
- rhs=qr_arg_name+"%"+"get_quadrature_proxy()"))
+ dtp_symbol = self.symtab.lookup(
+ const.QUADRATURE_TYPE_MAP["gh_quadrature_xyoz"]["proxy_type"])
+ proxy_symbol = self.symtab.find_or_create(
+ qr_arg_name+"_proxy", symbol_type=DataSymbol,
+ datatype=dtp_symbol)
+ symbol = self.symtab.lookup(qr_arg_name)
+
+ assignment = Assignment.create(
+ lhs=Reference(proxy_symbol),
+ rhs=Call.create(
+ StructureReference.create(
+ symbol, ['get_quadrature_proxy'])))
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
+
# Number of points in each dimension
for qr_var in self.qr_dim_vars["xyoz"]:
- parent.add(
- AssignGen(parent, lhs=qr_var+"_"+qr_arg_name,
- rhs=proxy_name+"%"+qr_var))
+ self._invoke.schedule.addchild(
+ Assignment.create(
+ lhs=Reference(self.symtab.lookup(
+ qr_var+"_"+qr_arg_name)),
+ rhs=StructureReference.create(
+ proxy_symbol, [qr_var])),
+ cursor)
+ cursor += 1
+
# Pointers to the weights arrays
for qr_var in self.qr_weight_vars["xyoz"]:
- parent.add(
- AssignGen(parent, pointer=True,
- lhs=qr_var+"_"+qr_arg_name,
- rhs=proxy_name+"%"+qr_var))
+ self._invoke.schedule.addchild(
+ Assignment.create(
+ lhs=Reference(self.symtab.lookup(
+ qr_var+"_"+qr_arg_name)),
+ rhs=StructureReference.create(
+ proxy_symbol, [qr_var]),
+ is_pointer=True),
+ cursor)
+ cursor += 1
+
+ return cursor
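
Derived-type accesses that were previously assembled as strings (e.g.
``qr_proxy%np_xy``) are now constructed as PSyIR nodes. A standalone sketch
with invented symbol and type names::

    from psyclone.psyir.nodes import Assignment, Reference, StructureReference
    from psyclone.psyir.symbols import (DataSymbol, DataTypeSymbol,
                                        INTEGER_TYPE, UnresolvedType)

    proxy_type = DataTypeSymbol("quadrature_xyoz_proxy_type",
                                UnresolvedType())
    proxy = DataSymbol("qr_proxy", proxy_type)
    np_xy = DataSymbol("np_xy_qr", INTEGER_TYPE)
    # PSyIR equivalent of the Fortran statement "np_xy_qr = qr_proxy%np_xy".
    assign = Assignment.create(Reference(np_xy),
                               StructureReference.create(proxy, ["np_xy"]))
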
- def _initialise_xoyoz_qr(self, parent):
+ def _initialise_xoyoz_qr(self, cursor):
'''
Add in the initialisation of variables needed for XoYoZ
quadrature.
- :param parent: the node in the AST representing the PSy subroutine \
- in which to insert the initialisation.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+        :param int cursor: position at which to add the next initialisation
+ statements.
+ :returns: Updated cursor value.
+ :rtype: int
'''
# pylint: disable=unused-argument
# This shape is not yet supported so we do nothing
- return
+ return cursor
- def _initialise_face_or_edge_qr(self, parent, qr_type):
+ def _initialise_face_or_edge_qr(self, cursor, qr_type):
'''
Add in the initialisation of variables needed for face or edge
quadrature.
- :param parent: the node in the AST representing the PSy subroutine \
- in which to insert the initialisation.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+        :param int cursor: position at which to add the next initialisation
+ statements.
:param str qr_type: whether to generate initialisation code for \
"face" or "edge" quadrature.
+ :returns: Updated cursor value.
+ :rtype: int
:raises InternalError: if `qr_type` is not "face" or "edge".
@@ -3111,84 +3256,96 @@ def _initialise_face_or_edge_qr(self, parent, qr_type):
quadrature_name = f"gh_quadrature_{qr_type}"
if quadrature_name not in self._qr_vars:
- return
-
- api_config = Config.get().api_conf("lfric")
- symbol_table = self._symbol_table
+ return cursor
for qr_arg_name in self._qr_vars[quadrature_name]:
+
+ arg_symbol = self.symtab.lookup(qr_arg_name)
+ arg_symbol.interface = ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ self.symtab.append_argument(arg_symbol)
+
# We generate unique names for the integers holding the numbers
# of quadrature points by appending the name of the quadrature
# argument
- decl_list = [
- symbol_table.find_or_create_integer_symbol(
- name+"_"+qr_arg_name, tag=name+"_"+qr_arg_name).name
- for name in self.qr_dim_vars[qr_type]]
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- entity_decls=decl_list))
-
- names = [f"{name}_{qr_arg_name}"
- for name in self.qr_weight_vars[qr_type]]
- decl_list = [
- symbol_table.find_or_create_array(name, 2,
- ScalarType.Intrinsic.REAL,
- tag=name).name
- + "(:,:) => null()" for name in names]
+ for name in self.qr_dim_vars[qr_type]:
+ self.symtab.find_or_create(
+ name+"_"+qr_arg_name, tag=name+"_"+qr_arg_name,
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+
+ array_type = ArrayType(
+ LFRicTypes("LFRicRealScalarDataType")(),
+ [ArrayType.Extent.DEFERRED]*2)
+ for name in self.qr_weight_vars[qr_type]:
+ self.symtab.find_or_create(
+ f"{name}_{qr_arg_name}", symbol_type=DataSymbol,
+ datatype=UnsupportedFortranType(
+ f"real(kind=r_def), pointer, dimension(:,:) :: "
+ f"{name}_{qr_arg_name} => null()\n",
+ partial_datatype=array_type
+ ),
+ tag=f"{name}_{qr_arg_name}")
const = LFRicConstants()
- datatype = const.QUADRATURE_TYPE_MAP[quadrature_name]["intrinsic"]
- kind = const.QUADRATURE_TYPE_MAP[quadrature_name]["kind"]
- parent.add(
- DeclGen(parent, datatype=datatype, pointer=True, kind=kind,
- entity_decls=decl_list))
- const_mod = const.UTILITIES_MOD_MAP["constants"]["module"]
- const_mod_uses = self._invoke.invokes.psy. \
- infrastructure_modules[const_mod]
- # Record that we will need to import the kind for a
- # declaration (associated with quadrature) from the
- # appropriate infrastructure module
- const_mod_uses.add(kind)
+
# Get the quadrature proxy
- proxy_name = symbol_table.find_or_create_tag(
- qr_arg_name+"_proxy").name
- parent.add(
- AssignGen(parent, lhs=proxy_name,
- rhs=qr_arg_name+"%"+"get_quadrature_proxy()"))
+ ptype = self.symtab.lookup(
+ const.QUADRATURE_TYPE_MAP[quadrature_name]["proxy_type"])
+
+ proxy_sym = self.symtab.find_or_create_tag(
+ qr_arg_name+"_proxy", symbol_type=DataSymbol, datatype=ptype)
+ call = Call.create(
+ StructureReference.create(
+ self.symtab.lookup(qr_arg_name),
+ ["get_quadrature_proxy"]))
+ assignment = Assignment.create(
+ lhs=Reference(proxy_sym),
+ rhs=call)
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
+
# The dimensioning variables required for this quadrature
# (e.g. nedges/nfaces, np_xyz)
for qr_var in self.qr_dim_vars[qr_type]:
- parent.add(
- AssignGen(parent, lhs=qr_var+"_"+qr_arg_name,
- rhs=proxy_name+"%"+qr_var))
+ qr_sym = self.symtab.lookup(qr_var+'_'+qr_arg_name)
+ assignment = Assignment.create(
+ lhs=Reference(qr_sym),
+ rhs=StructureReference.create(proxy_sym, [qr_var]))
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
+
# Pointers to the weights arrays
for qr_var in self.qr_weight_vars[qr_type]:
- parent.add(
- AssignGen(parent, pointer=True,
- lhs=qr_var+"_"+qr_arg_name,
- rhs=proxy_name+"%"+qr_var))
+ qr_sym = self.symtab.lookup(qr_var+'_'+qr_arg_name)
+ assignment = Assignment.create(
+ lhs=Reference(qr_sym),
+ rhs=StructureReference.create(
+ proxy_sym, [qr_var]),
+ is_pointer=True)
+ self._invoke.schedule.addchild(assignment, cursor)
+ cursor += 1
- def _compute_basis_fns(self, parent):
+ return cursor
+
+ def _compute_basis_fns(self, cursor):
'''
Generates the necessary Fortran to compute the values of
any basis/diff-basis arrays required
- :param parent: Node in the f2pygen AST which will be the parent
- of the assignments created in this routine
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+        :param int cursor: position at which to add the next initialisation
+ statements.
+ :returns: Updated cursor value.
+ :rtype: int
'''
# pylint: disable=too-many-locals
const = LFRicConstants()
- api_config = Config.get().api_conf("lfric")
loop_var_list = set()
op_name_list = []
- # add calls to compute the values of any basis arrays
- if self._basis_fns:
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent, " Compute basis/diff-basis arrays"))
- parent.add(CommentGen(parent, ""))
+ # add calls to compute the values of any basis arrays
+ first = True
for basis_fn in self._basis_fns:
# Currently there are only two possible types of basis function
@@ -3217,15 +3374,24 @@ def _compute_basis_fns(self, parent):
op_name_list.append(op_name)
# Create the argument list
- args = [basis_type, basis_fn["arg"].proxy_name_indexed + "%" +
- basis_fn["arg"].ref_name(basis_fn["fspace"]),
- first_dim, basis_fn["fspace"].ndf_name, op_name]
+ args = [Reference(self.symtab.lookup(basis_type)),
+ basis_fn["arg"].generate_accessor(basis_fn["fspace"]),
+ Reference(self.symtab.lookup(first_dim)),
+ Reference(self.symtab.lookup(
+ basis_fn["fspace"].ndf_name)),
+ Reference(self.symtab.lookup(op_name))]
# insert the basis array call
- parent.add(
- CallGen(parent,
- name=basis_fn["qr_var"]+"%compute_function",
- args=args))
+ call = Call.create(
+ StructureReference.create(
+ self.symtab.lookup(basis_fn["qr_var"]),
+ ["compute_function"]),
+ args)
+ if first:
+ call.preceding_comment = "Compute basis/diff-basis arrays"
+ first = False
+ self._invoke.schedule.addchild(call, cursor)
+ cursor += 1
elif basis_fn["shape"].lower() == "gh_evaluator":
# We have an evaluator. We may need this on more than one
# function space.
@@ -3241,50 +3407,68 @@ def _compute_basis_fns(self, parent):
loop_var_list.add(nodal_loop_var)
# Loop over dofs of target function space
- nodal_dof_loop = DoGen(
- parent, nodal_loop_var, "1", space.ndf_name)
- parent.add(nodal_dof_loop)
+ symbol = self.symtab.find_or_create_tag(
+ nodal_loop_var,
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ loop = Loop.create(
+ symbol, Literal('1', INTEGER_TYPE),
+ Reference(self.symtab.lookup(space.ndf_name)),
+ Literal('1', INTEGER_TYPE), [])
+ if first:
+ loop.preceding_comment = (
+ "Compute basis/diff-basis arrays")
+ first = False
+ self._invoke.schedule.addchild(loop, cursor)
+ cursor += 1
dof_loop_var = "df_" + basis_fn["fspace"].mangled_name
loop_var_list.add(dof_loop_var)
- dof_loop = DoGen(nodal_dof_loop, dof_loop_var,
- "1", basis_fn["fspace"].ndf_name)
- nodal_dof_loop.add(dof_loop)
- lhs = op_name + "(:," + "df_" + \
- basis_fn["fspace"].mangled_name + "," + "df_nodal)"
- rhs = (f"{basis_fn['arg'].proxy_name_indexed}%"
- f"{basis_fn['arg'].ref_name(basis_fn['fspace'])}%"
- f"call_function({basis_type},{dof_loop_var},nodes_"
- f"{space.mangled_name}(:,{nodal_loop_var}))")
- dof_loop.add(AssignGen(dof_loop, lhs=lhs, rhs=rhs))
+ symbol = self.symtab.find_or_create_tag(
+ dof_loop_var,
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")())
+ inner_loop = Loop.create(
+ symbol, Literal('1', INTEGER_TYPE),
+ Reference(self.symtab.lookup(
+ basis_fn["fspace"].ndf_name)),
+ Literal('1', INTEGER_TYPE), [])
+ loop.loop_body.addchild(inner_loop)
+
+ symbol = self.symtab.lookup(op_name)
+ rhs = basis_fn['arg'].generate_method_call(
+ "call_function", function_space=basis_fn['fspace'])
+ rhs.addchild(Reference(self.symtab.lookup(basis_type)))
+ rhs.addchild(Reference(self.symtab.lookup(dof_loop_var)))
+ rhs.addchild(ArrayReference.create(
+ self.symtab.lookup(f"nodes_{space.mangled_name}"),
+ [":", Reference(self.symtab.lookup(
+ nodal_loop_var))]))
+ inner_loop.loop_body.addchild(
+ Assignment.create(
+ lhs=ArrayReference.create(symbol, [
+ ":",
+ Reference(self.symtab.lookup(
+ f"df_{basis_fn['fspace'].mangled_name}")),
+ Reference(self.symtab.lookup("df_nodal"))
+ ]),
+ rhs=rhs))
else:
raise InternalError(
f"Unrecognised shape '{basis_fn['''shape''']}' specified "
f"for basis function. Should be one of: "
f"{const.VALID_EVALUATOR_SHAPES}")
- if loop_var_list:
- # Declare any loop variables
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- entity_decls=sorted(loop_var_list)))
+ return cursor
- def deallocate(self, parent):
+ def deallocate(self):
'''
- Add code to deallocate all basis/diff-basis function arrays
-
- :param parent: node in the f2pygen AST to which the deallocate \
- calls will be added.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
+ Add code (at the end of the Invoke Schedule) to deallocate all
+ basis/diff-basis function arrays.
- :raises InternalError: if an unrecognised type of basis function \
+ :raises InternalError: if an unrecognised type of basis function
is encountered.
'''
- if self._basis_fns:
- # deallocate all allocated basis function arrays
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent, " Deallocate basis arrays"))
- parent.add(CommentGen(parent, ""))
func_space_var_names = set()
for basis_fn in self._basis_fns:
@@ -3305,9 +3489,17 @@ def deallocate(self, parent):
on_space=fspace)
func_space_var_names.add(op_name)
if func_space_var_names:
# add the required deallocate call
- parent.add(DeallocateGen(parent, sorted(func_space_var_names)))
+ dealloc = IntrinsicCall.create(
+ IntrinsicCall.Intrinsic.DEALLOCATE,
+ [Reference(self.symtab.lookup(name)) for name in
+ sorted(func_space_var_names)]
+ )
+            dealloc.preceding_comment = "Deallocate basis arrays"
+ self._invoke.schedule.children.append(dealloc)
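
The explicit ``DeallocateGen`` is replaced by a generic ``IntrinsicCall``; a
minimal sketch (the array name is invented)::

    from psyclone.psyir.nodes import IntrinsicCall, Reference
    from psyclone.psyir.symbols import ArrayType, DataSymbol, REAL_TYPE

    basis = DataSymbol("basis_w3_qr",
                       ArrayType(REAL_TYPE, [ArrayType.Extent.DEFERRED]*3))
    dealloc = IntrinsicCall.create(IntrinsicCall.Intrinsic.DEALLOCATE,
                                   [Reference(basis)])
    dealloc.preceding_comment = "Deallocate basis arrays"
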
class DynBoundaryConditions(LFRicCollection):
@@ -3339,7 +3531,7 @@ def __init__(self, node):
# pylint: disable=import-outside-toplevel
from psyclone.domain.lfric.metadata_to_arguments_rules import (
MetadataToArgumentsRules)
- for call in self._calls:
+ for call in self.kernel_calls:
if MetadataToArgumentsRules.bc_kern_regex.match(call.name):
bc_fs = None
for fspace in call.arguments.unique_fss:
@@ -3363,57 +3555,75 @@ def __init__(self, node):
bc_fs = op_arg.function_space_to
self._boundary_dofs.append(self.BoundaryDofs(op_arg, bc_fs))
- def _invoke_declarations(self, parent):
+ def invoke_declarations(self):
'''
Add declarations for any boundary-dofs arrays required by an Invoke.
- :param parent: node in the PSyIR to which to add declarations.
- :type parent: :py:class:`psyclone.psyir.nodes.Node`
-
'''
+ super().invoke_declarations()
api_config = Config.get().api_conf("lfric")
for dofs in self._boundary_dofs:
name = "boundary_dofs_" + dofs.argument.name
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- pointer=True,
- entity_decls=[name+"(:,:) => null()"]))
-
- def _stub_declarations(self, parent):
+ kind = api_config.default_kind["integer"]
+ dtype = UnsupportedFortranType(
+ f"integer(kind={kind}), pointer "
+ f":: {name}(:,:) => null()",
+ partial_datatype=ArrayType(
+ LFRicTypes("LFRicIntegerScalarDataType")(),
+ [ArrayType.Extent.DEFERRED]*2)
+ )
+ self.symtab.new_symbol(
+ name,
+ symbol_type=DataSymbol,
+ datatype=dtype)
+
+ def stub_declarations(self):
'''
Add declarations for any boundary-dofs arrays required by a kernel.
-
- :param parent: node in the PSyIR to which to add declarations.
- :type parent: :py:class:`psyclone.psyir.nodes.Node`
+ Note that argument order is redefined later by ArgOrdering.
'''
- api_config = Config.get().api_conf("lfric")
-
+ super().stub_declarations()
for dofs in self._boundary_dofs:
name = "boundary_dofs_" + dofs.argument.name
- ndf_name = dofs.function_space.ndf_name
- parent.add(DeclGen(parent, datatype="integer",
- kind=api_config.default_kind["integer"],
- intent="in",
- dimension=",".join([ndf_name, "2"]),
- entity_decls=[name]))
-
- def initialise(self, parent):
+ ndf_name = self.symtab.lookup(dofs.function_space.ndf_name)
+ dtype = ArrayType(
+ LFRicTypes("LFRicIntegerScalarDataType")(),
+ [Reference(ndf_name), Literal("2", INTEGER_TYPE)])
+ new_symbol = self.symtab.new_symbol(
+ name,
+ symbol_type=DataSymbol,
+ datatype=dtype,
+ interface=ArgumentInterface(
+ ArgumentInterface.Access.READ)
+ )
+ self.symtab.append_argument(new_symbol)
+
+ def initialise(self, cursor):
'''
Initialise any boundary-dofs arrays required by an Invoke.
- :param parent: node in PSyIR to which to add declarations.
- :type parent: :py:class:`psyclone.psyir.nodes.Node`
+        :param int cursor: position at which to add the next initialisation
+ statements.
+ :returns: Updated cursor value.
+ :rtype: int
'''
for dofs in self._boundary_dofs:
name = "boundary_dofs_" + dofs.argument.name
- parent.add(AssignGen(
- parent, pointer=True, lhs=name,
- rhs="%".join([dofs.argument.proxy_name,
- dofs.argument.ref_name(dofs.function_space),
- "get_boundary_dofs()"])))
+ self._invoke.schedule.addchild(
+ Assignment.create(
+ lhs=Reference(self.symtab.lookup(name)),
+ rhs=dofs.argument.generate_method_call(
+ "get_boundary_dofs",
+ function_space=dofs.function_space),
+ is_pointer=True
+ ),
+ cursor)
+ cursor += 1
+
+ return cursor
class DynGlobalSum(GlobalSum):
@@ -3452,27 +3662,41 @@ def __init__(self, scalar, parent=None):
# Initialise the parent class
super().__init__(scalar, parent=parent)
- def gen_code(self, parent):
+ def lower_to_language_level(self):
'''
- Dynamo-specific code generation for this class.
-
- :param parent: f2pygen node to which to add AST nodes.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
+ :returns: this node lowered to language-level PSyIR.
+ :rtype: :py:class:`psyclone.psyir.nodes.Node`
'''
+
+ # Get the name strings to use
name = self._scalar.name
- # Use InvokeSchedule SymbolTable to share the same symbol for all
- # GlobalSums in the Invoke.
- sum_name = self.ancestor(InvokeSchedule).symbol_table.\
- find_or_create_tag("global_sum").name
- sum_type = self._scalar.data_type
- sum_mod = self._scalar.module_name
- parent.add(UseGen(parent, name=sum_mod, only=True,
- funcnames=[sum_type]))
- parent.add(TypeDeclGen(parent, datatype=sum_type,
- entity_decls=[sum_name]))
- parent.add(AssignGen(parent, lhs=sum_name+"%value", rhs=name))
- parent.add(AssignGen(parent, lhs=name, rhs=sum_name+"%get_sum()"))
+ type_name = self._scalar.data_type
+ mod_name = self._scalar.module_name
+
+ # Get the symbols from the given names
+ symtab = self.ancestor(InvokeSchedule).symbol_table
+ sum_mod = symtab.find_or_create(mod_name, symbol_type=ContainerSymbol)
+ sum_type = symtab.find_or_create(type_name,
+ symbol_type=DataTypeSymbol,
+ datatype=UnresolvedType(),
+ interface=ImportInterface(sum_mod))
+ sum_name = symtab.find_or_create_tag("global_sum",
+ symbol_type=DataSymbol,
+ datatype=sum_type)
+ tmp_var = symtab.lookup(name)
+
+ # Create the assignments
+ assign1 = Assignment.create(
+ lhs=StructureReference.create(sum_name, ["value"]),
+ rhs=Reference(tmp_var)
+ )
+ assign1.preceding_comment = "Perform global sum"
+ self.parent.addchild(assign1, self.position)
+ assign2 = Assignment.create(
+ lhs=Reference(tmp_var),
+ rhs=Call.create(StructureReference.create(sum_name, ["get_sum"]))
+ )
+        self.replace_with(assign2)
+        return assign2
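
``Node.replace_with`` swaps a node in the tree in place, which is why lowering
methods return the replacement node explicitly. A small sketch::

    from psyclone.psyir.nodes import (Assignment, Literal, Reference, Return,
                                      Schedule)
    from psyclone.psyir.symbols import DataSymbol, INTEGER_TYPE

    sched = Schedule()
    old = Return()
    sched.addchild(old)
    new = Assignment.create(Reference(DataSymbol("x", INTEGER_TYPE)),
                            Literal("1", INTEGER_TYPE))
    old.replace_with(new)  # modifies the tree in place
    assert sched.children[0] is new
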
def _create_depth_list(halo_info_list, parent):
@@ -3741,7 +3965,7 @@ def _compute_halo_read_info(self, ignore_hex_dep=False):
raise InternalError(
"Internal logic error. There should be at least one read "
"dependence for a halo exchange.")
- return [HaloReadAccess(read_dependency, self._symbol_table) for
+ return [HaloReadAccess(read_dependency, self._parent) for
read_dependency in read_dependencies]
def _compute_halo_write_info(self):
@@ -3936,24 +4160,12 @@ def node_str(self, colour=True):
'''
_, known = self.required()
runtime_check = not known
- field_id = self._field.name
- if self.vector_index:
- field_id += f"({self.vector_index})"
+ field_id = self._field.name_indexed
return (f"{self.coloured_name(colour)}[field='{field_id}', "
f"type='{self._compute_stencil_type()}', "
f"depth={self._compute_halo_depth().debug_string()}, "
f"check_dirty={runtime_check}]")
- def gen_code(self, parent):
- '''Dynamo specific code generation for this class.
-
- :param parent: an f2pygen object that will be the parent of \
- f2pygen objects created in this method
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
-
- '''
- parent.add(PSyIRGen(parent, self))
-
def lower_to_language_level(self):
'''
:returns: this node lowered to language-level PSyIR.
@@ -3988,6 +4200,7 @@ def lower_to_language_level(self):
else:
haloex = if_body
+ haloex.preceding_comment = self.preceding_comment
self.replace_with(haloex)
return haloex
@@ -4161,6 +4374,8 @@ class HaloDepth():
access that is represented.
:type parent: :py:class:`psyclone.psyir.nodes.Node`
+ :raises TypeError: if the parent argument is not a Node.
+
'''
def __init__(self, parent):
# var_depth is used to store the PSyIR of the expression holding
@@ -4185,6 +4400,11 @@ def __init__(self, parent):
# variables holding the maximum halo depth.
# TODO #2503: This can become invalid if the HaloExchange
# containing this HaloDepth changes its ancestors.
+ if not isinstance(parent, Node):
+ raise TypeError(
+ f"The HaloDepth parent argument must be a Node, but found: "
+ f"{type(parent).__name__}"
+ )
self._parent = parent
@property
@@ -4259,7 +4479,7 @@ def set_by_value(self, max_depth, var_depth, annexed_only, max_depth_m1):
# have to create a fake Assignment and temporarily graft it into the
# tree.
fake_assign = Assignment.create(
- Reference(DataSymbol("tmp", INTEGER_TYPE)), var_depth.detach())
+ Reference(DataSymbol("tmp", INTEGER_TYPE)), var_depth.copy())
sched = self._parent.ancestor(Schedule, include_self=True)
sched.addchild(fake_assign)
@@ -4462,9 +4682,8 @@ class HaloReadAccess(HaloDepth):
:param field: the field for which we want information.
:type field: :py:class:`psyclone.dynamo0p3.DynKernelArgument`
- :param sym_table: the symbol table associated with the scoping region
- that contains this halo access.
- :type sym_table: :py:class:`psyclone.psyir.symbols.SymbolTable`
+ :param parent: the node where this HaloDepth belongs.
+ :type parent: :py:class:`psyclone.psyir.node.Node`
'''
def __init__(self, field, parent=None):
@@ -4877,8 +5096,11 @@ def __init__(self, call, parent_call, check=True):
# symbol_table.
tag = "AlgArgs_" + arg.stencil.extent_arg.text
root = arg.stencil.extent_arg.varname
- new_name = symtab.find_or_create_tag(tag, root).name
- arg.stencil.extent_arg.varname = new_name
+ symbol = symtab.find_or_create_tag(
+ tag, root, symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")()
+ )
+ arg.stencil.extent_arg.varname = symbol.name
if arg.descriptor.stencil['type'] == 'xory1d':
# a direction argument has been added
if arg.stencil.direction_arg.varname and \
@@ -4888,9 +5110,12 @@ def __init__(self, call, parent_call, check=True):
# it is unique in the PSy layer
tag = "AlgArgs_" + arg.stencil.direction_arg.text
root = arg.stencil.direction_arg.varname
- new_name = symtab.find_or_create_integer_symbol(
- root, tag=tag).name
- arg.stencil.direction_arg.varname = new_name
+ symbol = symtab.find_or_create_tag(
+ tag, root,
+ symbol_type=DataSymbol,
+ datatype=LFRicTypes("LFRicIntegerScalarDataType")()
+ )
+ arg.stencil.direction_arg.varname = symbol.name
self._dofs = []
@@ -5213,6 +5438,55 @@ def __init__(self, kernel_args, arg_meta_data, arg_info, call, check=True):
# already set up)
self._complete_init(arg_info)
+ def generate_method_call(self, method, function_space=None):
+ '''
+ Generate a PSyIR call to the given method of this object.
+
+ :param str method: name of the method to generate a call to.
+        :param function_space: the function space to use when dereferencing
+            this argument, if required.
+
+ :returns: the generated call.
+ :rtype: :py:class:`psyclone.psyir.nodes.Call`
+ '''
+
+        # Go through invoke.schedule in case the link has been updated
+ symtab = self._call.ancestor(InvokeSchedule).invoke.schedule\
+ .symbol_table
+ # Use the proxy variable as derived type base
+ symbol = symtab.lookup(self.proxy_name)
+
+ if self._vector_size > 1:
+ # For a field vector, just call the specified method on the first
+ # element
+ return Call.create(ArrayOfStructuresReference.create(
+ symbol, [Literal('1', INTEGER_TYPE)],
+ [self.ref_name(function_space), method]))
+ return Call.create(StructureReference.create(
+ symbol, [self.ref_name(function_space), method]))
+
+ def generate_accessor(self, function_space=None):
+ '''
+ Generate a Reference accessing this object's data.
+
+        :param function_space: the function space to use when dereferencing
+            this argument, if required.
+
+ :returns: the generated Reference.
+ :rtype: :py:class:`psyclone.psyir.nodes.Reference`
+ '''
+
+        # Go through invoke.schedule in case the link has been updated
+ symtab = self._call.ancestor(InvokeSchedule).invoke\
+ .schedule.symbol_table
+ symbol = symtab.lookup(self.proxy_name)
+
+ if self._vector_size > 1:
+ # For a field vector, access the first element
+ return ArrayOfStructuresReference.create(
+ symbol, [Literal('1', INTEGER_TYPE)],
+ [self.ref_name(function_space)])
+ return StructureReference.create(
+ symbol, [self.ref_name(function_space)])
+
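
For a field vector the generated access goes through the first element of the
proxy array (e.g. ``fld_proxy(1)%vspace%get_nodes()``). A standalone sketch
with invented names::

    from psyclone.psyir.nodes import (ArrayOfStructuresReference, Call,
                                      Literal)
    from psyclone.psyir.symbols import (ArrayType, DataSymbol, DataTypeSymbol,
                                        INTEGER_TYPE, UnresolvedType)

    proxy_type = DataTypeSymbol("field_proxy_type", UnresolvedType())
    fld_proxy = DataSymbol("fld_proxy", ArrayType(proxy_type, [3]))
    call = Call.create(ArrayOfStructuresReference.create(
        fld_proxy, [Literal("1", INTEGER_TYPE)], ["vspace", "get_nodes"]))
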
def ref_name(self, function_space=None):
'''
Returns the name used to dereference this type of argument (depends
diff --git a/src/psyclone/f2pygen.py b/src/psyclone/f2pygen.py
deleted file mode 100644
index d772b88e2f..0000000000
--- a/src/psyclone/f2pygen.py
+++ /dev/null
@@ -1,1467 +0,0 @@
-# -----------------------------------------------------------------------------
-# BSD 3-Clause License
-#
-# Copyright (c) 2017-2025, Science and Technology Facilities Council.
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are met:
-#
-# * Redistributions of source code must retain the above copyright notice, this
-# list of conditions and the following disclaimer.
-#
-# * Redistributions in binary form must reproduce the above copyright notice,
-# this list of conditions and the following disclaimer in the documentation
-# and/or other materials provided with the distribution.
-#
-# * Neither the name of the copyright holder nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
-# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
-# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
-# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-# -----------------------------------------------------------------------------
-# Authors R. W. Ford, A. R. Porter and S. Siso, STFC Daresbury Lab
-# Modified: A. B. G. Chalk and N. Nobre, STFC Daresbury Lab
-
-''' Fortran code-generation library. This wraps the f2py fortran parser to
- provide routines which can be used to generate fortran code. '''
-
-import abc
-from fparser.common.readfortran import FortranStringReader
-from fparser.common.sourceinfo import FortranFormat
-from fparser.one.statements import Comment, Case
-from fparser.one.block_statements import SelectCase, SelectType, EndSelect
-from fparser.one.parsefortran import FortranParser
-# This alias is useful to refer to parts of fparser.one later but
-# cannot be used for imports (as that involves looking for the
-# specified name in sys.modules).
-from fparser import one as fparser1
-from psyclone.configuration import Config
-from psyclone.errors import InternalError
-
-# Module-wide utility methods
-
-
-def bubble_up_type(obj):
- '''
- Checks whether the supplied object must be bubbled-up (e.g. from
- within DO loops).
-
- :returns: True if the supplied object is of a type which must be \
- bubbled-up and False otherwise.
- '''
- return isinstance(obj, (UseGen, BaseDeclGen))
-
-
-def index_of_object(alist, obj):
- '''Effectively implements list.index(obj) but returns the index of
- the first item in the list that *is* the supplied object (rather than
- comparing values) '''
- for idx, body in enumerate(alist):
- if body is obj:
- return idx
- raise Exception(f"Object {obj} not found in list")
-
-
-# This section subclasses the f2py comment class so that we can
-# reason about directives
-
-
-class Directive(Comment):
- '''
- Base class for directives so we can reason about them when walking
- the tree. Sub-classes the fparser1 Comment class.
-
- :param root: the parent node in the AST to which we are adding the \
- directive
- :type root: subclass of :py:class:`fparser.common.base_classes.Statement`
- :param line: the fparser object which we will manipulate to create \
- the desired directive.
- :type line: :py:class:`fparser.common.readfortran.Comment`
- :param str position: e.g. 'begin' or 'end' (language specific)
- :param str dir_type: the type of directive that this is (e.g. \
- 'parallel do')
- '''
- def __init__(self, root, line, position, dir_type):
- if dir_type not in self._types:
- raise RuntimeError(f"Error, unrecognised directive type "
- f"'{dir_type}'. Should be one of {self._types}")
- if position not in self._positions:
- raise RuntimeError(f"Error, unrecognised position '{position}'. "
- f"Should be one of {self._positions}")
- self._my_type = dir_type
- self._position = position
- Comment.__init__(self, root, line)
-
- @property
- def type(self):
- '''
- :returns: the type of this Directive.
- :rtype: str
- '''
- return self._my_type
-
- @property
- def position(self):
- '''
- :returns: the position of this Directive ('begin' or 'end').
- :rtype: str
- '''
- return self._position
-
-
-class OMPDirective(Directive):
- '''
- Subclass Directive for OpenMP directives so we can reason about
- them when walking the tree.
-
- :param root: the parent node in the AST to which we are adding the \
- directive.
- :type root: subclass of :py:class:`fparser.common.base_classes.Statement`
- :param line: the fparser object which we will manipulate to create \
- the desired directive.
- :type line: :py:class:`fparser.common.readfortran.Comment`
- :param str position: e.g. 'begin' or 'end' (language specific).
- :param str dir_type: the type of directive that this is (e.g. \
- 'parallel do').
- '''
- def __init__(self, root, line, position, dir_type):
- self._types = ["parallel do", "parallel", "do", "master", "single",
- "taskloop", "taskwait", "declare", "target", "teams",
- "teams distribute parallel do"]
- self._positions = ["begin", "end"]
-
- super(OMPDirective, self).__init__(root, line, position, dir_type)
-
-
-class ACCDirective(Directive):
- '''
- Subclass Directive for OpenACC directives so we can reason about them
- when walking the tree.
-
- :param root: the parent node in the AST to which we are adding the \
- directive.
- :type root: subclass of :py:class:`fparser.common.base_classes.Statement`
- :param line: the fparser object which we will manipulate to create \
- the desired directive.
- :type line: :py:class:`fparser.common.readfortran.Comment`
- :param str position: e.g. 'begin' or 'end' (language specific).
- :param str dir_type: the type of directive that this is (e.g. \
- 'loop').
- '''
- def __init__(self, root, line, position, dir_type):
- self._types = ["parallel", "kernels", "enter data", "loop", "routine"]
- self._positions = ["begin", "end"]
-
- super(ACCDirective, self).__init__(root, line, position, dir_type)
-
-
-# This section provides new classes which offer a relatively high-level
-# interface for creating code and adding it to an existing AST
-
-
-class BaseGen():
- ''' The base class for all classes that are responsible for generating
- distinct code elements (modules, subroutines, do loops etc.) '''
- def __init__(self, parent, root):
- self._parent = parent
- self._root = root
- self._children = []
-
- @property
- def parent(self):
- ''' Returns the parent of this object '''
- return self._parent
-
- @property
- def children(self):
- ''' Returns the list of children of this object '''
- return self._children
-
- @property
- def root(self):
- ''' Returns the root of the tree containing this object '''
- return self._root
-
- def add(self, new_object, position=None):
- '''Adds a new object to the tree. The actual position is determined by
- the position argument. Note, there are two trees, the first is
- the f2pygen object tree, the other is the f2py generated code
- tree. These are similar but different. At the moment we
- specify where to add things in terms of the f2pygen tree
- (which is a higher level api) but we also insert into the f2py
- tree at exactly the same location which needs to be sorted out
- at some point.
-
- '''
-
- # By default the position is 'append'. We set it up this way for
- # safety because in python, default arguments are instantiated
- # as objects at the time of definition. If this object is
- # subsequently modified then the value of the default argument
- # is modified for subsequent calls of this routine.
- if position is None:
- position = ["append"]
-
- if position[0] == "auto":
- raise Exception("Error: BaseGen:add: auto option must be "
- "implemented by the sub class!")
- options = ["append", "first", "after", "before", "insert",
- "before_index", "after_index"]
- if position[0] not in options:
- raise Exception(f"Error: BaseGen:add: supported positions are "
- f"{options} but found {position[0]}")
- if position[0] == "append":
- self.root.content.append(new_object.root)
- elif position[0] == "first":
- self.root.content.insert(0, new_object.root)
- elif position[0] == "insert":
- index = position[1]
- self.root.content.insert(index, new_object.root)
- elif position[0] == "after":
- idx = index_of_object(self.root.content, position[1])
- self.root.content.insert(idx+1, new_object.root)
- elif position[0] == "after_index":
- self.root.content.insert(position[1]+1, new_object.root)
- elif position[0] == "before_index":
- self.root.content.insert(position[1], new_object.root)
- elif position[0] == "before":
- try:
- idx = index_of_object(self.root.content, position[1])
- except Exception as err:
- print(str(err))
- raise RuntimeError(
- "Failed to find supplied object in existing content - "
- "is it a child of the parent?")
- self.root.content.insert(idx, new_object.root)
- else:
- raise Exception("Error: BaseGen:add: internal error, should "
- "not get to here")
- self.children.append(new_object)
-
- def previous_loop(self):
- ''' Returns the *last* occurrence of a loop in the list of
- siblings of this node '''
- from fparser.one.block_statements import Do
- for sibling in reversed(self.root.content):
- if isinstance(sibling, Do):
- return sibling
- raise RuntimeError("Error, no loop found - there is no previous loop")
-
- def last_declaration(self):
- '''Returns the *last* occurrence of a Declaration in the list of
- siblings of this node
-
- '''
- from fparser.one.typedecl_statements import TypeDeclarationStatement
- for sibling in reversed(self.root.content):
- if isinstance(sibling, TypeDeclarationStatement):
- return sibling
-
- raise RuntimeError("Error, no variable declarations found")
-
- def start_parent_loop(self, debug=False):
-        ''' Searches for the outer-most loop containing this object. Returns
-        the enclosing f2pygen node and the start of the loop nest. '''
- from fparser.one.block_statements import Do
- if debug:
- print("Entered before_parent_loop")
- print(f"The type of the current node is {type(self.root)}")
- print(("If the current node is a Do loop then move up to the "
- "top of the do loop nest"))
-
- # First off, check that we do actually have an enclosing Do loop
- current = self.root
- while not isinstance(current, Do) and getattr(current, 'parent', None):
- current = current.parent
- if not isinstance(current, Do):
- raise RuntimeError("This node has no enclosing Do loop")
-
- current = self.root
- local_current = self
- while isinstance(current.parent, Do):
- if debug:
- print("Parent is a do loop so moving to the parent")
- current = current.parent
- local_current = local_current.parent
- if debug:
- print("The type of the current node is now " + str(type(current)))
- print("The type of parent is " + str(type(current.parent)))
- print("Finding the loops position in its parent ...")
- index = current.parent.content.index(current)
- if debug:
- print("The loop's index is ", index)
- parent = current.parent
- local_current = local_current.parent
- if debug:
- print("The type of the object at the index is " +
- str(type(parent.content[index])))
- print("If preceding node is a directive then move back one")
- if index == 0:
- if debug:
- print("current index is 0 so finish")
- elif isinstance(parent.content[index-1], Directive):
- if debug:
- print(
- f"preceding node is a directive so find out what type ..."
- f"\n type is {parent.content[index-1].position}"
- f"\n diretive is {parent.content[index-1]}")
- if parent.content[index-1].position == "begin":
- if debug:
- print("type of directive is begin so move back one")
- index -= 1
- else:
- if debug:
- print("directive type is not begin so finish")
- else:
- if debug:
- print("preceding node is not a directive so finish")
- if debug:
- print("type of final location ", type(parent.content[index]))
- print("code for final location ", str(parent.content[index]))
- return local_current, parent.content[index]
-
-
-class ProgUnitGen(BaseGen):
- ''' Functionality relevant to program units (currently modules,
- subroutines)'''
- def __init__(self, parent, sub):
- BaseGen.__init__(self, parent, sub)
-
- def add(self, content, position=None, bubble_up=False):
- '''
-        Specialise the add method to provide module- and
-        subroutine-specific intelligent adding of use statements, implicit
- none statements and declarations if the position argument
- is set to auto (which is the default).
-
-        :param content: the Node (or sub-tree of Nodes) to add to \
- the AST.
- :type content: :py:class:`psyclone.f2pygen.BaseGen`
- :param list position: where to insert the node. One of "append", \
- "first", "insert", "after", "after_index", \
- "before_index", "before" or "auto". For the \
- *_index options, the second element of the \
- list holds the integer index.
- :param bool bubble_up: whether or not object (content) is in the \
- process of being bubbled-up.
- '''
- # By default the position is 'auto'. We set it up this way for
- # safety because in python, default arguments are instantiated
- # as objects at the time of definition. If this object is
- # subsequently modified then the value of the default argument
- # is modified for subsequent calls of this routine.
- if position is None:
- position = ["auto"]
-
- # For an object to be added to another we require that they
- # share a common ancestor. This means that the added object must
- # have the current object or one of its ancestors as an ancestor.
- # Loop over the ancestors of this object (starting with itself)
- self_ancestor = self.root
- while self_ancestor:
- # Loop over the ancestors of the object being added
- obj_parent = content.root.parent
- while (obj_parent != self_ancestor and
- getattr(obj_parent, 'parent', None)):
- obj_parent = obj_parent.parent
- if obj_parent == self_ancestor:
- break
- # Object being added is not an ancestor of the current
- # self_ancestor so move one level back up the tree and
- # try again
- if getattr(self_ancestor, 'parent', None):
- self_ancestor = self_ancestor.parent
- else:
- break
-
- if obj_parent != self_ancestor:
- raise RuntimeError(
- f"Cannot add '{content}' to '{self}' because it is not a "
- f"descendant of it or of any of its ancestors.")
-
- if bubble_up:
- # If content has been passed on (is being bubbled up) then change
- # its parent to be this object
- content.root.parent = self.root
-
- if position[0] != "auto":
- # position[0] is not 'auto' so the baseclass can deal with it
- BaseGen.add(self, content, position)
- else:
- # position[0] == "auto" so insert in a context sensitive way
- if isinstance(content, BaseDeclGen):
-
- if isinstance(content, (DeclGen, CharDeclGen)):
- # have I already been declared?
- for child in self._children:
- if isinstance(child, (DeclGen, CharDeclGen)):
- # is this declaration the same type as me?
- if child.root.name == content.root.name:
- # we are modifying the list so we need
- # to iterate over a copy
- for var_name in content.root.entity_decls[:]:
- for child_name in child.root.entity_decls:
- if var_name.lower() == \
- child_name.lower():
- content.root.entity_decls.\
- remove(var_name)
- if not content.root.entity_decls:
- # return as all variables in
- # this declaration already
- # exist
- return
- if isinstance(content, TypeDeclGen):
- # have I already been declared?
- for child in self._children:
- if isinstance(child, TypeDeclGen):
- # is this declaration the same type as me?
- if child.root.selector[1] == \
- content.root.selector[1]:
- # we are modifying the list so we need
- # to iterate over a copy
- for var_name in content.root.entity_decls[:]:
- for child_name in child.root.entity_decls:
- if var_name.lower() == \
- child_name.lower():
- content.root.entity_decls.\
- remove(var_name)
- if not content.root.entity_decls:
- # return as all variables in
- # this declaration already
- # exist
- return
-
- index = 0
- # skip over any use statements
- index = self._skip_use_and_comments(index)
- # skip over implicit none if it exists
- index = self._skip_imp_none_and_comments(index)
- # skip over any declarations which have an intent
- try:
- intent = True
- while intent:
- intent = False
- for attr in self.root.content[index].attrspec:
- if attr.find("intent") == 0:
- intent = True
- index += 1
- break
- except AttributeError:
- pass
- elif isinstance(content.root, fparser1.statements.Use):
- # have I already been declared?
- for child in self._children:
- if isinstance(child, UseGen):
- if child.root.name == content.root.name:
- # found an existing use with the same name
- if not child.root.isonly and not \
- content.root.isonly:
- # both are generic use statements so
- # skip this declaration
- return
- if child.root.isonly and not content.root.isonly:
- # new use is generic and existing use
- # is specific so we can safely add
- pass
- if not child.root.isonly and content.root.isonly:
- # existing use is generic and new use
- # is specific so we can skip this
- # declaration
- return
- if child.root.isonly and content.root.isonly:
- # we are modifying the list so we need
- # to iterate over a copy
- for new_name in content.root.items[:]:
- for existing_name in child.root.items:
- if existing_name.lower() == \
- new_name.lower():
- content.root.items.remove(new_name)
- if not content.root.items:
- return
- index = 0
- elif isinstance(content, ImplicitNoneGen):
- # does implicit none already exist?
- for child in self._children:
- if isinstance(child, ImplicitNoneGen):
- return
- # skip over any use statements
- index = 0
- index = self._skip_use_and_comments(index)
- else:
- index = len(self.root.content) - 1
- self.root.content.insert(index, content.root)
- self._children.append(content)
-
- def _skip_use_and_comments(self, index):
- ''' skip over any use statements and comments in the ast '''
- while isinstance(self.root.content[index],
- fparser1.statements.Use) or\
- isinstance(self.root.content[index],
- fparser1.statements.Comment):
- index += 1
- # now roll back to previous Use
- while isinstance(self.root.content[index-1],
- fparser1.statements.Comment):
- index -= 1
- return index
-
- def _skip_imp_none_and_comments(self, index):
- ''' skip over an implicit none statement if it exists and any
- comments before it '''
- end_index = index
- while isinstance(self.root.content[index],
- fparser1.typedecl_statements.Implicit) or\
- isinstance(self.root.content[index],
- fparser1.statements.Comment):
- if isinstance(self.root.content[index],
- fparser1.typedecl_statements.Implicit):
- end_index = index + 1
- break
- else:
- index = index + 1
- return end_index
-
-
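
The ``position`` argument documented above is easiest to see in use. The
following is an illustrative sketch only (not part of this change; the module,
subroutine and call names are placeholders) showing the 'auto', relative and
'first' placements:

.. code-block:: python

    from psyclone.f2pygen import CallGen, CommentGen, ModuleGen, SubroutineGen

    mod = ModuleGen(name="demo_mod")
    sub = SubroutineGen(mod, name="demo_sub")
    mod.add(sub)

    first_call = CallGen(sub, name="setup")
    sub.add(first_call)                                   # 'auto' placement
    sub.add(CallGen(sub, name="compute"),
            position=["after", first_call.root])          # relative to a sibling
    sub.add(CommentGen(sub, " start of demo_sub"),
            position=["first"])                           # top of the routine body
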
-class PSyIRGen(BaseGen):
- ''' Create a Fortran block of code that comes from a given PSyIR tree.
-
- :param parent: node in AST to which we are adding the PSyIR block.
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
- :param content: the PSyIR tree we are adding.
- :type content: :py:class:`psyclone.psyir.nodes.Node`
-
- '''
-
- def __init__(self, parent, content):
- # Import FortranWriter here to avoid circular-dependency
- # pylint: disable=import-outside-toplevel
- from psyclone.psyir.backend.fortran import FortranWriter
- # We need the Config object in order to see whether or not to disable
- # the validation performed in the PSyIR backend.
- config = Config.get()
-
- # Use the PSyIR Fortran backend to generate Fortran code of the
- # supplied PSyIR tree and pass the resulting code to the fparser1
- # Fortran parser.
- fortran_writer = FortranWriter(
- check_global_constraints=config.backend_checks_enabled)
- reader = FortranStringReader(fortran_writer(content),
- ignore_comments=False)
- # Set reader as free form, strict
- reader.set_format(FortranFormat(True, True))
- fparser1_parser = FortranParser(reader, ignore_comments=False)
- fparser1_parser.parse()
-
-        # If the parsed content contains more than one node, add all but the
-        # last one as siblings of self. This is done because self can only
-        # represent a single node.
- for fparser_node in fparser1_parser.block.content[:-1]:
- f2pygen_node = BaseGen(parent, fparser_node)
- f2pygen_node.root.parent = parent.root
- parent.add(f2pygen_node)
-
- # Update this f2pygen node to be equivalent to the last of the
- # fparser nodes that represent the provided content.
- BaseGen.__init__(self, parent, fparser1_parser.block.content[-1])
- self.root.parent = parent.root
-
-
-class ModuleGen(ProgUnitGen):
- ''' create a fortran module '''
- def __init__(self, name="", contains=True, implicitnone=True):
- from fparser import api
-
- code = '''\
-module vanilla
-'''
- if contains:
- code += '''\
-contains
-'''
- code += '''\
-end module vanilla
-'''
- tree = api.parse(code, ignore_comments=False)
- module = tree.content[0]
- module.name = name
- endmod = module.content[len(module.content)-1]
- endmod.name = name
- ProgUnitGen.__init__(self, None, module)
- if implicitnone:
- self.add(ImplicitNoneGen(self))
-
- def add_raw_subroutine(self, content):
- ''' adds a subroutine to the module that is a raw f2py parse object.
- This is used for inlining kernel subroutines into a module.
- '''
- from psyclone.parse.kernel import KernelProcedure
- if not isinstance(content, KernelProcedure):
- raise Exception(
- "Expecting a KernelProcedure type but received " +
- str(type(content)))
- content.ast.parent = self.root
- # add content after any existing subroutines
- index = len(self.root.content) - 1
- self.root.content.insert(index, content.ast)
-
-
-class CommentGen(BaseGen):
- ''' Create a Fortran Comment '''
- def __init__(self, parent, content):
- '''
- :param parent: node in AST to which to add the Comment as a child
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
- :param str content: the content of the comment
- '''
- reader = FortranStringReader("! content\n", ignore_comments=False)
- reader.set_format(FortranFormat(True, True)) # free form, strict
- subline = reader.next()
-
- my_comment = Comment(parent.root, subline)
- my_comment.content = content
-
- BaseGen.__init__(self, parent, my_comment)
-
-
-class DirectiveGen(BaseGen):
- '''
- Class for creating a Fortran directive, e.g. OpenMP or OpenACC.
-
- :param parent: node in AST to which to add directive as a child.
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
- :param str language: the type of directive (e.g. OMP or ACC).
- :param str position: "end" if this is the end of a directive block.
- :param str directive_type: the directive itself (e.g. "PARALLEL DO").
- :param str content: any additional arguments to add to the directive \
- (e.g. "PRIVATE(ji)").
-
- :raises RuntimeError: if an unrecognised directive language is specified.
- '''
- def __init__(self, parent, language, position, directive_type, content=""):
- self._supported_languages = ["omp", "acc"]
- self._language = language
- self._directive_type = directive_type
-
- reader = FortranStringReader("! content\n", ignore_comments=False)
- reader.set_format(FortranFormat(True, True)) # free form, strict
- subline = reader.next()
-
- if language == "omp":
- my_comment = OMPDirective(parent.root, subline, position,
- directive_type)
- my_comment.content = "$omp"
- elif language == "acc":
- my_comment = ACCDirective(parent.root, subline, position,
- directive_type)
- my_comment.content = "$acc"
- else:
- raise RuntimeError(
- f"Error, unsupported directive language. Expecting one of "
- f"{self._supported_languages} but found '{language}'")
- if position == "end":
- my_comment.content += " end"
- my_comment.content += " " + directive_type
- if content != "":
- my_comment.content += " " + content
-
- BaseGen.__init__(self, parent, my_comment)
-
-
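
A brief sketch of how the directive classes above are driven (illustrative
only, not part of this change; the loop variable and bounds are placeholders):

.. code-block:: python

    from psyclone.f2pygen import DirectiveGen, DoGen, ModuleGen, SubroutineGen

    mod = ModuleGen(name="omp_demo_mod")
    sub = SubroutineGen(mod, name="omp_demo")
    mod.add(sub)

    # An OpenMP 'parallel do' region wrapped around a simple loop.
    sub.add(DirectiveGen(sub, "omp", "begin", "parallel do", "private(ji)"))
    sub.add(DoGen(sub, "ji", "1", "n"))
    sub.add(DirectiveGen(sub, "omp", "end", "parallel do"))
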
-class ImplicitNoneGen(BaseGen):
- ''' Generate a Fortran 'implicit none' statement '''
- def __init__(self, parent):
- '''
- :param parent: node in AST to which to add 'implicit none' as a child
- :type parent: :py:class:`psyclone.f2pygen.ModuleGen` or
- :py:class:`psyclone.f2pygen.SubroutineGen`
-
- :raises Exception: if `parent` is not a ModuleGen or SubroutineGen
- '''
- if not isinstance(parent, ModuleGen) and not isinstance(parent,
- SubroutineGen):
- raise Exception(
- f"The parent of ImplicitNoneGen must be a module or a "
- f"subroutine, but found {type(parent)}")
- reader = FortranStringReader("IMPLICIT NONE\n")
- reader.set_format(FortranFormat(True, True)) # free form, strict
- subline = reader.next()
-
- from fparser.one.typedecl_statements import Implicit
- my_imp_none = Implicit(parent.root, subline)
-
- BaseGen.__init__(self, parent, my_imp_none)
-
-
-class SubroutineGen(ProgUnitGen):
- ''' Generate a Fortran subroutine '''
- def __init__(self, parent, name="", args=None, implicitnone=False):
- '''
- :param parent: node in AST to which to add Subroutine as a child
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
- :param str name: name of the Fortran subroutine
- :param list args: list of arguments accepted by the subroutine
- :param bool implicitnone: whether or not we should specify
- "implicit none" for the body of this
- subroutine
- '''
- reader = FortranStringReader(
- "subroutine vanilla(vanilla_arg)\nend subroutine")
- reader.set_format(FortranFormat(True, True)) # free form, strict
- subline = reader.next()
- endsubline = reader.next()
-
- from fparser.one.block_statements import Subroutine, EndSubroutine
- self._sub = Subroutine(parent.root, subline)
- self._sub.name = name
- if args is None:
- args = []
- self._sub.args = args
- endsub = EndSubroutine(self._sub, endsubline)
- self._sub.content.append(endsub)
- ProgUnitGen.__init__(self, parent, self._sub)
- if implicitnone:
- self.add(ImplicitNoneGen(self))
-
- @property
- def args(self):
- ''' Returns the list of arguments of this subroutine '''
- return self._sub.args
-
- @args.setter
- def args(self, namelist):
-        ''' Set the subroutine arguments to the values in the provided list. '''
- self._sub.args = namelist
-
-
-class CallGen(BaseGen):
- ''' Generates a Fortran call of a subroutine '''
- def __init__(self, parent, name="", args=None):
- '''
- :param parent: node in AST to which to add CallGen as a child
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
- :param str name: the name of the routine to call
- :param list args: list of arguments to pass to the call
- '''
- reader = FortranStringReader("call vanilla(vanilla_arg)")
- reader.set_format(FortranFormat(True, True)) # free form, strict
- myline = reader.next()
-
- from fparser.one.block_statements import Call
- self._call = Call(parent.root, myline)
- self._call.designator = name
- if args is None:
- args = []
- self._call.items = args
-
- BaseGen.__init__(self, parent, self._call)
-
-
-class UseGen(BaseGen):
- ''' Generate a Fortran use statement '''
- def __init__(self, parent, name="", only=False, funcnames=None):
- '''
- :param parent: node in AST to which to add UseGen as a child
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
- :param str name: name of the module to USE
- :param bool only: whether this USE has an ONLY clause
- :param list funcnames: list of names to follow ONLY clause
- '''
- reader = FortranStringReader("use kern,only : func1_kern=>func1")
- reader.set_format(FortranFormat(True, True)) # free form, strict
- myline = reader.next()
- root = parent.root
- from fparser.one.block_statements import Use
- use = Use(root, myline)
- use.name = name
- use.isonly = only
- if funcnames is None:
- funcnames = []
- use.isonly = False
- local_funcnames = funcnames[:]
- use.items = local_funcnames
- BaseGen.__init__(self, parent, use)
-
-
-def adduse(name, parent, only=False, funcnames=None):
- '''
- Adds a use statement with the specified name to the supplied object.
- This routine is required when modifying an existing AST (e.g. when
-    modifying a kernel). The f2pygen classes are used when creating an AST
-    from scratch (for the PSy layer).
-
- :param str name: name of module to USE
- :param parent: node in fparser1 AST to which to add this USE as a child
- :type parent: :py:class:`fparser.one.block_statements.*`
- :param bool only: whether this USE has an "ONLY" clause
- :param list funcnames: list of quantities to follow the "ONLY" clause
-
- :returns: an fparser1 Use object
- :rtype: :py:class:`fparser.one.block_statements.Use`
- '''
- reader = FortranStringReader("use kern,only : func1_kern=>func1")
- reader.set_format(FortranFormat(True, True)) # free form, strict
- myline = reader.next()
-
- # find an appropriate place to add in our use statement
- while not isinstance(parent, (fparser1.block_statements.Program,
- fparser1.block_statements.Module,
- fparser1.block_statements.Subroutine)):
- parent = parent.parent
- use = fparser1.block_statements.Use(parent, myline)
- use.name = name
- use.isonly = only
- if funcnames is None:
- funcnames = []
- use.isonly = False
- use.items = funcnames
-
- parent.content.insert(0, use)
- return use
-
-
-class AllocateGen(BaseGen):
- ''' Generates a Fortran allocate statement '''
- def __init__(self, parent, content, mold=None):
- '''
- :param parent: node to which to add this ALLOCATE as a child
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
- :param content: string or list of variables to allocate
- :type content: list of strings or a single string
- :param mold: A string to be used as the 'mold' parameter of ALLOCATE.
- :type mold: str or None.
-
- :raises RuntimeError: if `content` is not of correct type
- '''
- reader = FortranStringReader("allocate(dummy)")
- reader.set_format(FortranFormat(True, False)) # free form, strict
- myline = reader.next()
- self._decl = fparser1.statements.Allocate(parent.root, myline)
- if isinstance(content, str):
- self._decl.items = [content]
- elif isinstance(content, list):
- self._decl.items = content
- else:
- raise RuntimeError(
- f"AllocateGen expected the content argument to be a str or"
- f" a list, but found {type(content)}")
- if mold:
- self._decl.items.append(f"mold={mold}")
- BaseGen.__init__(self, parent, self._decl)
-
-
-class DeallocateGen(BaseGen):
- ''' Generates a Fortran deallocate statement '''
- def __init__(self, parent, content):
- '''
- :param parent: node to which to add this DEALLOCATE as a child
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
- :param content: string or list of variables to deallocate
- :type content: list of strings or a single string
-
- :raises RuntimeError: if `content` is not of correct type
- '''
- reader = FortranStringReader("deallocate(dummy)")
- reader.set_format(FortranFormat(True, False)) # free form, strict
- myline = reader.next()
- self._decl = fparser1.statements.Deallocate(parent.root, myline)
- if isinstance(content, str):
- self._decl.items = [content]
- elif isinstance(content, list):
- self._decl.items = content
- else:
- raise RuntimeError(
- f"DeallocateGen expected the content argument to be a str"
- f" or a list, but found {type(content)}")
- BaseGen.__init__(self, parent, self._decl)
-
-
-class BaseDeclGen(BaseGen, metaclass=abc.ABCMeta):
- '''
- Abstract base class for all types of Fortran declaration. Uses the
- abc module so it cannot be instantiated.
-
- :param parent: node to which to add this declaration as a child.
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
- :param str datatype: the (intrinsic) type for this declaration.
- :param list entity_decls: list of variable names to declare.
- :param str intent: the INTENT attribute of this declaration.
- :param bool pointer: whether or not this is a pointer declaration.
- :param str dimension: the DIMENSION specifier (i.e. the xx in \
- DIMENSION(xx)).
- :param bool allocatable: whether this declaration is for an \
- ALLOCATABLE quantity.
- :param bool save: whether this declaration has the SAVE attribute.
- :param bool target: whether this declaration has the TARGET attribute.
- :param initial_values: initial value to give each variable.
- :type initial_values: list of str with same no. of elements as entity_decls
- :param bool private: whether this declaration has the PRIVATE attribute \
- (default is False).
-
- :raises RuntimeError: if no variable names are specified.
- :raises RuntimeError: if the wrong number or type of initial values are \
- supplied.
- :raises RuntimeError: if initial values are supplied for a quantity that \
- is allocatable or has INTENT(in).
- :raises NotImplementedError: if initial values are supplied for array \
- variables (dimension != "").
-
- '''
- _decl = None # Will hold the declaration object created by sub-class
-
- def __init__(self, parent, datatype="", entity_decls=None, intent="",
- pointer=False, dimension="", allocatable=False,
- save=False, target=False, initial_values=None, private=False):
- if entity_decls is None:
- raise RuntimeError(
- "Cannot create a variable declaration without specifying the "
- "name(s) of the variable(s)")
-
- # If initial values have been supplied then check that there
- # are the right number of them and that they are consistent
- # with the type of the variable(s) being declared.
- if initial_values:
- if len(initial_values) != len(entity_decls):
- raise RuntimeError(
- f"f2pygen.DeclGen.init: number of initial values supplied "
- f"({len(initial_values)}) does not match the number of "
- f"variables to be declared ({len(entity_decls)}: "
- f"{entity_decls})")
- if allocatable:
- raise RuntimeError(
- f"Cannot specify initial values for variable(s) "
- f"{entity_decls} because they have the 'allocatable' "
- f"attribute.")
- if dimension:
- raise NotImplementedError(
- "Specifying initial values for array declarations is not "
- "currently supported.")
- if intent.lower() == "in":
- raise RuntimeError(
- f"Cannot assign (initial) values to variable(s) "
- f"{entity_decls} as they have INTENT(in).")
- # Call sub-class-provided implementation to check actual
- # values provided.
- self._check_initial_values(datatype, initial_values)
-
- # Store the list of variable names
- self._names = entity_decls[:]
-
- # Make a copy of entity_decls as we may modify it
- local_entity_decls = entity_decls[:]
- if initial_values:
- # Create a list of 2-tuples
- value_pairs = zip(local_entity_decls, initial_values)
- # Construct an assignment from each tuple
- self._decl.entity_decls = ["=".join(_) for _ in value_pairs]
- else:
- self._decl.entity_decls = local_entity_decls
-
- # Construct the list of attributes
- my_attrspec = []
- if intent != "":
- my_attrspec.append(f"intent({intent})")
- if pointer:
- my_attrspec.append("pointer")
- if target:
- my_attrspec.append("target")
- if allocatable:
- my_attrspec.append("allocatable")
- if save:
- my_attrspec.append("save")
- if private:
- my_attrspec.append("private")
- if dimension != "":
- my_attrspec.append(f"dimension({dimension})")
- self._decl.attrspec = my_attrspec
-
- super(BaseDeclGen, self).__init__(parent, self._decl)
-
- @property
- def names(self):
- '''
- :returns: the names of the variables being declared.
- :rtype: list of str.
- '''
- return self._names
-
- @property
- def root(self):
- '''
- :returns: the associated Type object.
- :rtype: \
- :py:class:`fparser.one.typedecl_statements.TypeDeclarationStatement`.
- '''
- return self._decl
-
- @abc.abstractmethod
- def _check_initial_values(self, dtype, values):
- '''
- Check that the supplied values are consistent with the requested
- data type. This method must be overridden in any sub-class of
- BaseDeclGen and is called by the BaseDeclGen constructor.
-
- :param str dtype: Fortran type.
- :param list values: list of values as strings.
- :raises RuntimeError: if the supplied values are not consistent \
- with the specified data type or are not \
- supported.
- '''
-
-
-class DeclGen(BaseDeclGen):
- '''Generates a Fortran declaration for variables of various intrinsic
- types (integer, real and logical). For character variables
- CharDeclGen should be used.
-
- :param parent: node to which to add this declaration as a child.
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
- :param str datatype: the (intrinsic) type for this declaration.
- :param list entity_decls: list of variable names to declare.
- :param str intent: the INTENT attribute of this declaration.
- :param bool pointer: whether or not this is a pointer declaration.
- :param str kind: the KIND attribute to use for this declaration.
- :param str dimension: the DIMENSION specifier (i.e. the xx in \
- DIMENSION(xx)).
- :param bool allocatable: whether this declaration is for an \
- ALLOCATABLE quantity.
- :param bool save: whether this declaration has the SAVE attribute.
- :param bool target: whether this declaration has the TARGET attribute.
- :param initial_values: initial value to give each variable.
- :type initial_values: list of str with same no. of elements as \
- entity_decls
- :param bool private: whether this declaration has the PRIVATE attribute \
- (default is False).
-
- :raises RuntimeError: if datatype is not one of DeclGen.SUPPORTED_TYPES.
-
- '''
- # The Fortran intrinsic types supported by this class
- SUPPORTED_TYPES = ["integer", "real", "logical"]
-
- def __init__(self, parent, datatype="", entity_decls=None, intent="",
- pointer=False, kind="", dimension="", allocatable=False,
- save=False, target=False, initial_values=None, private=False):
-
- dtype = datatype.lower()
- if dtype not in self.SUPPORTED_TYPES:
- raise RuntimeError(
- f"f2pygen.DeclGen.init: Only {self.SUPPORTED_TYPES} types are "
- f"currently supported and you specified '{datatype}'")
-
- fort_fmt = FortranFormat(True, False) # free form, strict
- if dtype == "integer":
- reader = FortranStringReader("integer :: vanilla")
- reader.set_format(fort_fmt)
- myline = reader.next()
- self._decl = fparser1.typedecl_statements.Integer(parent.root,
- myline)
- elif dtype == "real":
- reader = FortranStringReader("real :: vanilla")
- reader.set_format(fort_fmt)
- myline = reader.next()
- self._decl = fparser1.typedecl_statements.Real(parent.root, myline)
- elif dtype == "logical":
- reader = FortranStringReader("logical :: vanilla")
- reader.set_format(fort_fmt)
- myline = reader.next()
- self._decl = fparser1.typedecl_statements.Logical(parent.root,
- myline)
- else:
- # Defensive programming in case SUPPORTED_TYPES is added to
- # but not handled here
- raise InternalError(
- f"Type '{dtype}' is in DeclGen.SUPPORTED_TYPES "
- f"but not handled by constructor.")
-
- # Add any kind-selector
- if kind:
- self._decl.selector = ('', kind)
-
- super(DeclGen, self).__init__(parent=parent, datatype=datatype,
- entity_decls=entity_decls,
- intent=intent, pointer=pointer,
- dimension=dimension,
- allocatable=allocatable, save=save,
- target=target,
- initial_values=initial_values,
- private=private)
-
- def _check_initial_values(self, dtype, values):
- '''
- Check that the supplied values are consistent with the requested
- data type. Note that this checking is fairly basic and does not
- support a number of valid Fortran forms (e.g. arithmetic expressions
- involving constants or parameters).
-
- :param str dtype: Fortran intrinsic type.
- :param list values: list of values as strings.
- :raises RuntimeError: if the supplied values are not consistent \
- with the specified data type or are not \
- supported.
- '''
- from fparser.two.pattern_tools import abs_name, \
- abs_logical_literal_constant, abs_signed_int_literal_constant, \
- abs_signed_real_literal_constant
- if dtype == "logical":
- # Can be .true., .false. or a valid Fortran variable name
- for val in values:
- if not abs_logical_literal_constant.match(val) and \
- not abs_name.match(val):
- raise RuntimeError(
- f"Initial value of '{val}' for a logical variable is "
- f"invalid or unsupported")
- elif dtype == "integer":
-            # Can be an integer expression or a valid Fortran variable name
- for val in values:
- if not abs_signed_int_literal_constant.match(val) and \
- not abs_name.match(val):
- raise RuntimeError(
- f"Initial value of '{val}' for an integer variable is "
- f"invalid or unsupported")
- elif dtype == "real":
- # Can be a floating-point expression or a valid Fortran name
- for val in values:
- if not abs_signed_real_literal_constant.match(val) and \
- not abs_name.match(val):
- raise RuntimeError(
- f"Initial value of '{val}' for a real variable is "
- f"invalid or unsupported")
- else:
- # We should never get to here because we check that the type
- # is supported before calling this routine.
- raise InternalError(
- f"unsupported type '{dtype}' - should be "
- f"one of {DeclGen.SUPPORTED_TYPES}")
-
-
-class CharDeclGen(BaseDeclGen):
- '''
- Generates a Fortran declaration for character variables.
-
- :param parent: node to which to add this declaration as a child.
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`.
- :param list entity_decls: list of variable names to declare.
- :param str intent: the INTENT attribute of this declaration.
- :param bool pointer: whether or not this is a pointer declaration.
- :param str kind: the KIND attribute to use for this declaration.
- :param str dimension: the DIMENSION specifier (i.e. the xx in \
- DIMENSION(xx)).
- :param bool allocatable: whether this declaration is for an \
- ALLOCATABLE quantity.
- :param bool save: whether this declaration has the SAVE attribute.
- :param bool target: whether this declaration has the TARGET attribute.
- :param str length: expression to use for the (len=xx) selector.
- :param initial_values: list of initial values, one for each variable. \
- Each of these can be either a variable name or a literal, quoted \
- string (e.g. "'hello'"). Default is None.
- :type initial_values: list of str with same no. of elements as entity_decls
- :param bool private: whether this declaration has the PRIVATE attribute.
-
- '''
- def __init__(self, parent, entity_decls=None, intent="",
- pointer=False, kind="", dimension="", allocatable=False,
- save=False, target=False, length="", initial_values=None,
- private=False):
-
- reader = FortranStringReader(
- "character(len=vanilla_len) :: vanilla")
- reader.set_format(FortranFormat(True, False))
- myline = reader.next()
- self._decl = fparser1.typedecl_statements.Character(parent.root,
- myline)
- # Add character- and kind-selectors
- self._decl.selector = (length, kind)
-
- super(CharDeclGen, self).__init__(parent=parent,
- datatype="character",
- entity_decls=entity_decls,
- intent=intent, pointer=pointer,
- dimension=dimension,
- allocatable=allocatable, save=save,
- target=target,
- initial_values=initial_values,
- private=private)
-
- def _check_initial_values(self, _, values):
- '''
- Check that initial values provided for a Character declaration are
- valid.
- :param _: for consistency with base-class interface.
- :param list values: list of strings containing initial values.
- :raises RuntimeError: if any of the supplied initial values is not \
- valid for a Character declaration.
- '''
- from fparser.two.pattern_tools import abs_name
- # Can be a quoted string or a valid Fortran name
- # TODO it would be nice if fparser.two.pattern_tools provided
- # e.g. abs_character_literal_constant
- for val in values:
- if not abs_name.match(val):
- if not ((val.startswith("'") and val.endswith("'")) or
- (val.startswith('"') and val.endswith('"'))):
- raise RuntimeError(
- f"Initial value of '{val}' for a character variable "
- f"is invalid or unsupported")
-
-
-class TypeDeclGen(BaseDeclGen):
- '''
- Generates a Fortran declaration for variables of a derived type.
-
- :param parent: node to which to add this declaration as a child.
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
- :param str datatype: the type for this declaration.
- :param list entity_decls: list of variable names to declare.
- :param str intent: the INTENT attribute of this declaration.
- :param bool pointer: whether or not this is a pointer declaration.
- :param str dimension: the DIMENSION specifier (i.e. the xx in \
- DIMENSION(xx)).
- :param bool allocatable: whether this declaration is for an \
- ALLOCATABLE quantity.
- :param bool save: whether this declaration has the SAVE attribute.
- :param bool target: whether this declaration has the TARGET attribute.
- :param bool is_class: whether this is a class rather than type declaration.
- :param bool private: whether or not this declaration has the PRIVATE \
- attribute. (Defaults to False.)
- '''
- def __init__(self, parent, datatype="", entity_decls=None, intent="",
- pointer=False, dimension="", allocatable=False,
- save=False, target=False, is_class=False, private=False):
- if is_class:
- reader = FortranStringReader("class(vanillatype) :: vanilla")
- else:
- reader = FortranStringReader("type(vanillatype) :: vanilla")
- reader.set_format(FortranFormat(True, False)) # free form, strict
- myline = reader.next()
- if is_class:
- self._decl = fparser1.typedecl_statements.Class(parent.root,
- myline)
- else:
- self._decl = fparser1.typedecl_statements.Type(parent.root, myline)
- self._decl.selector = ('', datatype)
-
- super(TypeDeclGen, self).__init__(parent=parent, datatype=datatype,
- entity_decls=entity_decls,
- intent=intent, pointer=pointer,
- dimension=dimension,
- allocatable=allocatable, save=save,
- target=target, private=private)
-
- def _check_initial_values(self, _type, _values):
- '''
- Simply here to override abstract method in base class. It is an
- error if we ever call it because we don't support initial values for
- declarations of derived types.
-
- :param str _type: the type of the Fortran variable to be declared.
- :param list _values: list of str containing initialisation \
- values/expressions.
- :raises InternalError: because specifying initial values for \
- variables of derived type is not supported.
- '''
- raise InternalError(
- "This method should not have been called because initial values "
- "for derived-type declarations are not supported.")
-
-
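
The declaration classes above are typically used as in the following sketch
(illustrative only, not part of this change; the kind name 'i_def' and the
variable names are placeholders):

.. code-block:: python

    from psyclone.f2pygen import CharDeclGen, DeclGen, ModuleGen, SubroutineGen

    mod = ModuleGen(name="decl_demo_mod")
    sub = SubroutineGen(mod, name="decl_demo")
    mod.add(sub)

    # Intrinsic-type declaration with a kind selector and an initial value.
    sub.add(DeclGen(sub, datatype="integer", kind="i_def",
                    entity_decls=["counter"], initial_values=["0"]))
    # Character declaration with an explicit length and a quoted initial value.
    sub.add(CharDeclGen(sub, length="10", entity_decls=["label"],
                        initial_values=["'demo'"]))
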
-class TypeCase(Case):
-    ''' Generate a Fortran "TYPE IS" clause for use in a SELECT TYPE block '''
- # TODO can this whole class be deleted?
- def tofortran(self, isfix=None):
- tab = self.get_indent_tab(isfix=isfix)
- type_str = 'TYPE IS'
- if self.items:
- item_list = []
- for item in self.items:
- item_list.append((' : '.join(item)).strip())
- type_str += f" ( {(', '.join(item_list))} )"
- else:
- type_str = 'CLASS DEFAULT'
- if self.name:
- type_str += ' ' + self.name
- return tab + type_str
-
-
-class SelectionGen(BaseGen):
- ''' Generate a Fortran SELECT block '''
- # TODO can this whole class be deleted?
-
- def __init__(self, parent, expr="UNSET", typeselect=False):
- '''
- Construct a SelectionGen for creating a SELECT block
-
- :param parent: node to which to add this select block as a child
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
- :param str expr: the CASE expression
- :param bool typeselect: whether or not this is a SELECT TYPE rather
- than a SELECT CASE
- '''
- self._typeselect = typeselect
- reader = FortranStringReader(
- "SELECT CASE (x)\nCASE (1)\nCASE DEFAULT\nEND SELECT")
- reader.set_format(FortranFormat(True, True)) # free form, strict
- select_line = reader.next()
- self._case_line = reader.next()
- self._case_default_line = reader.next()
- end_select_line = reader.next()
- if self._typeselect:
- select = SelectType(parent.root, select_line)
- else:
- select = SelectCase(parent.root, select_line)
- endselect = EndSelect(select, end_select_line)
- select.expr = expr
- select.content.append(endselect)
- BaseGen.__init__(self, parent, select)
-
- def addcase(self, casenames, content=None):
- ''' Add a case to this select block '''
- if content is None:
- content = []
- if self._typeselect:
- case = TypeCase(self.root, self._case_line)
- else:
- case = Case(self.root, self._case_line)
- case.items = [casenames]
- self.root.content.insert(0, case)
- idx = 0
- for stmt in content:
- idx += 1
- self.root.content.insert(idx, stmt.root)
-
- def adddefault(self):
- ''' Add the default case to this select block '''
- if self._typeselect:
- case_default = TypeCase(self.root, self._case_default_line)
- else:
- case_default = Case(self.root, self._case_default_line)
- self.root.content.insert(len(self.root.content)-1, case_default)
-
-
-class DoGen(BaseGen):
- ''' Create a Fortran Do loop '''
- def __init__(self, parent, variable_name, start, end, step=None):
- '''
- :param parent: the node to which to add this do loop as a child
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
- :param str variable_name: the name of the loop variable
- :param str start: start value for Do loop
- :param str end: upper-limit of Do loop
- :param str step: increment to use in Do loop
- '''
- reader = FortranStringReader("do i=1,n\nend do")
- reader.set_format(FortranFormat(True, True)) # free form, strict
- doline = reader.next()
- enddoline = reader.next()
- dogen = fparser1.block_statements.Do(parent.root, doline)
- dogen.loopcontrol = variable_name + "=" + start + "," + end
- if step is not None:
- dogen.loopcontrol = dogen.loopcontrol + "," + step
- enddo = fparser1.block_statements.EndDo(dogen, enddoline)
- dogen.content.append(enddo)
-
- BaseGen.__init__(self, parent, dogen)
-
- def add(self, content, position=None, bubble_up=False):
- if position is None:
- position = ["auto"]
-
- if position[0] == "auto" and bubble_up: # pragma: no cover
- # There's currently no case where a bubbled-up statement
- # will live within a do loop so bubble it up again.
- self.parent.add(content, bubble_up=True)
- return
-
- if position[0] == "auto" or position[0] == "append":
- if (position[0] == "auto" and
- bubble_up_type(content)): # pragma: no cover
- # use and declaration statements cannot appear in a do loop
- # so pass on to parent
- self.parent.add(content, bubble_up=True)
- return
- else:
- # append at the end of the loop. This is not a simple
- # append as the last element in the loop is the "end
- # do" so we insert at the penultimate location
- BaseGen.add(self, content,
- position=["insert", len(self.root.content)-1])
- else:
- BaseGen.add(self, content, position=position)
-
-
-class IfThenGen(BaseGen):
- ''' Generate a fortran if, then, end if statement. '''
-
- def __init__(self, parent, clause):
- '''
- :param parent: Node to which to add this IfThen as a child
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
- :param str clause: the condition, xx, to evaluate in the if(xx)then
- '''
- reader = FortranStringReader("if (dummy) then\nend if")
- reader.set_format(FortranFormat(True, True)) # free form, strict
- ifthenline = reader.next()
- endifline = reader.next()
-
- my_if = fparser1.block_statements.IfThen(parent.root, ifthenline)
- my_if.expr = clause
- my_endif = fparser1.block_statements.EndIfThen(my_if, endifline)
- my_if.content.append(my_endif)
-
- BaseGen.__init__(self, parent, my_if)
-
- def add(self, content, position=None):
- if position is None:
- position = ["auto"]
- if position[0] == "auto" or position[0] == "append":
- if position[0] == "auto" and bubble_up_type(content):
- # use and declaration statements cannot appear in an if
- # block so pass on (bubble-up) to parent
- self.parent.add(content, bubble_up=True)
- else:
-                # append at the end of the if block. This is not a simple
- # append as the last element in the if is the "end if"
- # so we insert at the penultimate location
- BaseGen.add(self, content,
- position=["insert", len(self.root.content)-1])
- else:
- BaseGen.add(self, content, position=position)
-
-
-class AssignGen(BaseGen):
- ''' Generates a Fortran statement where a value is assigned to a
- variable quantity '''
-
- def __init__(self, parent, lhs="", rhs="", pointer=False):
- '''
- :param parent: the node to which to add this assignment as a child
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
- :param str lhs: the LHS of the assignment expression
- :param str rhs: the RHS of the assignment expression
- :param bool pointer: whether or not this is a pointer assignment
- '''
- if pointer:
- reader = FortranStringReader("lhs=>rhs")
- else:
- reader = FortranStringReader("lhs=rhs")
- reader.set_format(FortranFormat(True, True)) # free form, strict
- myline = reader.next()
- if pointer:
- self._assign = fparser1.statements.PointerAssignment(parent.root,
- myline)
- else:
- self._assign = fparser1.statements.Assignment(parent.root, myline)
- self._assign.expr = rhs
- self._assign.variable = lhs
- BaseGen.__init__(self, parent, self._assign)
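
Finally, the output path for all of the classes removed above is str() of the
fparser1 tree held in ``root``. This is a sketch only (names are placeholders,
and it assumes fparser1's usual string rendering); it is this rendering path
that the rest of the patch replaces with the PSyIR Fortran backend:

.. code-block:: python

    from psyclone.f2pygen import AssignGen, DeclGen, ModuleGen, SubroutineGen

    mod = ModuleGen(name="render_demo_mod")
    sub = SubroutineGen(mod, name="render_demo", args=["a"])
    mod.add(sub)
    sub.add(DeclGen(sub, datatype="real", intent="inout", entity_decls=["a"]))
    sub.add(AssignGen(sub, lhs="a", rhs="0.0"))

    print(str(mod.root))  # render the generated Fortran source
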
diff --git a/src/psyclone/gen_kernel_stub.py b/src/psyclone/gen_kernel_stub.py
index 77db44ffd2..76593d7f4b 100644
--- a/src/psyclone/gen_kernel_stub.py
+++ b/src/psyclone/gen_kernel_stub.py
@@ -49,6 +49,7 @@
from psyclone.errors import GenerationError
from psyclone.parse.utils import ParseError
from psyclone.configuration import Config, LFRIC_API_NAMES
+from psyclone.psyir.backend.fortran import FortranWriter
def generate(filename, api=""):
@@ -60,13 +61,13 @@ def generate(filename, api=""):
Kernel Metadata must be presented in the standard Kernel
format.
- :param str filename: the name of the file for which to create a \
- kernel stub for.
- :param str api: the name of the API for which to create a kernel \
- stub. Must be one of the supported stub APIs.
+ :param str filename: the name of the file for which to create a
+        kernel stub.
+ :param str api: the name of the API for which to create a kernel
+ stub. Must be one of the supported stub APIs.
- :returns: root of fparser1 parse tree for the stub routine.
- :rtype: :py:class:`fparser.one.block_statements.Module`
+    :returns: the Fortran kernel stub generated from the supplied metadata.
+ :rtype: str
:raises GenerationError: if an invalid stub API is specified.
:raises IOError: if filename does not specify a file.
@@ -97,4 +98,4 @@ def generate(filename, api=""):
kernel = LFRicKern()
kernel.load_meta(metadata)
- return kernel.gen_stub
+ return FortranWriter()(kernel.gen_stub)
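
For context, a sketch of how the revised ``generate()`` is now used (the
kernel file name is a placeholder and 'lfric' is assumed to be an accepted
API name): the stub is returned as a Fortran string produced by
``FortranWriter`` rather than as an fparser1 parse tree, so callers simply
write the returned text to a file.

.. code-block:: python

    from psyclone.gen_kernel_stub import generate

    # 'testkern_mod.F90' is a placeholder LFRic kernel file.
    stub_source = generate("testkern_mod.F90", api="lfric")

    with open("testkern_mod_stub.f90", "w", encoding="utf-8") as fout:
        fout.write(stub_source)
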
diff --git a/src/psyclone/gocean1p0.py b/src/psyclone/gocean1p0.py
index 75116cbe46..8bd014680e 100644
--- a/src/psyclone/gocean1p0.py
+++ b/src/psyclone/gocean1p0.py
@@ -58,8 +58,6 @@
from psyclone.domain.gocean import GOceanConstants, GOSymbolTable
from psyclone.errors import GenerationError, InternalError
import psyclone.expression as expr
-from psyclone.f2pygen import (
- DeclGen, UseGen, ModuleGen, SubroutineGen, TypeDeclGen, PSyIRGen)
from psyclone.parse.algorithm import Arg
from psyclone.parse.kernel import Descriptor, KernelType
from psyclone.parse.utils import ParseError
@@ -107,23 +105,6 @@ def __init__(self, invoke_info):
# Create invokes
self._invokes = GOInvokes(invoke_info.calls, self)
- @property
- def gen(self):
- '''
- Generate PSy code for the GOcean api v.1.0.
-
- :rtype: ast
-
- '''
- # create an empty PSy layer module
- psy_module = ModuleGen(self.name)
- # include the kind_params module
- psy_module.add(UseGen(psy_module, name="kind_params_mod"))
- # include the field_mod module
- psy_module.add(UseGen(psy_module, name="field_mod"))
- self.invokes.gen_code(psy_module)
- return psy_module.root
-
class GOInvokes(Invokes):
'''
@@ -167,45 +148,11 @@ def __init__(self, alg_calls, psy):
# those seen so far
index_offsets.append(kern_call.index_offset)
- def gen_code(self, parent):
- '''
- GOcean redefines the Invokes.gen_code() to start using the PSyIR
- backend when possible. In cases where the backend can not be used yet
- (e.g. OpenCL and PSyDataNodes) the parent class will be called. This
- is a temporary workaround to avoid modifying the generator file while
- other APIs still use the f2pygen module for code generation.
- Once the PSyIR backend has generated an output, this is added into a
- f2pygen PSyIRGen block in the f2pygen AST for each Invoke in the
- PSy layer.
-
- :param parent: the parent node in the f2pygen AST to which to add \
- content.
- :type parent: `psyclone.f2pygen.ModuleGen`
- '''
- if self.invoke_list:
- # We just need one invoke as they all have a common root.
- invoke = self.invoke_list[0]
-
- # Lower the GOcean PSyIR to language level so it can be visited
- # by the backends
- invoke.schedule.parent.lower_to_language_level()
- # Then insert it into a f2pygen AST as a PSyIRGen node.
- # Note that other routines besides the Invoke could have been
- # inserted during the lowering (e.g. module-inlined kernels),
- # so have to iterate over all current children of parent.
- for child in invoke.schedule.parent.children:
- parent.add(PSyIRGen(parent, child))
-
class GOInvoke(Invoke):
'''
The GOcean specific invoke class. This passes the GOcean specific
schedule class to the base class so it creates the one we require.
- A set of GOcean infrastructure reserved names are also passed to
- ensure that there are no name clashes. Also overrides the gen_code
- method so that we generate GOcean specific invocation code and
- provides three methods which separate arguments that are arrays from
- arguments that are {integer, real} scalars.
:param alg_invocation: Node in the AST describing the invoke call.
:type alg_invocation: :py:class:`psyclone.parse.InvokeCall`
@@ -225,84 +172,6 @@ def __init__(self, alg_invocation, idx, invokes):
for loop in self.schedule.loops():
loop.create_halo_exchanges()
- @property
- def unique_args_arrays(self):
- ''' find unique arguments that are arrays (defined as those that are
- field objects as opposed to scalars or properties of the grid). '''
- result = []
- for call in self._schedule.kernels():
- for arg in call.arguments.args:
- if arg.argument_type == 'field' and arg.name not in result:
- result.append(arg.name)
- return result
-
- @property
- def unique_args_iscalars(self):
- '''
- :returns: the unique arguments that are scalars of type integer \
- (defined as those that are i_scalar 'space').
- :rtype: list of str.
-
- '''
- result = []
- for call in self._schedule.kernels():
- for arg in args_filter(call.arguments.args, arg_types=["scalar"],
- include_literals=False):
- if arg.space.lower() == "go_i_scalar" and \
- arg.name not in result:
- result.append(arg.name)
- return result
-
- def gen_code(self, parent):
- # pylint: disable=too-many-locals
- '''
- Generates GOcean specific invocation code (the subroutine called
- by the associated invoke call in the algorithm layer). This
- consists of the PSy invocation subroutine and the declaration of
- its arguments.
-
- :param parent: the node in the generated AST to which to add content.
- :type parent: :py:class:`psyclone.f2pygen.ModuleGen`
-
- '''
- # TODO 1010: GOcean doesn't use this method anymore and it can be
- # deleted, but some tests still call it directly.
-
- # Create the subroutine
- invoke_sub = SubroutineGen(parent, name=self.name,
- args=self.psy_unique_var_names)
- parent.add(invoke_sub)
-
- # Generate the code body of this subroutine
- self.schedule.gen_code(invoke_sub)
-
- # Add the subroutine argument declarations for fields
- if self.unique_args_arrays:
- my_decl_arrays = TypeDeclGen(invoke_sub, datatype="r2d_field",
- intent="inout",
- entity_decls=self.unique_args_arrays)
- invoke_sub.add(my_decl_arrays)
-
- # Add the subroutine argument declarations for integer and real scalars
- i_args = []
- for argument in self.schedule.symbol_table.argument_datasymbols:
- if argument.name in self.unique_args_iscalars:
- i_args.append(argument.name)
-
- if i_args:
- my_decl_iscalars = DeclGen(invoke_sub, datatype="INTEGER",
- intent="inout",
- entity_decls=i_args)
- invoke_sub.add(my_decl_iscalars)
-
- # Add remaining local scalar symbols using the symbol table
- for symbol in self.schedule.symbol_table.automatic_datasymbols:
- if isinstance(symbol.datatype, ScalarType):
- invoke_sub.add(DeclGen(
- invoke_sub,
- datatype=symbol.datatype.intrinsic.name,
- entity_decls=[symbol.name]))
-
class GOInvokeSchedule(InvokeSchedule):
''' The GOcean specific InvokeSchedule sub-class. We call the base class
@@ -315,9 +184,6 @@ class GOInvokeSchedule(InvokeSchedule):
layer.
:type alg_calls: Optional[list of
:py:class:`psyclone.parse.algorithm.KernelCall`]
- :param reserved_names: optional list of names that are not allowed in the \
- new InvokeSchedule SymbolTable.
- :type reserved_names: list of str
:param parent: the parent of this node in the PSyIR.
:type parent: :py:class:`psyclone.psyir.nodes.Node`
@@ -325,14 +191,12 @@ class GOInvokeSchedule(InvokeSchedule):
# Textual description of the node.
_text_name = "GOInvokeSchedule"
- def __init__(self, symbol, alg_calls=None, reserved_names=None,
- parent=None, **kwargs):
+ def __init__(self, symbol, alg_calls=None, parent=None, **kwargs):
if not alg_calls:
alg_calls = []
InvokeSchedule.__init__(self, symbol, GOKernCallFactory,
- GOBuiltInCallFactory,
- alg_calls, reserved_names, parent=parent,
- **kwargs)
+ GOBuiltInCallFactory, alg_calls,
+ parent=parent, **kwargs)
# pylint: disable=too-many-instance-attributes
@@ -962,7 +826,7 @@ def lower_bound(self):
def _validate_loop(self):
''' Validate that the GOLoop has all necessary boundaries information
- to lower or gen_code to f2pygen.
+ to lower to language-level PSyIR.
:raises GenerationError: if we can't find an enclosing Schedule.
:raises GenerationError: if this loop does not enclose a Kernel.
@@ -997,19 +861,6 @@ def _validate_loop(self):
f" '{kernel.index_offset}' which does "
f"not match '{index_offset}'.")
- def gen_code(self, parent):
- ''' Create the f2pygen AST for this loop (and update the PSyIR
- representing the loop bounds if necessary).
-
- :param parent: the node in the f2pygen AST to which to add content.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
- '''
- # Check that it is a properly formed GOLoop
- self._validate_loop()
-
- super().gen_code(parent)
-
# pylint: disable=too-few-public-methods
class GOBuiltInCallFactory():
@@ -1080,11 +931,10 @@ class GOKern(CodedKern):
'''
Stores information about GOcean Kernels as specified by the Kernel
metadata. Uses this information to generate appropriate PSy layer
- code for the Kernel instance. Specialises the gen_code method to
- create the appropriate GOcean specific kernel call.
+ code for the Kernel instance.
- :param call: information on the way in which this kernel is called \
- from the Algorithm layer.
+ :param call: information on the way in which this kernel is called
+ from the Algorithm layer.
:type call: :py:class:`psyclone.parse.algorithm.KernelCall`
:param parent: optional node where the kernel call will be inserted.
:type parent: :py:class:`psyclone.psyir.nodes.Node`
diff --git a/src/psyclone/psyGen.py b/src/psyclone/psyGen.py
index a2fb3438f5..64c69f662a 100644
--- a/src/psyclone/psyGen.py
+++ b/src/psyclone/psyGen.py
@@ -48,18 +48,17 @@
from psyclone.configuration import Config, LFRIC_API_NAMES, GOCEAN_API_NAMES
from psyclone.core import AccessType
from psyclone.errors import GenerationError, InternalError, FieldNotFoundError
-from psyclone.f2pygen import (AllocateGen, AssignGen, CommentGen,
- DeclGen, DeallocateGen, DoGen, UseGen)
from psyclone.parse.algorithm import BuiltInCall
from psyclone.psyir.backend.fortran import FortranWriter
-from psyclone.psyir.nodes import (ArrayReference, Call, Container, Literal,
- Loop, Node, OMPDoDirective, Reference,
- Routine, Schedule, Statement, FileContainer)
+from psyclone.psyir.nodes import (
+ ArrayReference, Call, Container, Literal, Loop, Node, OMPDoDirective,
+ Reference, Directive, Routine, Schedule, Statement, Assignment,
+ IntrinsicCall, BinaryOperation, OMPParallelDirective, FileContainer)
from psyclone.psyir.symbols import (ArgumentInterface, ArrayType,
ContainerSymbol, DataSymbol,
- UnresolvedType,
+ UnresolvedType, REAL_TYPE,
ImportInterface, INTEGER_TYPE,
- RoutineSymbol, Symbol)
+ RoutineSymbol)
from psyclone.psyir.symbols.datatypes import UnsupportedFortranType
# The types of 'intent' that an argument to a Fortran subroutine
@@ -94,20 +93,20 @@ def object_index(alist, item):
raise ValueError(f"Item '{item}' not found in list: {alist}")
-def zero_reduction_variables(red_call_list, parent):
+def zero_reduction_variables(red_call_list):
'''zero all reduction variables associated with the calls in the call
list'''
if red_call_list:
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent, " Zero summation variables"))
- parent.add(CommentGen(parent, ""))
+ first = True
for call in red_call_list:
- call.zero_reduction_variable(parent)
- parent.add(CommentGen(parent, ""))
+ node = call.zero_reduction_variable()
+ if first:
+ node.append_preceding_comment(
+ "Zero summation variables")
+ first = False
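
As an aside for readers of this patch, the comment that used to be emitted
via ``CommentGen`` is now attached directly to the first zeroing assignment.
A minimal, illustrative sketch of how ``append_preceding_comment`` surfaces
in the backend output (the subroutine below is invented for the example)::

    from psyclone.psyir.frontend.fortran import FortranReader
    from psyclone.psyir.backend.fortran import FortranWriter
    from psyclone.psyir.nodes import Assignment

    psyir = FortranReader().psyir_from_source(
        "subroutine init(asum)\n"
        "  real, intent(out) :: asum\n"
        "  asum = 0.0\n"
        "end subroutine init\n")
    # Attach a comment to the assignment; the Fortran backend emits it as a
    # '!' line immediately before the statement.
    first = psyir.walk(Assignment)[0]
    first.append_preceding_comment("Zero summation variables")
    print(FortranWriter()(psyir))
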
-def args_filter(arg_list, arg_types=None, arg_accesses=None, arg_meshes=None,
- include_literals=True):
+def args_filter(arg_list, arg_types=None, arg_accesses=None, arg_meshes=None):
'''
Return all arguments in the supplied list that are of type
arg_types and with access in arg_accesses. If these are not set
@@ -122,8 +121,6 @@ def args_filter(arg_list, arg_types=None, arg_accesses=None, arg_meshes=None,
:py:class:`psyclone.core.access_type.AccessType`
:param arg_meshes: list of meshes that arguments must be on.
:type arg_meshes: list of str
- :param bool include_literals: whether or not to include literal arguments \
- in the returned list.
:returns: list of kernel arguments matching the requirements.
:rtype: list of :py:class:`psyclone.parse.kernel.Descriptor`
@@ -140,11 +137,6 @@ def args_filter(arg_list, arg_types=None, arg_accesses=None, arg_meshes=None,
if arg_meshes:
if argument.mesh not in arg_meshes:
continue
- if not include_literals:
- # We're not including literal arguments so skip this argument
- # if it is literal.
- if argument.is_literal:
- continue
arguments.append(argument)
return arguments
@@ -262,13 +254,43 @@ def name(self):
return "psy_"+self._name
@property
- @abc.abstractmethod
- def gen(self):
- '''Abstract base class for code generation function.
+ def gen(self) -> str:
+ '''
+ Generate PSy-layer code associated with this PSy object.
+
+ :returns: the generated Fortran source.
- :returns: root node of generated Fortran AST.
- :rtype: :py:class:`psyclone.psyir.nodes.Node`
'''
+        # Before calling the backend we need to add the Invoke
+        # initialisations and declarations. Since this modifies the PSyIR
+        # tree, we operate on a copy of the tree.
+        original_container = self.container
+        new_container = self.container.copy()
+        self._container = new_container
+        # We need to update each Invoke's internal reference to its
+        # Schedule. This could be avoided by making all PSy-layer classes
+        # PSyIR DSL nodes instead of using the
+        # PSy->Invokes->Invoke->InvokeSchedule classes.
+ for invsch in self.container.walk(InvokeSchedule):
+ invsch.invoke.schedule = invsch
+
+ # Now do the declarations/initialisation on the copied tree
+ for invoke in self.invokes.invoke_list:
+ invoke.setup_psy_layer_symbols()
+
+        # Use the PSyIR Fortran backend to generate Fortran code from the
+        # copied PSyIR tree.
+ config = Config.get()
+ fortran_writer = FortranWriter(
+ check_global_constraints=config.backend_checks_enabled,
+ disable_copy=True) # We already made the copy manually above
+ result = fortran_writer(new_container)
+
+ # Restore original container (see comment above)
+ self._container = original_container
+ for invsch in self.container.walk(InvokeSchedule):
+ invsch.invoke.schedule = invsch
+
+ return result
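
Since ``PSy.gen`` now returns Fortran source straight from the PSyIR
backend, the copy-then-write pattern above can be exercised on any PSyIR
tree. A minimal sketch, with an invented subroutine and assuming a standard
PSyclone installation::

    from psyclone.psyir.frontend.fortran import FortranReader
    from psyclone.psyir.backend.fortran import FortranWriter

    psyir = FortranReader().psyir_from_source(
        "subroutine work(x)\n"
        "  real, intent(inout) :: x\n"
        "  x = x + 1.0\n"
        "end subroutine work\n")
    # Write out a copy so that any changes made while lowering/visiting do
    # not leak back into the original tree (mirroring the manual copy in
    # the property above).
    print(FortranWriter()(psyir.copy()))
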
class Invokes():
@@ -337,24 +359,6 @@ def get(self, invoke_name):
raise RuntimeError(f"Cannot find an invoke named {search_list} "
f"in {list(self.names)}")
- def gen_code(self, parent):
- '''
- Create the f2pygen AST for each Invoke in the PSy layer.
-
- :param parent: the parent node in the AST to which to add content.
- :type parent: `psyclone.f2pygen.ModuleGen`
-
- :raises GenerationError: if an invoke_list schedule is not an \
- InvokeSchedule.
- '''
- for invoke in self.invoke_list:
- if not isinstance(invoke.schedule, InvokeSchedule):
- raise GenerationError(
- f"An invoke.schedule element of the invoke_list is a "
- f"'{type(invoke.schedule).__name__}', but it should be an "
- f"'InvokeSchedule'.")
- invoke.gen_code(parent)
-
class Invoke():
r'''Manage an individual invoke call.
@@ -369,14 +373,9 @@ class Invoke():
:param invokes: the Invokes instance that contains this Invoke \
instance.
:type invokes: :py:class:`psyclone.psyGen.Invokes`
- :param reserved_names: optional list of reserved names, i.e. names that \
- should not be used e.g. as a PSyclone-created \
- variable name.
- :type reserved_names: list of str
'''
- def __init__(self, alg_invocation, idx, schedule_class, invokes,
- reserved_names=None):
+ def __init__(self, alg_invocation, idx, schedule_class, invokes):
'''Construct an invoke object.'''
self._invokes = invokes
@@ -401,9 +400,6 @@ def __init__(self, alg_invocation, idx, schedule_class, invokes,
# use the position of the invoke
self._name = "invoke_" + str(idx)
- if not reserved_names:
- reserved_names = []
-
# Get a reference to the parent container, if any
container = None
if self.invokes:
@@ -414,7 +410,7 @@ def __init__(self, alg_invocation, idx, schedule_class, invokes,
schedule_symbol = RoutineSymbol(self._name)
self._schedule = schedule_class(schedule_symbol,
alg_invocation.kcalls,
- reserved_names, parent=container)
+ parent=container)
# Add the new Schedule to the top-level PSy Container
if container:
@@ -464,6 +460,18 @@ def invokes(self):
'''
return self._invokes
+ def setup_psy_layer_symbols(self):
+ ''' Declare, initialise and deallocate all symbols required by the
+ PSy-layer Invoke subroutine.
+
+        By default this does nothing; PSyKAL DSLs can specialise this
+        method.
+
+ Currently this is done at "lowering", but we could move it to psy-layer
+ creation time to have the symbols available in the transformation
+ scripts.
+
+ '''
+
@property
def name(self):
return self._name
@@ -476,13 +484,6 @@ def alg_unique_args(self):
def psy_unique_vars(self):
return self._psy_unique_vars
- @property
- def psy_unique_var_names(self):
- names = []
- for var in self._psy_unique_vars:
- names.append(var.name)
- return names
-
@property
def schedule(self):
return self._schedule
@@ -638,24 +639,6 @@ def unique_declns_by_intent(self, argument_types, intrinsic_type=None):
declns["inout"].append(arg)
return declns
- def gen(self):
- from psyclone.f2pygen import ModuleGen
- module = ModuleGen("container")
- self.gen_code(module)
- return module.root
-
- @abc.abstractmethod
- def gen_code(self, parent):
- '''
- Generates invocation code (the subroutine called by the associated
- invoke call in the algorithm layer). This consists of the PSy
- invocation subroutine and the declaration of its arguments.
-
- :param parent: the node in the generated AST to which to add content.
- :type parent: :py:class:`psyclone.f2pygen.ModuleGen`
-
- '''
-
class InvokeSchedule(Routine):
'''
@@ -691,16 +674,11 @@ class InvokeSchedule(Routine):
_text_name = "InvokeSchedule"
def __init__(self, symbol, KernFactory, BuiltInFactory, alg_calls=None,
- reserved_names=None, **kwargs):
+ **kwargs):
super().__init__(symbol, **kwargs)
self._invoke = None
- # Populate the Schedule Symbol Table with the reserved names.
- if reserved_names:
- for reserved in reserved_names:
- self.symbol_table.add(Symbol(reserved))
-
# We need to separate calls into loops (an iteration space really)
# and calls so that we can perform optimisations separately on the
# two entities.
@@ -712,14 +690,6 @@ def __init__(self, symbol, KernFactory, BuiltInFactory, alg_calls=None,
else:
self.addchild(KernFactory.create(call, parent=self))
- @property
- def symbol_table(self):
- '''
- :returns: Table containing symbol information for the schedule.
- :rtype: :py:class:`psyclone.psyir.symbols.SymbolTable`
- '''
- return self._symbol_table
-
@property
def invoke(self):
return self._invoke
@@ -747,33 +717,6 @@ def __str__(self):
result += "End " + self.coloured_name(False) + "\n"
return result
- def gen_code(self, parent):
- '''
- Generate the Nodes in the f2pygen AST for this schedule.
-
- :param parent: the parent Node (i.e. the enclosing subroutine) to \
- which to add content.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
- '''
- # Imported symbols promoted from Kernel imports are in the SymbolTable.
- # First aggregate all variables imported from the same module in a map.
- module_map = {}
- for imported_var in self.symbol_table.imported_symbols:
- module_name = imported_var.interface.container_symbol.name
- if module_name in module_map:
- module_map[module_name].append(imported_var.name)
- else:
- module_map[module_name] = [imported_var.name]
-
- # Then we can produce the UseGen statements without repeating modules
- for module_name, var_list in module_map.items():
- parent.add(UseGen(parent, name=module_name, only=True,
- funcnames=var_list))
-
- for entity in self.children:
- entity.gen_code(parent)
-
class GlobalSum(Statement):
'''
@@ -873,12 +816,6 @@ def __init__(self, field, check_dirty=True,
self._halo_depth = None
self._check_dirty = check_dirty
self._vector_index = vector_index
- # Keep a reference to the SymbolTable associated with the
- # InvokeSchedule.
- self._symbol_table = None
- isched = self.ancestor(InvokeSchedule)
- if isched:
- self._symbol_table = isched.symbol_table
@property
def vector_index(self):
@@ -1135,26 +1072,26 @@ def local_reduction_name(self):
# with the PSy-layer generation or relevant transformation.
return "l_" + self.reduction_arg.name
- def zero_reduction_variable(self, parent, position=None):
+ def zero_reduction_variable(self):
'''
Generate code to zero the reduction variable and to zero the local
reduction variable if one exists. The latter is used for reproducible
reductions, if specified.
- :param parent: the Node in the AST to which to add new code.
- :type parent: :py:class:`psyclone.psyir.nodes.Node`
- :param str position: where to position the new code in the AST.
+ TODO #514: This is only used by LFRic, but should be generalised,
+        ideally in psyir.nodes.omp/acc_directives.
:raises GenerationError: if the variable to zero is not a scalar.
- :raises GenerationError: if the reprod_pad_size (read from the \
- configuration file) is less than 1.
- :raises GenerationError: for a reduction into a scalar that is \
- neither 'real' nor 'integer'.
+ :raises GenerationError: if the reprod_pad_size (read from the
+ configuration file) is less than 1.
+ :raises GenerationError: for a reduction into a scalar that is
+ neither 'real' nor 'integer'.
'''
- if not position:
- position = ["auto"]
- var_name = self._reduction_arg.name
+ # pylint: disable-next=import-outside-toplevel
+ from psyclone.domain.common.psylayer import PSyLoop
+
+ variable_name = self._reduction_arg.name
local_var_name = self.local_reduction_name
var_arg = self._reduction_arg
# Check for a non-scalar argument
@@ -1165,66 +1102,72 @@ def zero_reduction_variable(self, parent, position=None):
# Generate the reduction variable
var_data_type = var_arg.intrinsic_type
if var_data_type == "real":
- data_value = "0.0"
+ data_type = REAL_TYPE
elif var_data_type == "integer":
- data_value = "0"
+ data_type = INTEGER_TYPE
else:
raise GenerationError(
f"Kern.zero_reduction_variable() should be either a 'real' or "
f"an 'integer' scalar but found scalar of type "
f"'{var_arg.intrinsic_type}'.")
- # Retrieve the precision information (if set) and append it
- # to the initial reduction value
- if var_arg.precision:
- kind_type = var_arg.precision
- zero_sum_variable = "_".join([data_value, kind_type])
- else:
- kind_type = ""
- zero_sum_variable = data_value
- parent.add(AssignGen(parent, lhs=var_name, rhs=zero_sum_variable),
- position=position)
+
+ # Retrieve the variable and precision information
+ kind_str = f"(kind={var_arg.precision})" if var_arg.precision else ""
+ variable = self.scope.symbol_table.lookup(variable_name)
+ insert_loc = self.ancestor(PSyLoop)
+        # If there is an ancestor Directive, keep going up the tree
+ while isinstance(insert_loc.parent.parent, Directive):
+ insert_loc = insert_loc.parent.parent
+ cursor = insert_loc.position
+ insert_loc = insert_loc.parent
+ new_node = Assignment.create(
+ lhs=Reference(variable),
+ rhs=Literal("0", data_type))
+ insert_loc.addchild(new_node, cursor)
+ cursor += 1
+
if self.reprod_reduction:
- parent.add(DeclGen(parent, datatype=var_data_type,
- entity_decls=[local_var_name],
- allocatable=True, kind=kind_type,
- dimension=":,:"))
+ local_var = self.scope.symbol_table.find_or_create_tag(
+ local_var_name, symbol_type=DataSymbol,
+ datatype=UnsupportedFortranType(
+ f"{var_data_type}{kind_str}, allocatable, "
+ f"dimension(:,:) :: {local_var_name}"
+ ))
nthreads = \
- self.scope.symbol_table.lookup_with_tag("omp_num_threads").name
+ self.scope.symbol_table.lookup_with_tag("omp_num_threads")
if Config.get().reprod_pad_size < 1:
raise GenerationError(
f"REPROD_PAD_SIZE in {Config.get().filename} should be a "
f"positive integer, but it is set to "
f"'{Config.get().reprod_pad_size}'.")
- pad_size = str(Config.get().reprod_pad_size)
- parent.add(AllocateGen(parent, local_var_name + "(" + pad_size +
- "," + nthreads + ")"), position=position)
- parent.add(AssignGen(parent, lhs=local_var_name,
- rhs=zero_sum_variable), position=position)
-
- def reduction_sum_loop(self, parent):
+ pad_size = Literal(str(Config.get().reprod_pad_size), INTEGER_TYPE)
+ alloc = IntrinsicCall.create(
+ IntrinsicCall.Intrinsic.ALLOCATE,
+ [ArrayReference.create(local_var,
+ [pad_size, Reference(nthreads)])])
+ insert_loc.addchild(alloc, cursor)
+ cursor += 1
+
+ assign = Assignment.create(
+ lhs=Reference(local_var),
+ rhs=Literal("0", data_type)
+ )
+ insert_loc.addchild(assign, cursor)
+ return new_node
+
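
For reference, the zeroing statement is now built from ordinary PSyIR
constructors rather than ``AssignGen``. A hedged, self-contained sketch of
the same pattern (the symbol name is a placeholder, not the LFRic one)::

    from psyclone.psyir.nodes import Assignment, Literal, Reference, Schedule
    from psyclone.psyir.symbols import DataSymbol, REAL_TYPE

    asum = DataSymbol("asum", REAL_TYPE)
    zero = Assignment.create(lhs=Reference(asum),
                             rhs=Literal("0.0", REAL_TYPE))
    # Insert at an explicit position, as done with 'insert_loc' above.
    sched = Schedule()
    sched.addchild(zero, 0)
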
+ def reduction_sum_loop(self):
'''
Generate the appropriate code to place after the end parallel
region.
- :param parent: the Node in the f2pygen AST to which to add new code.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
:raises GenerationError: for an unsupported reduction access in \
LFRicBuiltIn.
-
'''
var_name = self._reduction_arg.name
local_var_name = self.local_reduction_name
- # A non-reproducible reduction requires a single-valued argument
- local_var_ref = self._reduction_reference().name
- # A reproducible reduction requires multi-valued argument stored
- # as a padded array separately for each thread
- if self.reprod_reduction:
- local_var_ref = FortranWriter().arrayreference_node(
- self._reduction_reference())
reduction_access = self._reduction_arg.access
try:
- reduction_operator = REDUCTION_OPERATOR_MAPPING[reduction_access]
+ _ = REDUCTION_OPERATOR_MAPPING[reduction_access]
except KeyError as err:
api_strings = [access.api_specific_name()
for access in REDUCTION_OPERATOR_MAPPING]
@@ -1234,13 +1177,40 @@ def reduction_sum_loop(self, parent):
f"LFRicBuiltIn:reduction_sum_loop(). Expected one of "
f"{api_strings}.") from err
symtab = self.scope.symbol_table
- thread_idx = symtab.lookup_with_tag("omp_thread_index").name
- nthreads = symtab.lookup_with_tag("omp_num_threads").name
- do_loop = DoGen(parent, thread_idx, "1", nthreads)
- do_loop.add(AssignGen(do_loop, lhs=var_name, rhs=var_name +
- reduction_operator + local_var_ref))
- parent.add(do_loop)
- parent.add(DeallocateGen(parent, local_var_name))
+ thread_idx = symtab.find_or_create_tag(
+ "omp_thread_index",
+ root_name="th_idx",
+ symbol_type=DataSymbol,
+ datatype=INTEGER_TYPE)
+ nthreads = symtab.find_or_create_tag(
+ "omp_num_threads",
+ root_name="nthreads",
+ symbol_type=DataSymbol,
+ datatype=INTEGER_TYPE)
+ do_loop = Loop.create(
+ thread_idx,
+ start=Literal("1", INTEGER_TYPE),
+ stop=Reference(nthreads),
+ step=Literal("1", INTEGER_TYPE),
+ children=[])
+ directive = self.ancestor(OMPParallelDirective)
+ directive.parent.addchild(do_loop, directive.position+1)
+ var_symbol = self.scope.symbol_table.lookup(var_name)
+ local_symbol = self.scope.symbol_table.lookup(local_var_name)
+ do_loop.loop_body.addchild(Assignment.create(
+ lhs=Reference(var_symbol),
+ rhs=BinaryOperation.create(
+ BinaryOperation.Operator.ADD,
+ Reference(var_symbol),
+ ArrayReference.create(local_symbol,
+ [Literal("1", INTEGER_TYPE),
+ Reference(thread_idx)]))))
+ do_loop.append_preceding_comment(
+ "sum the partial results sequentially")
+ do_loop.parent.addchild(
+ IntrinsicCall.create(IntrinsicCall.Intrinsic.DEALLOCATE,
+ [Reference(local_symbol)]),
+ do_loop.position+1)
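
The summation loop is likewise assembled from PSyIR nodes. A sketch of the
same construction with placeholder symbols (not the actual LFRic names)::

    from psyclone.psyir.nodes import (ArrayReference, Assignment,
                                      BinaryOperation, Literal, Loop,
                                      Reference)
    from psyclone.psyir.symbols import (ArrayType, DataSymbol, INTEGER_TYPE,
                                        REAL_TYPE)

    th_idx = DataSymbol("th_idx", INTEGER_TYPE)
    nthreads = DataSymbol("nthreads", INTEGER_TYPE)
    asum = DataSymbol("asum", REAL_TYPE)
    l_asum = DataSymbol("l_asum", ArrayType(REAL_TYPE, [8, 4]))

    # asum = asum + l_asum(1, th_idx), executed for th_idx = 1, nthreads
    body = [Assignment.create(
        lhs=Reference(asum),
        rhs=BinaryOperation.create(
            BinaryOperation.Operator.ADD,
            Reference(asum),
            ArrayReference.create(l_asum, [Literal("1", INTEGER_TYPE),
                                           Reference(th_idx)])))]
    loop = Loop.create(th_idx, Literal("1", INTEGER_TYPE),
                       Reference(nthreads), Literal("1", INTEGER_TYPE), body)
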
def _reduction_reference(self):
'''
@@ -1323,9 +1293,6 @@ def is_coloured(self):
def iterates_over(self):
return self._iterates_over
- def gen_code(self, parent):
- raise NotImplementedError("Kern.gen_code should be implemented")
-
class CodedKern(Kern):
'''
@@ -1464,7 +1431,7 @@ def module_inline(self, value):
f"'True' since module-inlining is irreversible. But found:"
f" '{value}'.")
# Do the same to all kernels in this invoke with the same name.
- # This is needed because gen_code/lowering would otherwise add
+ # This is needed because lowering would otherwise add
# an import with the same name and shadow the module-inline routine
# symbol.
# TODO 1823: The transformation could have more control about this by
@@ -1752,13 +1719,18 @@ def _rename_psyir(self, suffix):
container_table = container.symbol_table
for sym in container_table.datatypesymbols:
if isinstance(sym.datatype, UnsupportedFortranType):
- new_declaration = sym.datatype.declaration.replace(
- orig_kern_name, new_kern_name)
- # pylint: disable=protected-access
- sym._datatype = UnsupportedFortranType(
- new_declaration,
- partial_datatype=sym.datatype.partial_datatype)
- # pylint: enable=protected-access
+                # If the DataTypeSymbol describes kernel metadata, update
+                # the name of the kernel subroutine bound to 'code'.
+ for line in sym.datatype.declaration.split('\n'):
+ if "PROCEDURE," in line:
+ newl = f"PROCEDURE, NOPASS :: code => {new_kern_name}"
+ new_declaration = sym.datatype.declaration.replace(
+ line, newl)
+ # pylint: disable=protected-access
+ sym._datatype = UnsupportedFortranType(
+ new_declaration,
+ partial_datatype=sym.datatype.partial_datatype)
+ break # There is only one such statement per type
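
A pure-Python sketch of the declaration rewrite performed above, using a
made-up kernel-metadata declaration rather than one from a real kernel::

    decl = ("type, extends(kernel_type) :: testkern_type\n"
            "contains\n"
            "  PROCEDURE, NOPASS :: code => testkern_code\n"
            "end type testkern_type")
    new_kern_name = "testkern_0_code"
    for line in decl.split("\n"):
        if "PROCEDURE," in line:
            # Only the routine bound to 'code' needs renaming.
            decl = decl.replace(
                line, f"  PROCEDURE, NOPASS :: code => {new_kern_name}")
            break
    print(decl)
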
@property
def modified(self):
@@ -1964,9 +1936,6 @@ def __init__(self, arg):
'''
# the `psyclone.psyGen.Argument` we are concerned with
self._arg = arg
- # The call (Kern, HaloExchange, GlobalSum or subclass)
- # instance with which the argument is associated
- self._call = arg.call
# initialise _covered and _vector_index_access to keep pylint
# happy
self._covered = None
@@ -1995,7 +1964,7 @@ def overlaps(self, arg):
# the arguments are different args so do not overlap
return False
- if isinstance(self._call, HaloExchange) and \
+ if isinstance(self._arg._call, HaloExchange) and \
isinstance(arg.call, HaloExchange) and \
(self._arg.vector_size > 1 or arg.vector_size > 1):
# This is a vector field and both accesses come from halo
@@ -2009,7 +1978,7 @@ def overlaps(self, arg):
f"DataAccess.overlaps(): vector sizes differ for field "
f"'{arg.name}' in two halo exchange calls. Found "
f"'{self._arg.vector_size}' and '{arg.vector_size}'")
- if self._call.vector_index != arg.call.vector_index:
+ if self._arg._call.vector_index != arg.call.vector_index:
# accesses are to different vector indices so do not overlap
return False
# accesses do overlap
@@ -2054,7 +2023,7 @@ def update_coverage(self, arg):
# halo exchange and therefore only accesses one of the
# vectors
- if isinstance(self._call, HaloExchange):
+ if isinstance(self._arg._call, HaloExchange):
# I am also a halo exchange so only access one of the
# vectors. At this point the vector indices of the two
# halo exchange fields must be the same, which should
@@ -2181,6 +2150,16 @@ def _complete_init(self, arg_info):
if hasattr(self, 'vector_size') and self.vector_size > 1:
data_type = ArrayType(data_type, [self.vector_size])
+            # Symbol imports for stencil names are not added to the symbol
+            # table until lowering time, so make sure the argument names do
+            # not clash with them.
+ # pylint: disable=import-outside-toplevel
+ from psyclone.domain.lfric.lfric_constants import \
+ LFRicConstants
+ const = LFRicConstants()
+ if self._orig_name.upper() in const.STENCIL_MAPPING.values():
+ self._orig_name = self._orig_name + "_arg"
+
new_argument = symtab.find_or_create_tag(
tag, root_name=self._orig_name, symbol_type=DataSymbol,
datatype=data_type,
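
Much of the symbol handling in this patch goes through tag-based lookups. A
small sketch of the ``find_or_create_tag`` behaviour relied upon here (the
tag and name are invented for the example)::

    from psyclone.psyir.symbols import DataSymbol, INTEGER_TYPE, SymbolTable

    table = SymbolTable()
    sym1 = table.find_or_create_tag("ncells", root_name="ncells",
                                    symbol_type=DataSymbol,
                                    datatype=INTEGER_TYPE)
    # A second lookup with the same tag returns the very same symbol.
    sym2 = table.find_or_create_tag("ncells")
    assert sym1 is sym2
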
diff --git a/src/psyclone/psyad/transformations/assignment_trans.py b/src/psyclone/psyad/transformations/assignment_trans.py
index 9bf864b87a..59b918dad1 100644
--- a/src/psyclone/psyad/transformations/assignment_trans.py
+++ b/src/psyclone/psyad/transformations/assignment_trans.py
@@ -37,7 +37,6 @@
assignment node with its adjoint form.
'''
-from __future__ import absolute_import
from psyclone.core import SymbolicMaths
from psyclone.psyir.nodes import BinaryOperation, Assignment, Reference, \
diff --git a/src/psyclone/psyir/backend/fortran.py b/src/psyclone/psyir/backend/fortran.py
index 284de19576..eee40e1d3b 100644
--- a/src/psyclone/psyir/backend/fortran.py
+++ b/src/psyclone/psyir/backend/fortran.py
@@ -41,7 +41,7 @@
# pylint: disable=too-many-lines
from psyclone.core import Signature
-from psyclone.errors import GenerationError, InternalError
+from psyclone.errors import InternalError
from psyclone.psyir.backend.language_writer import LanguageWriter
from psyclone.psyir.backend.visitor import VisitorError
from psyclone.psyir.frontend.fparser2 import (
@@ -54,7 +54,7 @@
GenericInterfaceSymbol, IntrinsicSymbol, PreprocessorInterface,
RoutineSymbol, ScalarType, StructureType, Symbol, SymbolTable,
UnresolvedInterface, UnresolvedType, UnsupportedFortranType,
- UnsupportedType, )
+ UnsupportedType, TypedSymbol)
# Mapping from PSyIR types to Fortran data types. Simply reverse the
@@ -500,6 +500,7 @@ def gen_vardecl(self, symbol, include_visibility=False):
:returns: the Fortran variable declaration as a string.
:rtype: str
+ :raises VisitorError: if the symbol is not typed.
:raises VisitorError: if the symbol is of UnresolvedType.
:raises VisitorError: if the symbol is of UnsupportedType other than
UnsupportedFortranType.
@@ -516,6 +517,9 @@ def gen_vardecl(self, symbol, include_visibility=False):
'''
# pylint: disable=too-many-branches
+ if not isinstance(symbol, (TypedSymbol, StructureType.ComponentType)):
+ raise VisitorError(f"Symbol '{symbol.name}' must be a symbol with"
+ f" a datatype in order to use 'gen_vardecl'.")
if isinstance(symbol.datatype, UnresolvedType):
raise VisitorError(f"Symbol '{symbol.name}' has a UnresolvedType "
f"and we can not generate a declaration for "
@@ -1492,15 +1496,7 @@ def loop_node(self, node):
body += self._visit(child)
self._depth -= 1
- # A generation error is raised if variable is not defined. This
- # happens in LFRic kernel that iterate over a domain.
- try:
- variable_name = node.variable.name
- except GenerationError:
- # If a kernel iterates over a domain - there is
- # no loop. But the loop node is maintained since it handles halo
- # exchanges. So just return the body in this case
- return body
+ variable_name = node.variable.name
return (
f"{self._nindent}do {variable_name} = {start}, {stop}, {step}\n"
diff --git a/src/psyclone/psyir/backend/visitor.py b/src/psyclone/psyir/backend/visitor.py
index fd93a5403c..46c6febbfd 100644
--- a/src/psyclone/psyir/backend/visitor.py
+++ b/src/psyclone/psyir/backend/visitor.py
@@ -273,6 +273,10 @@ def _visit(self, node):
if not parent or valid:
if node.preceding_comment and self._COMMENT_PREFIX:
lines = node.preceding_comment.split('\n')
+                # For better readability, precede any comment that is not
+                # at the top of its scope with a blank line.
+ if node.position != 0:
+ result += "\n"
for line in lines:
result += (self._nindent +
self._COMMENT_PREFIX +
diff --git a/src/psyclone/psyir/nodes/acc_directives.py b/src/psyclone/psyir/nodes/acc_directives.py
index 868e66a584..5099e37316 100644
--- a/src/psyclone/psyir/nodes/acc_directives.py
+++ b/src/psyclone/psyir/nodes/acc_directives.py
@@ -45,8 +45,7 @@
import abc
from psyclone.core import Signature
-from psyclone.f2pygen import DirectiveGen, CommentGen
-from psyclone.errors import GenerationError, InternalError
+from psyclone.errors import GenerationError
from psyclone.psyir.nodes.acc_clauses import (ACCCopyClause, ACCCopyInClause,
ACCCopyOutClause)
from psyclone.psyir.nodes.assignment import Assignment
@@ -188,20 +187,6 @@ def parallelism(self, value):
f"of parallelism but got '{value}'")
self._parallelism = value.lower()
- def gen_code(self, parent):
- '''Generate the Fortran ACC Routine Directive and any associated code.
-
- :param parent: the parent Node in the Schedule to which to add our
- content.
- :type parent: sub-class of :py:class:`psyclone.f2pygen.BaseGen`
- '''
- # Check the constraints are correct
- self.validate_global_constraints()
-
- # Generate the code for this Directive
- parent.add(DirectiveGen(parent, "acc", "begin", "routine",
- f"{self.parallelism}"))
-
def begin_string(self):
'''Returns the beginning statement of this directive, i.e.
"acc routine". The visitor is responsible for adding the
@@ -232,33 +217,6 @@ def __init__(self, children=None, parent=None):
self._sig_set = set()
- def gen_code(self, parent):
- '''Generate the elements of the f2pygen AST for this Node in the
- Schedule.
-
- :param parent: node in the f2pygen AST to which to add node(s).
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
-
- :raises GenerationError: if no data is found to copy in.
-
- '''
- self.validate_global_constraints()
- self.lower_to_language_level()
- # Leverage begin_string() to raise an exception if there are no
- # variables to copyin but discard the generated string since it is
- # incompatible with class DirectiveGen() we are using below.
- self.begin_string()
-
- # Add the enter data directive.
- sym_list = _sig_set_to_string(self._sig_set)
- copy_in_str = f"copyin({sym_list})"
- parent.add(DirectiveGen(parent, "acc", "begin", "enter data",
- copy_in_str))
- # Call an API-specific subclass of this class in case
- # additional declarations are required.
- self.data_on_device(parent)
- parent.add(CommentGen(parent, ""))
-
def lower_to_language_level(self):
'''
In-place replacement of this directive concept into language level
@@ -337,26 +295,6 @@ def __init__(self, default_present=True, **kwargs):
super().__init__(**kwargs)
self.default_present = default_present
- def gen_code(self, parent):
- '''
- Generate the elements of the f2pygen AST for this Node in the Schedule.
-
- :param parent: node in the f2pygen AST to which to add node(s).
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
-
- '''
- self.validate_global_constraints()
-
- parent.add(DirectiveGen(parent, "acc", "begin",
- *self.begin_string().split()[1:]))
-
- for child in self.children:
- child.gen_code(parent)
-
- parent.add(DirectiveGen(parent, *self.end_string().split()))
-
- self.gen_post_region_code(parent)
-
def begin_string(self):
'''
Returns the beginning statement of this directive, i.e.
@@ -596,28 +534,6 @@ def validate_global_constraints(self):
super().validate_global_constraints()
- def gen_code(self, parent):
- '''
- Generate the f2pygen AST entries in the Schedule for this OpenACC
- loop directive.
-
- :param parent: the parent Node in the Schedule to which to add our
- content.
- :type parent: sub-class of :py:class:`psyclone.f2pygen.BaseGen`
- :raises GenerationError: if this "!$acc loop" is not enclosed within \
- an ACC Parallel region.
- '''
- self.validate_global_constraints()
-
- # Add any clauses to the directive. We use self.begin_string() to avoid
- # code duplication.
- options_str = self.begin_string(leading_acc=False)
-
- parent.add(DirectiveGen(parent, "acc", "begin", "loop", options_str))
-
- for child in self.children:
- child.gen_code(parent)
-
def begin_string(self, leading_acc=True):
''' Returns the opening statement of this directive, i.e.
"acc loop" plus any qualifiers. If `leading_acc` is False then
@@ -701,29 +617,6 @@ def default_present(self):
'''
return self._default_present
- def gen_code(self, parent):
- '''
- Generate the f2pygen AST entries in the Schedule for this
- OpenACC Kernels directive.
-
- :param parent: the parent Node in the Schedule to which to add this \
- content.
- :type parent: sub-class of :py:class:`psyclone.f2pygen.BaseGen`
-
- '''
- self.validate_global_constraints()
-
- # We re-use the 'begin_string' method but must skip the leading 'acc'
- # that it includes.
- parent.add(DirectiveGen(parent, "acc", "begin",
- *self.begin_string().split()[1:]))
- for child in self.children:
- child.gen_code(parent)
-
- parent.add(DirectiveGen(parent, *self.end_string().split()))
-
- self.gen_post_region_code(parent)
-
def begin_string(self):
'''Returns the beginning statement of this directive, i.e.
"acc kernels ...". The backend is responsible for adding the
@@ -757,16 +650,6 @@ class ACCDataDirective(ACCRegionDirective):
in the PSyIR.
'''
- def gen_code(self, _):
- '''
- :raises InternalError: the ACC data directive is currently only \
- supported for the NEMO API and that uses the \
- PSyIR backend to generate code.
- fparser2 parse tree.
-
- '''
- raise InternalError(
- "ACCDataDirective.gen_code should not have been called.")
@staticmethod
def _validate_child(position, child):
diff --git a/src/psyclone/psyir/nodes/array_member.py b/src/psyclone/psyir/nodes/array_member.py
index 530faa643c..7384a9ecdb 100644
--- a/src/psyclone/psyir/nodes/array_member.py
+++ b/src/psyclone/psyir/nodes/array_member.py
@@ -36,7 +36,6 @@
''' This module contains the implementation of the ArrayMember node.'''
-from __future__ import absolute_import
from psyclone.psyir.nodes.member import Member
from psyclone.psyir.nodes.array_mixin import ArrayMixin
from psyclone.errors import GenerationError
diff --git a/src/psyclone/psyir/nodes/array_of_structures_member.py b/src/psyclone/psyir/nodes/array_of_structures_member.py
index 051d90d307..f096015ba0 100644
--- a/src/psyclone/psyir/nodes/array_of_structures_member.py
+++ b/src/psyclone/psyir/nodes/array_of_structures_member.py
@@ -38,7 +38,6 @@
''' This module contains the implementation of the ArrayOfStructuresMember
node.'''
-from __future__ import absolute_import
from psyclone.psyir.nodes.structure_member import StructureMember
from psyclone.psyir.nodes.array_of_structures_mixin import \
ArrayOfStructuresMixin
diff --git a/src/psyclone/psyir/nodes/assignment.py b/src/psyclone/psyir/nodes/assignment.py
index 2be5b9d1fa..f238380c24 100644
--- a/src/psyclone/psyir/nodes/assignment.py
+++ b/src/psyclone/psyir/nodes/assignment.py
@@ -41,7 +41,6 @@
from psyclone.core import VariablesAccessInfo
from psyclone.errors import InternalError
-from psyclone.f2pygen import PSyIRGen
from psyclone.psyir.nodes.literal import Literal
from psyclone.psyir.nodes.array_reference import ArrayReference
from psyclone.psyir.nodes.datanode import DataNode
@@ -255,11 +254,3 @@ def is_literal_assignment(self):
'''
return isinstance(self.rhs, Literal)
-
- def gen_code(self, parent):
- '''F2pygen code generation of an Assignment.
-
- :param parent: the parent of this Node in the PSyIR.
- :type parent: :py:class:`psyclone.psyir.nodes.Node`
- '''
- parent.add(PSyIRGen(parent, self))
diff --git a/src/psyclone/psyir/nodes/directive.py b/src/psyclone/psyir/nodes/directive.py
index 32ce89b27d..b15da8ada7 100644
--- a/src/psyclone/psyir/nodes/directive.py
+++ b/src/psyclone/psyir/nodes/directive.py
@@ -44,13 +44,10 @@
import abc
from collections import OrderedDict
-from psyclone.configuration import Config
from psyclone.core import Signature, VariablesAccessInfo
from psyclone.errors import InternalError
-from psyclone.f2pygen import CommentGen
from psyclone.psyir.nodes.array_of_structures_reference import (
ArrayOfStructuresReference)
-from psyclone.psyir.nodes.loop import Loop
from psyclone.psyir.nodes.reference import Reference
from psyclone.psyir.nodes.schedule import Schedule
from psyclone.psyir.nodes.statement import Statement
@@ -244,45 +241,6 @@ def clauses(self):
return self.children[1:]
return []
- def gen_post_region_code(self, parent):
- '''
- Generates any code that must be executed immediately after the end of
- the region defined by this directive.
-
- TODO #1648 this method is only used by the gen_code() code-generation
- path and should be replaced by functionality in a
- 'lower_to_language_level' method in an LFRic-specific subclass
- of the appropriate directive.
-
- :param parent: where to add new f2pygen nodes.
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
-
- '''
- if not Config.get().distributed_memory or self.ancestor(Loop):
- return
- # Have to import PSyLoop here to avoid a circular dependence.
- # pylint: disable=import-outside-toplevel
- from psyclone.domain.common.psylayer import PSyLoop
-
- commented = False
- for loop in self.walk(PSyLoop):
- if not isinstance(loop.parent, Loop):
- if not commented and loop.unique_modified_args("gh_field"):
- commented = True
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent,
- " Set halos dirty/clean for fields "
- "modified in the above loop(s)"))
- parent.add(CommentGen(parent, ""))
- loop.gen_mark_halos_clean_dirty(parent)
-
- if commented:
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent,
- " End of set dirty/clean section for "
- "above loop(s)"))
- parent.add(CommentGen(parent, ""))
-
class StandaloneDirective(Directive):
'''
diff --git a/src/psyclone/psyir/nodes/extract_node.py b/src/psyclone/psyir/nodes/extract_node.py
index 26c6c59b1f..23b3925989 100644
--- a/src/psyclone/psyir/nodes/extract_node.py
+++ b/src/psyclone/psyir/nodes/extract_node.py
@@ -49,7 +49,6 @@
be added in Issue #298.
'''
-from psyclone.f2pygen import CommentGen
from psyclone.psyir.nodes.psy_data_node import PSyDataNode
@@ -145,46 +144,6 @@ def post_name(self):
'''
return self._post_name
- def gen_code(self, parent):
- # pylint: disable=arguments-differ
- '''
- Generates the code required for extraction of one or more Nodes.
- It uses the PSyData API (via the base class PSyDataNode) to create
- the required callbacks that will allow a library to write the
- kernel data to a file.
-
- :param parent: the parent of this Node in the PSyIR.
- :type parent: :py:class:`psyclone.psyir.nodes.Node`.
-
- '''
- if self._read_write_info is None:
- # Typically, _read_write_info should be set at the constructor,
- # but some tests do not provide the required information. To
- # support these tests, allow creation of the read_write info here.
- # We cannot do this in the constructor, since at construction
- # time of this node it is not yet part of the PSyIR tree, so it
- # does not have children from which we can collect the input/output
- # parameters.
-
- # Avoid circular dependency
- # pylint: disable=import-outside-toplevel
- from psyclone.psyir.tools.call_tree_utils import CallTreeUtils
- # Determine the variables to write:
- ctu = CallTreeUtils()
- self._read_write_info = ctu.get_in_out_parameters(self)
-
- options = {'pre_var_list': self._read_write_info.read_list,
- 'post_var_list': self._read_write_info.write_list,
- 'post_var_postfix': self._post_name}
-
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent, " ExtractStart"))
- parent.add(CommentGen(parent, ""))
- super().gen_code(parent, options)
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent, " ExtractEnd"))
- parent.add(CommentGen(parent, ""))
-
def lower_to_language_level(self):
# pylint: disable=arguments-differ
'''
diff --git a/src/psyclone/psyir/nodes/literal.py b/src/psyclone/psyir/nodes/literal.py
index d710df5596..8094ed8f06 100644
--- a/src/psyclone/psyir/nodes/literal.py
+++ b/src/psyclone/psyir/nodes/literal.py
@@ -80,7 +80,7 @@ class Literal(DataNode):
_text_name = "Literal"
_colour = "yellow"
_real_value = r'^[+-]?[0-9]+(\.[0-9]*)?([eE][+-]?[0-9]+)?$'
- _int_value = r'(([+-]?[0-9]+)|(NOT_INITIALISED))'
+ _int_value = r'([+-]?[0-9]+)'
def __init__(self, value, datatype, parent=None):
super().__init__(parent=parent)
diff --git a/src/psyclone/psyir/nodes/loop.py b/src/psyclone/psyir/nodes/loop.py
index 0681e3e764..e048cb80e6 100644
--- a/src/psyclone/psyir/nodes/loop.py
+++ b/src/psyclone/psyir/nodes/loop.py
@@ -46,7 +46,6 @@
from psyclone.psyir.symbols import ScalarType, DataSymbol
from psyclone.core import AccessType, Signature
from psyclone.errors import InternalError, GenerationError
-from psyclone.f2pygen import DeclGen, PSyIRGen, UseGen
class Loop(Statement):
@@ -538,65 +537,3 @@ def independent_iterations(self,
return dtools.can_loop_be_parallelised(
self, test_all_variables=test_all_variables,
signatures_to_ignore=signatures_to_ignore)
-
- def gen_code(self, parent):
- '''
- Generate the Fortran Loop and any associated code.
-
- :param parent: the node in the f2pygen AST to which to add content.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
-
- '''
- # Avoid circular dependency
- # pylint: disable=import-outside-toplevel
- from psyclone.psyGen import zero_reduction_variables
-
- if not self.is_openmp_parallel():
- calls = self.reductions()
- zero_reduction_variables(calls, parent)
-
- # TODO #1010: The Fortran backend operates on a copy of the node so
- # that the lowering changes are not reflected in the provided node.
- # This is the correct behaviour but it means that the lowering changes
- # to ancestors will be lost here because the ancestors use gen_code
- # instead of lowering+backend.
- # So we need to do the "rename_and_write" here for the invoke symbol
- # table to be updated.
- from psyclone.psyGen import CodedKern
- for kernel in self.walk(CodedKern):
- if not kernel.module_inline:
- if kernel.modified:
- kernel.rename_and_write()
-
- # Use the Fortran Backend from this point
- parent.add(PSyIRGen(parent, self))
-
- # TODO #1010: The Fortran backend operates on a copy of the node so
- # that the lowering changes are not reflected in the provided node.
- # This is the correct behaviour but it means that the lowering changes
- # to ancestors will be lost here because the ancestors use gen_code
- # instead of lowering+backend.
- # Therefore we need to replicate the lowering ancestor changes
- # manually here (all this can be removed when the invoke schedule also
- # uses the lowering+backend), these are:
- # - Declaring the loop variable symbols
- for loop in self.walk(Loop):
- # pylint: disable=protected-access
- if loop._variable is None:
- # This is the dummy iteration variable
- name = "dummy"
- kind_gen = None
- else:
- name = loop.variable.name
- kind = loop.variable.datatype.precision.name
- kind_gen = None if kind == "UNDEFINED" else kind
- my_decl = DeclGen(parent, datatype="integer",
- kind=kind_gen,
- entity_decls=[name])
- parent.add(my_decl)
-
- # - Add the kernel module import statements
- for kernel in self.walk(CodedKern):
- if not kernel.module_inline:
- parent.add(UseGen(parent, name=kernel.module_name, only=True,
- funcnames=[kernel.name]))
diff --git a/src/psyclone/psyir/nodes/omp_directives.py b/src/psyclone/psyir/nodes/omp_directives.py
index d5f9538cc6..2ad751391b 100644
--- a/src/psyclone/psyir/nodes/omp_directives.py
+++ b/src/psyclone/psyir/nodes/omp_directives.py
@@ -51,8 +51,6 @@
from psyclone.core import AccessType, VariablesAccessInfo
from psyclone.errors import (GenerationError,
UnresolvedDependencyError)
-from psyclone.f2pygen import (AssignGen, UseGen, DeclGen, DirectiveGen,
- CommentGen)
from psyclone.psyir.nodes.array_mixin import ArrayMixin
from psyclone.psyir.nodes.array_reference import ArrayReference
from psyclone.psyir.nodes.assignment import Assignment
@@ -74,7 +72,9 @@
from psyclone.psyir.nodes.schedule import Schedule
from psyclone.psyir.nodes.structure_reference import StructureReference
from psyclone.psyir.nodes.while_loop import WhileLoop
-from psyclone.psyir.symbols import INTEGER_TYPE, ScalarType, DataSymbol
+from psyclone.psyir.symbols import (
+ INTEGER_TYPE, ScalarType, DataSymbol, ImportInterface, ContainerSymbol,
+ RoutineSymbol)
# OMP_OPERATOR_MAPPING is used to determine the operator to use in the
# reduction clause of an OpenMP directive.
@@ -126,6 +126,10 @@ def _get_reductions_list(self, reduction_type):
const = Config.get().api_conf().get_constants()
for call in self.kernels():
for arg in call.arguments.args:
+ if call.reprod_reduction:
+ # In this case we do the reduction serially instead of
+ # using an OpenMP clause
+ continue
if arg.argument_type in const.VALID_SCALAR_NAMES:
if arg.descriptor.access == reduction_type:
if arg.name not in result:
@@ -143,20 +147,6 @@ class OMPDeclareTargetDirective(OMPStandaloneDirective):
Class representing an OpenMP Declare Target directive in the PSyIR.
'''
- def gen_code(self, parent):
- '''Generate the fortran OMP Declare Target Directive and any
- associated code.
-
- :param parent: the parent Node in the Schedule to which to add our
- content.
- :type parent: sub-class of :py:class:`psyclone.f2pygen.BaseGen`
- '''
- # Check the constraints are correct
- self.validate_global_constraints()
-
- # Generate the code for this Directive
- parent.add(DirectiveGen(parent, "omp", "begin", "declare", "target"))
-
def begin_string(self):
'''Returns the beginning statement of this directive, i.e.
"omp routine". The visitor is responsible for adding the
@@ -214,21 +204,6 @@ def validate_global_constraints(self):
super().validate_global_constraints()
- def gen_code(self, parent):
- '''Generate the fortran OMP Taskwait Directive and any associated
- code
-
- :param parent: the parent Node in the Schedule to which to add our
- content.
- :type parent: sub-class of :py:class:`psyclone.f2pygen.BaseGen`
- '''
- # Check the constraints are correct
- self.validate_global_constraints()
-
- # Generate the code for this Directive
- parent.add(DirectiveGen(parent, "omp", "begin", "taskwait", ""))
- # No children or end code for this node
-
def begin_string(self):
'''Returns the beginning statement of this directive, i.e.
"omp taskwait". The visitor is responsible for adding the
@@ -1166,32 +1141,6 @@ def nowait(self):
'''
return self._nowait
- def gen_code(self, parent):
- '''Generate the fortran OMP Single Directive and any associated
- code
-
- :param parent: the parent Node in the Schedule to which to add our
- content.
- :type parent: sub-class of :py:class:`psyclone.f2pygen.BaseGen`
- '''
- # Check the constraints are correct
- self.validate_global_constraints()
-
- # Capture the nowait section of the string if required
- nowait_string = ""
- if self._nowait:
- nowait_string = "nowait"
-
- parent.add(DirectiveGen(parent, "omp", "begin", "single",
- nowait_string))
-
- # Generate the code for all of this node's children
- for child in self.dir_body:
- child.gen_code(parent)
-
- # Generate the end code for this node
- parent.add(DirectiveGen(parent, "omp", "end", "single", ""))
-
def begin_string(self):
'''Returns the beginning statement of this directive, i.e.
"omp single". The visitor is responsible for adding the
@@ -1224,27 +1173,6 @@ class OMPMasterDirective(OMPSerialDirective):
# Textual description of the node
_text_name = "OMPMasterDirective"
- def gen_code(self, parent):
- '''Generate the Fortran OMP Master Directive and any associated
- code
-
- :param parent: the parent Node in the Schedule to which to add our
- content.
- :type parent: sub-class of :py:class:`psyclone.f2pygen.BaseGen`
- '''
-
- # Check the constraints are correct
- self.validate_global_constraints()
-
- parent.add(DirectiveGen(parent, "omp", "begin", "master", ""))
-
- # Generate the code for all of this node's children
- for child in self.children:
- child.gen_code(parent)
-
- # Generate the end code for this node
- parent.add(DirectiveGen(parent, "omp", "end", "master", ""))
-
def begin_string(self):
'''Returns the beginning statement of this directive, i.e.
"omp master". The visitor is responsible for adding the
@@ -1339,60 +1267,29 @@ def private_clause(self):
'''
return self.children[2]
- def gen_code(self, parent):
- '''Generate the fortran OMP Parallel Directive and any associated
- code.
+ def lower_to_language_level(self):
+ '''
+ In-place construction of clauses as PSyIR constructs.
+ At the higher level these clauses rely on dynamic variable dependence
+ logic to decide what is private and what is shared, so we use this
+ lowering step to find out which References are private, and place them
+ explicitly in the lower-level tree to be processed by the backend
+ visitor.
- :param parent: the node in the generated AST to which to add content.
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
+ :returns: the lowered version of this node.
+    :rtype: :py:class:`psyclone.psyir.nodes.Node`
:raises GenerationError: if the OpenMP directive needs some
synchronisation mechanism to create valid
code. These are not implemented yet.
-
'''
- # pylint: disable=import-outside-toplevel
- from psyclone.psyGen import zero_reduction_variables
-
- # We're not doing nested parallelism so make sure that this
- # omp parallel region is not already within some parallel region
- self.validate_global_constraints()
-
- # Check that this OpenMP PARALLEL directive encloses other
- # OpenMP directives. Although it is valid OpenMP if it doesn't,
- # this almost certainly indicates a user error.
- self._encloses_omp_directive()
-
- # Generate the private and firstprivate clauses
- private, fprivate, need_sync = self.infer_sharing_attributes()
- private_clause = OMPPrivateClause.create(
- sorted(private, key=lambda x: x.name))
- fprivate_clause = OMPFirstprivateClause.create(
- sorted(fprivate, key=lambda x: x.name))
- if need_sync:
- raise GenerationError(
- f"OMPParallelDirective.gen_code() does not support symbols "
- f"that need synchronisation, but found: "
- f"{[x.name for x in need_sync]}")
-
- reprod_red_call_list = self.reductions(reprod=True)
- if reprod_red_call_list:
- # we will use a private thread index variable
- thread_idx = self.scope.symbol_table.\
- lookup_with_tag("omp_thread_index")
- private_clause.addchild(Reference(thread_idx))
- thread_idx = thread_idx.name
- # declare the variable
- parent.add(DeclGen(parent, datatype="integer",
- entity_decls=[thread_idx]))
-
- calls = self.reductions()
# first check whether we have more than one reduction with the same
# name in this Schedule. If so, raise an error as this is not
# supported for a parallel region.
names = []
- for call in calls:
+ reduction_kernels = self.reductions()
+ for call in reduction_kernels:
name = call.reduction_arg.name
if name in names:
raise GenerationError(
@@ -1401,68 +1298,46 @@ def gen_code(self, parent):
f"reduction variable")
names.append(name)
- zero_reduction_variables(calls, parent)
-
- # pylint: disable=protected-access
- clauses_str = self.default_clause._clause_string
- # pylint: enable=protected-access
+ if reduction_kernels:
+ first_type = type(self.dir_body[0])
+ for child in self.dir_body.children:
+ if first_type != type(child):
+ raise GenerationError(
+ "Cannot correctly generate code for an OpenMP parallel"
+ " region with reductions and containing children of "
+ "different types.")
- private_list = [child.symbol.name for child in private_clause.children]
- if private_list:
- clauses_str += ", private(" + ",".join(private_list) + ")"
- fp_list = [child.symbol.name for child in fprivate_clause.children]
- if fp_list:
- clauses_str += ", firstprivate(" + ",".join(fp_list) + ")"
- parent.add(DirectiveGen(parent, "omp", "begin", "parallel",
- f"{clauses_str}"))
+ # pylint: disable=import-outside-toplevel
+ from psyclone.psyGen import zero_reduction_variables
+ zero_reduction_variables(reduction_kernels)
+        # Reproducible reductions are performed serially by accumulating the
+        # partial results in an array indexed by the thread index.
+ reprod_red_call_list = self.reductions(reprod=True)
if reprod_red_call_list:
- # add in a local thread index
- parent.add(UseGen(parent, name="omp_lib", only=True,
- funcnames=["omp_get_thread_num"]))
- parent.add(AssignGen(parent, lhs=thread_idx,
- rhs="omp_get_thread_num()+1"))
-
- first_type = type(self.dir_body[0])
- for child in self.dir_body.children:
- if first_type != type(child):
- raise NotImplementedError("Cannot correctly generate code"
- " for an OpenMP parallel region"
- " containing children of "
- "different types")
- child.gen_code(parent)
-
- parent.add(DirectiveGen(parent, "omp", "end", "parallel", ""))
-
+ # Use a private thread index variable
+ omp_lib = self.scope.symbol_table.find_or_create(
+ "omp_lib", symbol_type=ContainerSymbol)
+ omp_get_thread_num = self.scope.symbol_table.find_or_create(
+ "omp_get_thread_num", symbol_type=RoutineSymbol,
+ interface=ImportInterface(omp_lib))
+ thread_idx = self.scope.symbol_table.find_or_create_tag(
+ "omp_thread_index", root_name="th_idx",
+ symbol_type=DataSymbol, datatype=INTEGER_TYPE)
+ assignment = Assignment.create(
+ lhs=Reference(thread_idx),
+ rhs=BinaryOperation.create(
+ BinaryOperation.Operator.ADD,
+ Call.create(omp_get_thread_num),
+ Literal("1", INTEGER_TYPE))
+ )
+ self.dir_body.addchild(assignment, 0)
+
+ # Now finish the reproducible reductions
if reprod_red_call_list:
- parent.add(CommentGen(parent, ""))
- parent.add(CommentGen(parent, " sum the partial results "
- "sequentially"))
- parent.add(CommentGen(parent, ""))
- for call in reprod_red_call_list:
- call.reduction_sum_loop(parent)
-
- # If there are nested OMPRegions, the post region code should be after
- # the top-level one
- if not self.ancestor(OMPRegionDirective):
- self.gen_post_region_code(parent)
-
- def lower_to_language_level(self):
- '''
- In-place construction of clauses as PSyIR constructs.
- At the higher level these clauses rely on dynamic variable dependence
- logic to decide what is private and what is shared, so we use this
- lowering step to find out which References are private, and place them
- explicitly in the lower-level tree to be processed by the backend
- visitor.
+ for call in reversed(reprod_red_call_list):
+ call.reduction_sum_loop()
- :returns: the lowered version of this node.
- :rtype: :py:class:`psyclone.psyir.node.Node`
-
- :raises GenerationError: if the OpenMP directive needs some
- synchronisation mechanism to create valid
- code. These are not implemented yet.
- '''
# Keep the first two children and compute the rest using the current
# state of the node/tree (lowering it first in case new symbols are
# created)
@@ -1473,15 +1348,17 @@ def lower_to_language_level(self):
# Create data sharing clauses (order alphabetically to make generation
# reproducible)
private, fprivate, need_sync = self.infer_sharing_attributes()
+ if reprod_red_call_list:
+ private.add(thread_idx)
private_clause = OMPPrivateClause.create(
sorted(private, key=lambda x: x.name))
fprivate_clause = OMPFirstprivateClause.create(
sorted(fprivate, key=lambda x: x.name))
# Check all of the need_sync nodes are synchronized in children.
+        # unless there are reduction kernels, which are handled separately.
sync_clauses = self.walk(OMPDependClause)
- if need_sync:
+ if not reduction_kernels and need_sync:
for sym in need_sync:
- found = False
for clause in sync_clauses:
# Needs to be an out depend clause to synchronize
if clause.operand == "in":
@@ -1489,10 +1366,8 @@ def lower_to_language_level(self):
# Check if the symbol is in this depend clause.
if sym.name in [child.symbol.name for child in
clause.children]:
- found = True
- if found:
break
- if not found:
+ else:
raise GenerationError(
f"Lowering '{type(self).__name__}' does not support "
f"symbols that need synchronisation unless they are "
@@ -1501,6 +1376,7 @@ def lower_to_language_level(self):
self.addchild(private_clause)
self.addchild(fprivate_clause)
+
return self
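
Because the data-sharing clauses are now only materialised during lowering,
writing a tree out through the Fortran backend is enough to see them. An
illustrative sketch, assuming the generic ``OMPParallelLoopTrans`` can be
applied to a plain PSyIR loop::

    from psyclone.psyir.frontend.fortran import FortranReader
    from psyclone.psyir.backend.fortran import FortranWriter
    from psyclone.psyir.nodes import Loop
    from psyclone.transformations import OMPParallelLoopTrans

    psyir = FortranReader().psyir_from_source(
        "subroutine sub(n, a)\n"
        "  integer, intent(in) :: n\n"
        "  real, intent(inout) :: a(n)\n"
        "  integer :: i\n"
        "  do i = 1, n\n"
        "    a(i) = 0.0\n"
        "  end do\n"
        "end subroutine sub\n")
    OMPParallelLoopTrans().apply(psyir.walk(Loop)[0])
    # The private clause for 'i' only appears in the backend output, i.e.
    # after lower_to_language_level() has run.
    print(FortranWriter()(psyir))
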
def begin_string(self):
@@ -1851,43 +1727,6 @@ def validate_global_constraints(self):
super().validate_global_constraints()
- def gen_code(self, parent):
- '''
- Generate the f2pygen AST entries in the Schedule for this OpenMP
- taskloop directive.
-
- :param parent: the parent Node in the Schedule to which to add our
- content.
- :type parent: sub-class of :py:class:`psyclone.f2pygen.BaseGen`
- :raises GenerationError: if this "!$omp taskloop" is not enclosed
- within an OMP Parallel region and an OMP
- Serial region.
-
- '''
- self.validate_global_constraints()
-
- extra_clauses = ""
- # Find the specified clauses
- clause_list = []
- if self._grainsize is not None:
- clause_list.append(f"grainsize({self._grainsize})")
- if self._num_tasks is not None:
- clause_list.append(f"num_tasks({self._num_tasks})")
- if self._nogroup:
- clause_list.append("nogroup")
- # Generate the string containing the required clauses
- extra_clauses = ", ".join(clause_list)
-
- parent.add(DirectiveGen(parent, "omp", "begin", "taskloop",
- extra_clauses))
-
- self.dir_body.gen_code(parent)
-
- # make sure the directive occurs straight after the loop body
- position = parent.previous_loop()
- parent.add(DirectiveGen(parent, "omp", "end", "taskloop", ""),
- position=["after", position])
-
def begin_string(self):
'''Returns the beginning statement of this directive, i.e.
"omp taskloop ...". The visitor is responsible for adding the
@@ -1943,6 +1782,10 @@ def __init__(self, omp_schedule="none", collapse=None, reprod=None,
self._omp_schedule = omp_schedule
self._collapse = None
self.collapse = collapse # Use setter with error checking
+        # TODO #514: reductions are only implemented in LFRic. For now we
+        # store the required clause when lowering, but this needs a better
+        # solution.
+ self._lowered_reduction_string = ""
def __eq__(self, other):
'''
@@ -1973,11 +1816,6 @@ def collapse(self):
@collapse.setter
def collapse(self, value):
'''
- TODO #1648: Note that gen_code ignores the collapse clause but the
- generated code is still valid. Since gen_code is going to be removed
- and it is only used for LFRic (which does not support GPU offloading
- that gets improved with the collapse clause) it will not be supported.
-
:param value: optional number of nested loop to collapse into a
single iteration space to parallelise. Defaults to None.
:type value: int or NoneType.
@@ -2140,48 +1978,18 @@ def _validate_single_loop(self):
f"this Node has a child of type "
f"'{type(self.dir_body[0]).__name__}'")
- def gen_code(self, parent):
+ def lower_to_language_level(self):
'''
- Generate the f2pygen AST entries in the Schedule for this OpenMP do
- directive.
-
- TODO #1648: Note that gen_code ignores the collapse clause but the
- generated code is still valid. Since gen_code is going to be removed
- and it is only used for LFRic (which does not support GPU offloading
- that gets improved with the collapse clause) it will not be supported.
+        In-place construction of this directive's clauses as PSyIR constructs.
+        Clauses that are already present may need updating if the code has
+        changed; any that are missing are added.
- :param parent: the parent Node in the Schedule to which to add our
- content.
- :type parent: sub-class of :py:class:`psyclone.f2pygen.BaseGen`
- :raises GenerationError: if this "!$omp do" is not enclosed within
- an OMP Parallel region.
+ :returns: the lowered version of this node.
+ :rtype: :py:class:`psyclone.psyir.node.Node`
'''
- self.validate_global_constraints()
-
- parts = []
-
- if self.omp_schedule != "none":
- parts.append(f"schedule({self.omp_schedule})")
-
- if not self._reprod:
- red_str = self._reduction_string()
- if red_str:
- parts.append(red_str)
-
- # As we're a loop we don't specify the scope
- # of any variables so we don't have to generate the
- # list of private variables
- options = ", ".join(parts)
- parent.add(DirectiveGen(parent, "omp", "begin", "do", options))
-
- for child in self.children:
- child.gen_code(parent)
-
- # make sure the directive occurs straight after the loop body
- position = parent.previous_loop()
- parent.add(DirectiveGen(parent, "omp", "end", "do", ""),
- position=["after", position])
+ self._lowered_reduction_string = self._reduction_string()
+ return super().lower_to_language_level()
def begin_string(self):
'''Returns the beginning statement of this directive, i.e.
@@ -2197,6 +2005,8 @@ def begin_string(self):
string += f" schedule({self.omp_schedule})"
if self._collapse:
string += f" collapse({self._collapse})"
+ if self._lowered_reduction_string:
+ string += f", {self._lowered_reduction_string}"
return string
def end_string(self):
@@ -2272,91 +2082,6 @@ def _validate_child(position, child):
return True
return False
- def gen_code(self, parent):
- '''
- Generate the f2pygen AST entries in the Schedule for this OpenMP
- directive.
-
- TODO #1648: Note that gen_code ignores the collapse clause but the
- generated code is still valid. Since gen_code is going to be removed
- and it is only used for LFRic (which does not support GPU offloading
- that gets improved with the collapse clause) it will not be supported.
-
- :param parent: the parent Node in the Schedule to which to add our
- content.
- :type parent: sub-class of :py:class:`psyclone.f2pygen.BaseGen`
-
- '''
- # We're not doing nested parallelism so make sure that this
- # omp parallel do is not already within some parallel region
- # pylint: disable=import-outside-toplevel
- from psyclone.psyGen import zero_reduction_variables
- self.validate_global_constraints()
-
- calls = self.reductions()
- zero_reduction_variables(calls, parent)
-
- # Set default() private() and firstprivate() clauses
- # pylint: disable=protected-access
- default_str = self.children[1]._clause_string
- # pylint: enable=protected-access
- private, fprivate, need_sync = self.infer_sharing_attributes()
- private_clause = OMPPrivateClause.create(
- sorted(private, key=lambda x: x.name))
- fprivate_clause = OMPFirstprivateClause.create(
- sorted(fprivate, key=lambda x: x.name))
- if need_sync:
- raise GenerationError(
- f"OMPParallelDoDirective.gen_code() does not support symbols "
- f"that need synchronisation, but found: "
- f"{[x.name for x in need_sync]}")
-
- private_str = ""
- fprivate_str = ""
- private_list = [child.symbol.name for child in private_clause.children]
- if private_list:
- private_str = "private(" + ",".join(private_list) + ")"
- fp_list = [child.symbol.name for child in fprivate_clause.children]
- if fp_list:
- fprivate_str = "firstprivate(" + ",".join(fp_list) + ")"
-
- # Set schedule clause
- if self._omp_schedule != "none":
- schedule_str = f"schedule({self._omp_schedule})"
- else:
- schedule_str = ""
-
- # Add directive to the f2pygen tree
- parent.add(
- DirectiveGen(
- parent, "omp", "begin", self._directive_string, ", ".join(
- text for text in [default_str, private_str, fprivate_str,
- schedule_str, self._reduction_string()]
- if text)))
-
- for child in self.dir_body:
- child.gen_code(parent)
-
- # make sure the directive occurs straight after the loop body
- position = parent.previous_loop()
-
- # DirectiveGen only accepts 3 terms, e.g. "omp end loop", so for longer
- # directive e.g. "omp end teams distribute parallel do", we split them
- # between arguments and content (which is an additional string appended
- # at the end)
- terms = self.end_string().split()
- # If its < 3 the array slices still work as expected
- arguments = terms[:3]
- content = " ".join(terms[3:])
-
- parent.add(DirectiveGen(parent, *arguments, content=content),
- position=["after", position])
-
- # If there are nested OMPRegions, the post region code should be after
- # the top-level one
- if not self.ancestor(OMPRegionDirective):
- self.gen_post_region_code(parent)
-
def lower_to_language_level(self):
'''
In-place construction of clauses as PSyIR constructs.
@@ -2369,6 +2094,7 @@ def lower_to_language_level(self):
'''
# Calling the super() explicitly to avoid confusion
# with the multiple-inheritance
+ self._lowered_reduction_string = self._reduction_string()
OMPParallelDirective.lower_to_language_level(self)
self.addchild(OMPScheduleClause(self._omp_schedule))
return self
@@ -2385,7 +2111,8 @@ def begin_string(self):
string = f"omp {self._directive_string}"
if self._collapse:
string += f" collapse({self._collapse})"
- string += self._reduction_string()
+ if self._lowered_reduction_string:
+ string += f" {self._lowered_reduction_string}"
return string
def end_string(self):
@@ -2447,31 +2174,6 @@ def end_string(self):
'''
return "omp end target"
- def gen_code(self, parent):
- '''Generate the OpenMP Target Directive and any associated code.
-
- :param parent: the parent Node in the Schedule to which to add our
- content.
- :type parent: sub-class of :py:class:`psyclone.f2pygen.BaseGen`
- '''
- # Check the constraints are correct
- self.validate_global_constraints()
-
- # Generate the code for this Directive
- parent.add(DirectiveGen(parent, "omp", "begin", "target"))
-
- # Generate the code for all of this node's children
- for child in self.dir_body:
- child.gen_code(parent)
-
- # Generate the end code for this node
- parent.add(DirectiveGen(parent, "omp", "end", "target", ""))
-
- # If there are nested OMPRegions, the post region code should be after
- # the top-level one
- if not self.ancestor(OMPRegionDirective):
- self.gen_post_region_code(parent)
-
class OMPLoopDirective(OMPRegionDirective):
''' Class for the !$OMP LOOP directive that specifies that the iterations
@@ -2516,11 +2218,6 @@ def collapse(self):
@collapse.setter
def collapse(self, value):
'''
- TODO #1648: Note that gen_code ignores the collapse clause but the
- generated code is still valid. Since gen_code is going to be removed
- and it is only used for LFRic (which does not support GPU offloading
- that gets improved with the collapse clause) it will not be supported.
-
:param value: optional number of nested loop to collapse into a
single iteration space to parallelise. Defaults to None.
:type value: int or NoneType.
diff --git a/src/psyclone/psyir/nodes/profile_node.py b/src/psyclone/psyir/nodes/profile_node.py
index 9243ebd684..1cf4d90259 100644
--- a/src/psyclone/psyir/nodes/profile_node.py
+++ b/src/psyclone/psyir/nodes/profile_node.py
@@ -38,7 +38,6 @@
''' This module provides support for adding profiling to code
generated by PSyclone. '''
-from __future__ import absolute_import, print_function
from psyclone.psyir.nodes.psy_data_node import PSyDataNode
diff --git a/src/psyclone/psyir/nodes/psy_data_node.py b/src/psyclone/psyir/nodes/psy_data_node.py
index d82b2b0c3c..983c258c35 100644
--- a/src/psyclone/psyir/nodes/psy_data_node.py
+++ b/src/psyclone/psyir/nodes/psy_data_node.py
@@ -50,7 +50,6 @@
from psyclone.configuration import Config
from psyclone.core import Signature
from psyclone.errors import InternalError, GenerationError
-from psyclone.f2pygen import CallGen, TypeDeclGen, UseGen
from psyclone.psyir.nodes.codeblock import CodeBlock
from psyclone.psyir.nodes.container import Container
from psyclone.psyir.nodes.file_container import FileContainer
@@ -160,7 +159,7 @@ def __init__(self, ast=None, children=None, parent=None, options=None):
# query the actual name of a region (e.g. during generation of a driver
# for an extract node). If the user does not define a name, i.e.
# module_name and region_name are empty, a unique name will be
- # computed in gen_code() or lower_to_language_level(). If this name was
+ # computed in lower_to_language_level(). If this name was
# stored in module_name and region_name, and gen() is called again, the
# names would not be computed again, since the code detects already
# defined module and region names. This can then result in duplicated
@@ -169,10 +168,10 @@ def __init__(self, ast=None, children=None, parent=None, options=None):
# another profile region is added, and gen() is called again. The
# second profile region would compute a new name, which then happens
# to be the same as the name computed for the first region in the
- # first gen_code call (which indeed implies that the name of the
+ # first lowering call (which indeed implies that the name of the
# first profile region is different the second time it is computed).
# So in order to guarantee that the computed module and region names
- # are unique when gen_code is called more than once, we
+ # are unique when lowering is called more than once, we
# cannot store a computed name in module_name and region_name.
self._region_identifier = ("", "")
# Name of the region.
@@ -475,20 +474,6 @@ def psy_data_body(self):
f"{[type(child).__name__ for child in self.children]}")
return self.children[0]
- # -------------------------------------------------------------------------
- def _add_call(self, name, parent, arguments=None):
- '''This function adds a call to the specified (type-bound) method of
- self._var_name to the parent.
-
- :param str name: name of the method to call.
- :param parent: parent node into which to insert the calls.
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
- :param arguments: optional arguments for the method call.
- :type arguments: Optional[list[str]]
- '''
- call = CallGen(parent, f"{self._var_name}%{name}", arguments)
- parent.add(call)
-
# -------------------------------------------------------------------------
def _create_unique_names(self, var_list, symbol_table):
'''This function takes a list of (module_name, signature) tuple, and
@@ -512,10 +497,14 @@ def _create_unique_names(self, var_list, symbol_table):
out_list = []
for (module_name, signature) in var_list:
if module_name:
- var_symbol = \
- symbol_table.find_or_create_tag(tag=f"{signature[0]}"
- f"@{module_name}",
- root_name=signature[0])
+ container = symbol_table.find_or_create(
+ module_name, symbol_type=ContainerSymbol)
+ var_symbol = symbol_table.find_or_create_tag(
+ tag=f"{signature[0]}"
+ f"@{module_name}",
+ root_name=signature[0],
+ interface=ImportInterface(container))
+
unique_sig = Signature(var_symbol.name, signature[1:])
else:
# This is a local variable anyway, no need to rename:
@@ -523,198 +512,6 @@ def _create_unique_names(self, var_list, symbol_table):
out_list.append((module_name, signature, unique_sig))
return out_list
- # -------------------------------------------------------------------------
- def gen_code(self, parent, options=None):
- # pylint: disable=arguments-differ, too-many-branches
- # pylint: disable=too-many-statements
- '''Creates the PSyData code before and after the children
- of this node.
-
- TODO #1010: This method and the lower_to_language_level below contain
- duplicated logic, the gen_code method will be deleted when all APIs can
- use the PSyIR backends.
-
- :param parent: the parent of this node in the f2pygen AST.
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
- :param options: a dictionary with options for transformations.
- :type options: Optional[dict[str, Any]]
- :param options["pre_var_list"]: container name and variable name to \
- be supplied before the first child. The container name is \
- supported to be able to handle variables that are imported from \
- a different container (module in Fortran).
- :type options["pre_var_list"]: list[tuple[str, str]]
- :param options["post_var_list"]: container name and variable name to \
- be supplied after the last child. The container name is \
- supported to be able to handle variables that are imported from \
- a different container (module in Fortran).
- :type options["post_var_list"]: list[tuple[str, str]]
- :param str options["pre_var_postfix"]: an optional postfix that will \
- be added to each variable name in the pre_var_list.
- :param str options["post_var_postfix"]: an optional postfix that will \
- be added to each variable name in the post_var_list.
-
- '''
- # Avoid circular dependency
- # pylint: disable=import-outside-toplevel
- from psyclone.psyGen import Kern, InvokeSchedule
- invoke = self.ancestor(InvokeSchedule).invoke
- global_module_name = self._module_name
- if global_module_name is None:
- # The user has not supplied a module (location) name so
- # return the psy-layer module name as this will be unique
- # for each PSyclone algorithm file.
- global_module_name = invoke.invokes.psy.name
-
- region_name = self._region_name
- if region_name is None:
- # The user has not supplied a region name (to identify
- # this particular invoke region). Use the invoke name as a
- # starting point.
- region_name = invoke.name
- kerns = self.walk(Kern)
- if len(kerns) == 1:
- # This PSyData region only has one kernel within it,
- # so append the kernel name.
- region_name += f"-{kerns[0].name}"
- # Add a region index to ensure uniqueness when there are
- # multiple regions in an invoke.
- psy_data_nodes = self.root.walk(PSyDataNode)
- # We can't just use .index on the list because we are searching
- # by identity, not by equality.
- idx = None
- for index, node in enumerate(psy_data_nodes):
- if node is self:
- idx = index
- break
- region_name += f"-r{idx}"
-
- if not options:
- options = {}
-
- # Get the list of variables, and handle name clashes: a now newly
- # imported symbol (from a module that is used directly or indirectly
- # from a kernel) might clash with a local variable. Convert the lists
- # of 2-tuples (module_name, signature) to a list of 3-tuples
- # (module_name, signature, unique_signature):
-
- symbol_table = self.scope.symbol_table
- pre_variable_list = \
- self._create_unique_names(options.get("pre_var_list", []),
- symbol_table)
- post_variable_list = \
- self._create_unique_names(options.get("post_var_list", []),
- symbol_table)
-
- pre_suffix = options.get("pre_var_postfix", "")
- post_suffix = options.get("post_var_postfix", "")
- for module_name, signature, unique_signature in (pre_variable_list +
- post_variable_list):
- if module_name:
- if unique_signature != signature:
- rename = f"{unique_signature[0]}=>{signature[0]}"
- use = UseGen(parent, module_name, only=True,
- funcnames=[rename])
- else:
- use = UseGen(parent, module_name, only=True,
- funcnames=[unique_signature[0]])
- parent.add(use)
-
- # Note that adding a use statement makes sure it is only
- # added once, so we don't need to test this here!
- use = UseGen(parent, self.fortran_module, only=True,
- funcnames=[sym.name for sym in self.imported_symbols])
- parent.add(use)
- # We only set the visibility of this symbol if we are *not* within
- # a Routine.
- set_private = self.ancestor(Routine) is None
- var_decl = TypeDeclGen(parent,
- datatype=self.type_name,
- entity_decls=[self._var_name],
- save=True, target=True, private=set_private)
- parent.add(var_decl)
-
- self._add_call("PreStart", parent,
- [f"\"{global_module_name}\"",
- f"\"{region_name}\"",
- len(pre_variable_list),
- len(post_variable_list)])
- self.set_region_identifier(global_module_name, region_name)
- has_var = pre_variable_list or post_variable_list
-
- # Each variable name can be given a suffix. The reason for
- # this feature is that a library might have to distinguish if
- # a variable is both in the pre- and post-variable list.
- # Consider a NetCDF file that is supposed to store a
- # variable that is read (i.e. it is in the pre-variable
- # list) and written (it is also in the post-variable
- # list). Since a NetCDF file uses the variable name as a key,
- # there must be a way to distinguish these two variables.
- # The application could for example give all variables in
- # the post-variable list a suffix like "_post" to create
- # a different key in the NetCDF file, allowing it to store
- # values of a variable "A" as "A" in the pre-variable list,
- # and store the modified value of "A" later as "A_post".
- if has_var:
- for module_name, sig, unique_sig in pre_variable_list:
- if module_name:
- module_name = f"@{module_name}"
- self._add_call("PreDeclareVariable", parent,
- [f"\"{sig}{module_name}{pre_suffix}\"",
- unique_sig])
- for module_name, sig, unique_sig in post_variable_list:
- if module_name:
- module_name = f"@{module_name}"
- self._add_call("PreDeclareVariable", parent,
- [f"\"{sig}{post_suffix}{module_name}\"",
- unique_sig])
-
- self._add_call("PreEndDeclaration", parent)
-
- for module_name, sig, unique_sig in pre_variable_list:
- if module_name:
- module_name = f"@{module_name}"
- self._add_call("ProvideVariable", parent,
- [f"\"{sig}{module_name}{pre_suffix}\"",
- unique_sig])
-
- self._add_call("PreEnd", parent)
-
- for child in self.psy_data_body:
- child.gen_code(parent)
-
- if has_var:
- # Only add PostStart() if there is at least one variable.
- self._add_call("PostStart", parent)
- for module_name, sig, unique_sig in post_variable_list:
- if module_name:
- module_name = f"@{module_name}"
- self._add_call("ProvideVariable", parent,
- [f"\"{sig}{post_suffix}{module_name}\"",
- unique_sig])
-
- self._add_call("PostEnd", parent)
-
- def fix_gen_code(self, parent):
- '''This function might be called from LFRicLoop.gen_code if a PSyData
- node is inside a loop (typically they are outside of the loop and the
- code creation in LFRIc is still fully handled by gen_code). In this
- case the symbol for the variable is added to the symbol table, but
- nothing adds this symbol to the fparser tree of the parent. So while
- we are still having a mixture of gen_code and PSyir for LFRic
- (TODO #1010), we need to manually declare this variable in the
- fparser tree:
-
- :parent: the parent node in the AST to which the declaration is added.
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
-
- '''
- set_private = self.ancestor(Routine) is None
- var_decl = TypeDeclGen(parent,
- datatype=self.type_name,
- entity_decls=[self._var_name],
- save=True, target=True, private=set_private)
- parent.add(var_decl)
-
def lower_to_language_level(self, options=None):
# pylint: disable=arguments-differ
# pylint: disable=too-many-branches, too-many-statements
@@ -783,9 +580,6 @@ def gen_type_bound_call(typename, methodname, argument_list=None,
return CodeBlock([fp2_node], CodeBlock.Structure.STATEMENT,
annotations=annotations)
- for child in self.children:
- child.lower_to_language_level()
-
routine_schedule = self.ancestor(Routine)
if routine_schedule is None:
raise GenerationError(
@@ -807,6 +601,14 @@ def gen_type_bound_call(typename, methodname, argument_list=None,
if self._region_name:
region_name = self._region_name
else:
+ from psyclone.psyGen import Kern
+ kerns = self.walk(Kern)
+ if len(kerns) == 1:
+ # This PSyData region only has one kernel within it,
+ # so append the kernel name.
+ region_name = f"{kerns[0].name}-"
+ else:
+ region_name = ""
# Create a name for this region by finding where this PSyDataNode
# is in the list of PSyDataNodes in this Invoke. We allow for any
# previously lowered PSyDataNodes by checking for CodeBlocks with
@@ -817,17 +619,19 @@ def gen_type_bound_call(typename, methodname, argument_list=None,
if (isinstance(node, PSyDataNode) or
"psy-data-start" in node.annotations):
region_idx += 1
+ region_name = f"{region_name}r{region_idx}"
# If the routine name is not used as 'module name' (in case of a
# subroutine outside of any modules), add the routine name
# to the region. Otherwise just use the number
if module_name != routine_schedule.name:
- region_name = f"{routine_schedule.name}-r{region_idx}"
- else:
- region_name = f"r{region_idx}"
+ region_name = f"{routine_schedule.name}-{region_name}"
if not options:
options = {}
+ for child in self.children:
+ child.lower_to_language_level()
+
symbol_table = self.scope.symbol_table
pre_variable_list = \
self._create_unique_names(options.get("pre_var_list", []),
@@ -872,7 +676,7 @@ def gen_type_bound_call(typename, methodname, argument_list=None,
module_name = f"@{module_name}"
call = gen_type_bound_call(
self._var_name, "PreDeclareVariable",
- [f"\"{sig}{module_name}{pre_suffix}\"", unique_sig])
+ [f"\"{sig}{pre_suffix}{module_name}\"", unique_sig])
self.parent.children.insert(self.position, call)
for module_name, sig, unique_sig in post_variable_list:
@@ -880,7 +684,7 @@ def gen_type_bound_call(typename, methodname, argument_list=None,
module_name = f"@{module_name}"
call = gen_type_bound_call(
self._var_name, "PreDeclareVariable",
- [f"\"{sig}{module_name}{post_suffix}\"", unique_sig])
+ [f"\"{sig}{post_suffix}{module_name}\"", unique_sig])
self.parent.children.insert(self.position, call)
call = gen_type_bound_call(self._var_name, "PreEndDeclaration")
@@ -891,7 +695,7 @@ def gen_type_bound_call(typename, methodname, argument_list=None,
module_name = f"@{module_name}"
call = gen_type_bound_call(
self._var_name, "ProvideVariable",
- [f"\"{sig}{module_name}{pre_suffix}\"", unique_sig])
+ [f"\"{sig}{pre_suffix}{module_name}\"", unique_sig])
self.parent.children.insert(self.position, call)
call = gen_type_bound_call(self._var_name, "PreEnd")
@@ -911,7 +715,7 @@ def gen_type_bound_call(typename, methodname, argument_list=None,
module_name = f"@{module_name}"
call = gen_type_bound_call(
self._var_name, "ProvideVariable",
- [f"\"{sig}{module_name}{post_suffix}\"", unique_sig])
+ [f"\"{sig}{post_suffix}{module_name}\"", unique_sig])
self.parent.children.insert(self.position, call)
# PSyData end call
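
The region-name logic in the lowering hunk above can be summarised without any PSyIR machinery: an optional kernel-name prefix, a mandatory region index, and an optional routine-name prefix. A simplified sketch using plain strings (the argument names are invented for illustration; the real code walks the PSyIR tree):

    # Simplified sketch of how the region name is assembled during lowering.
    def build_region_name(kernel_names, region_idx, routine_name, module_name):
        # Prefix the kernel name only when exactly one kernel is in the region.
        name = f"{kernel_names[0]}-" if len(kernel_names) == 1 else ""
        # The region index is always appended to guarantee uniqueness.
        name = f"{name}r{region_idx}"
        # Prepend the routine name unless it already serves as the module name.
        if module_name != routine_name:
            name = f"{routine_name}-{name}"
        return name

    print(build_region_name(["testkern_code"], 0, "invoke_0", "psy_mod"))
    # -> invoke_0-testkern_code-r0
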
diff --git a/src/psyclone/psyir/nodes/schedule.py b/src/psyclone/psyir/nodes/schedule.py
index 28dc49774d..04fc0f2a31 100644
--- a/src/psyclone/psyir/nodes/schedule.py
+++ b/src/psyclone/psyir/nodes/schedule.py
@@ -84,17 +84,6 @@ def __str__(self):
result += "End " + self.coloured_name(False)
return result
- def gen_code(self, parent):
- '''
- A Schedule does not have any direct Fortran representation. We just
- call gen_code() for all of its children.
-
- :param parent: node in the f2pygen AST to which to add content.
- :type parent: :py:class:`psyclone.f2pygen.BaseGen`
- '''
- for child in self.children:
- child.gen_code(parent)
-
# For AutoAPI documentation generation
__all__ = ['Schedule']
diff --git a/src/psyclone/psyir/nodes/value_range_check_node.py b/src/psyclone/psyir/nodes/value_range_check_node.py
index ac5becdca1..fcbdab8d11 100644
--- a/src/psyclone/psyir/nodes/value_range_check_node.py
+++ b/src/psyclone/psyir/nodes/value_range_check_node.py
@@ -86,23 +86,6 @@ def _get_var_lists(self):
return {'pre_var_list': read_write_info.read_list,
'post_var_list': read_write_info.write_list}
- def gen_code(self, parent, options=None):
- '''Old style code creation function.
-
- :param parent: f2pygen node to which to add AST nodes.
- :type parent: :py:class:`psyclone.f2pygen.SubroutineGen`
- :param options: a dictionary with options for transformations
- and validation.
- :type options: Optional[Dict[str, Any]]
-
- '''
- local_options = options.copy() if options else {}
-
- var_lists_options = self._get_var_lists()
- local_options.update(var_lists_options)
-
- super().gen_code(parent, local_options)
-
def lower_to_language_level(self):
# pylint: disable=arguments-differ
'''
diff --git a/src/psyclone/psyir/symbols/data_type_symbol.py b/src/psyclone/psyir/symbols/data_type_symbol.py
index 880225ee07..75be282749 100644
--- a/src/psyclone/psyir/symbols/data_type_symbol.py
+++ b/src/psyclone/psyir/symbols/data_type_symbol.py
@@ -37,7 +37,6 @@
''' This module contains the DataTypeSymbol. '''
-from __future__ import absolute_import
from psyclone.psyir.symbols.symbol import Symbol
diff --git a/src/psyclone/psyir/symbols/symbol_table.py b/src/psyclone/psyir/symbols/symbol_table.py
index 9c45b5f3ba..661fb110f6 100644
--- a/src/psyclone/psyir/symbols/symbol_table.py
+++ b/src/psyclone/psyir/symbols/symbol_table.py
@@ -286,7 +286,12 @@ def deep_copy(self):
# Prepare the new tag dict
for tag, symbol in self._tags.items():
- new_st._tags[tag] = new_st.lookup(symbol.name)
+ try:
+ new_st._tags[tag] = new_st.lookup(symbol.name)
+ except KeyError:
+                # TODO #898: if the lookup fails it means that the symbol was
+                # removed from the symbol table but not from the tags dictionary.
+ pass
# Update any references to Symbols within Symbols (initial values,
# precision etc.)
@@ -1278,8 +1283,9 @@ def insert_argument(self, index, argument):
def append_argument(self, argument):
'''
- Append a new argument to the argument list and add it in the symbol
- table itself.
+        Append the given argument to the argument list and add it to the symbol
+        table itself. If the argument is already part of the argument list, this
+        method does nothing.
:param argument: the new argument to add to the list.
:type argument: :py:class:`psyclone.psyir.symbols.DataSymbol`
@@ -1298,9 +1304,12 @@ def append_argument(self, argument):
raise ValueError(
f"DataSymbol '{argument.name}' is not marked as a kernel "
"argument.")
+ if argument in self._argument_list:
+ return
self._argument_list.append(argument)
- self.add(argument)
+ if argument not in self.get_symbols().values():
+ self.add(argument)
try:
self._validate_arg_list(self._argument_list)
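
The updated append_argument above becomes idempotent: appending a symbol that is already in the argument list is a no-op, and the symbol is only added to the table when it is not already present. The pattern in isolation, with plain Python containers standing in for the SymbolTable internals:

    # Standalone sketch of the idempotent-append behaviour; illustrative only.
    class TinyTable:
        def __init__(self):
            self._symbols = set()        # symbols known to the table
            self._argument_list = []     # ordered argument symbols

        def append_argument(self, symbol):
            if symbol in self._argument_list:
                return                   # already an argument: nothing to do
            self._argument_list.append(symbol)
            if symbol not in self._symbols:
                self._symbols.add(symbol)  # add only if not already present

    table = TinyTable()
    table.append_argument("arg1")
    table.append_argument("arg1")        # second call is a no-op
    assert table._argument_list == ["arg1"]
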
diff --git a/src/psyclone/psyir/transformations/loop_fuse_trans.py b/src/psyclone/psyir/transformations/loop_fuse_trans.py
index 6b28b44c52..b8c915242c 100644
--- a/src/psyclone/psyir/transformations/loop_fuse_trans.py
+++ b/src/psyclone/psyir/transformations/loop_fuse_trans.py
@@ -40,9 +40,9 @@
class for all API-specific loop fusion transformations.
'''
-from psyclone.core import SymbolicMaths
+from psyclone.core import SymbolicMaths, VariablesAccessInfo
from psyclone.domain.common.psylayer import PSyLoop
-from psyclone.psyir.nodes import Reference
+from psyclone.psyir.nodes import Reference, Routine
from psyclone.psyir.tools import DependencyTools
from psyclone.psyir.transformations.loop_trans import LoopTrans
from psyclone.psyir.transformations.transformation_error import \
@@ -198,6 +198,27 @@ def apply(self, node1, node2, options=None):
# Add loop contents of node2 to node1
node1.loop_body.children.extend(node2.loop_body.pop_all_children())
+        # We need to remove all leftover references because LFRic is compiled
+ # with '-Werror=unused-variable'. Since we have fused loops, we only
+ # need to look at the symbols appearing in the loop control of the
+ # second loop, as these are the ones that have been detached.
+ routine = node1.ancestor(Routine)
+ if routine:
+ remaining_names = {sig.var_name for sig in
+ VariablesAccessInfo(routine).all_signatures}
+ del_names = {sig.var_name for sig in
+ VariablesAccessInfo(node2.start_expr).all_signatures +
+ VariablesAccessInfo(node2.stop_expr).all_signatures +
+ VariablesAccessInfo(node2.step_expr).all_signatures}
+ for name in del_names:
+ if name not in remaining_names:
+ rsym = node1.scope.symbol_table.lookup(name)
+ if rsym.is_automatic:
+ symtab = rsym.find_symbol_table(node1)
+ # TODO #898: Implement symbol removal
+ # pylint: disable=protected-access
+ symtab._symbols.pop(rsym.name)
+
# For automatic documentation generation
__all__ = ["LoopFuseTrans"]
diff --git a/src/psyclone/psyir/transformations/omp_loop_trans.py b/src/psyclone/psyir/transformations/omp_loop_trans.py
index 8ae32e3f75..b715d81da8 100644
--- a/src/psyclone/psyir/transformations/omp_loop_trans.py
+++ b/src/psyclone/psyir/transformations/omp_loop_trans.py
@@ -37,10 +37,9 @@
from psyclone.configuration import Config
from psyclone.psyir.nodes import (
- Routine, OMPDoDirective, OMPLoopDirective, OMPParallelDoDirective,
+ OMPDoDirective, OMPLoopDirective, OMPParallelDoDirective,
OMPTeamsDistributeParallelDoDirective, OMPTeamsLoopDirective,
OMPScheduleClause)
-from psyclone.psyir.symbols import DataSymbol, INTEGER_TYPE
from psyclone.psyir.transformations.parallel_loop_trans import \
ParallelLoopTrans
@@ -247,23 +246,4 @@ def apply(self, node, options=None):
self._reprod = options.get("reprod",
Config.get().reproducible_reductions)
- if self._reprod:
- # When reprod is True, the variables th_idx and nthreads are
- # expected to be declared in the scope.
- root = node.ancestor(Routine)
-
- symtab = root.symbol_table
- try:
- symtab.lookup_with_tag("omp_thread_index")
- except KeyError:
- symtab.new_symbol(
- "th_idx", tag="omp_thread_index",
- symbol_type=DataSymbol, datatype=INTEGER_TYPE)
- try:
- symtab.lookup_with_tag("omp_num_threads")
- except KeyError:
- symtab.new_symbol(
- "nthreads", tag="omp_num_threads",
- symbol_type=DataSymbol, datatype=INTEGER_TYPE)
-
super().apply(node, options)
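
After the change above, OMPLoopTrans.apply no longer pre-creates the thread-index symbols; it only records whether reproducible reductions were requested, falling back to the configuration default. The option-resolution pattern in isolation (plain Python, not the transformation itself):

    # Sketch of the option handling: an explicit "reprod" entry wins,
    # otherwise the configuration default is used.
    def resolve_reprod(options, config_default):
        options = options or {}
        return options.get("reprod", config_default)

    assert resolve_reprod({"reprod": True}, False) is True
    assert resolve_reprod(None, True) is True
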
diff --git a/src/psyclone/psyir/transformations/omp_taskwait_trans.py b/src/psyclone/psyir/transformations/omp_taskwait_trans.py
index 321389cd76..d590341d48 100644
--- a/src/psyclone/psyir/transformations/omp_taskwait_trans.py
+++ b/src/psyclone/psyir/transformations/omp_taskwait_trans.py
@@ -37,7 +37,6 @@
''' This module provides the OMPTaskwaitTrans transformation that can be
applied to an OMPParallelDirective to satisfy any task-based dependencies
created by OpenMP Taskloops.'''
-from __future__ import absolute_import, print_function
from psyclone.core import VariablesAccessInfo
from psyclone.errors import LazyString, InternalError
@@ -203,14 +202,14 @@ def get_forward_dependence(taskloop, root):
The forward dependency is never a child of taskloop, and must have
abs_position > taskloop.abs_position
- :param taskloop: the taskloop node for which to find the \
- forward_dependence.
+ :param taskloop: the taskloop node for which to find the
+ forward_dependence.
:type taskloop: :py:class:`psyclone.psyir.nodes.OMPTaskloopDirective`
:param root: the tree in which to search for the forward_dependence.
:type root: :py:class:`psyclone.psyir.nodes.OMPParallelDirective`
:returns: the forward_dependence of taskloop.
- :rtype: :py:class:`psyclone.f2pygen.Node`
+ :rtype: :py:class:`psyclone.psyir.nodes.Node`
'''
# Check supplied the correct type for root
diff --git a/src/psyclone/psyir/transformations/parallel_loop_trans.py b/src/psyclone/psyir/transformations/parallel_loop_trans.py
index b73213c82c..ebffee5346 100644
--- a/src/psyclone/psyir/transformations/parallel_loop_trans.py
+++ b/src/psyclone/psyir/transformations/parallel_loop_trans.py
@@ -287,7 +287,7 @@ def apply(self, node, options=None):
end do
!$OMP END DO
- At code-generation time (when gen_code()` is called), this node must be
+ At code-generation time (when lowering is called), this node must be
within (i.e. a child of) a PARALLEL region.
:param node: the supplied node to which we will apply the
diff --git a/src/psyclone/psyir/transformations/read_only_verify_trans.py b/src/psyclone/psyir/transformations/read_only_verify_trans.py
index c249d7393e..f4bf2eb821 100644
--- a/src/psyclone/psyir/transformations/read_only_verify_trans.py
+++ b/src/psyclone/psyir/transformations/read_only_verify_trans.py
@@ -39,7 +39,6 @@
a region of code."
'''
-from __future__ import absolute_import
from psyclone.psyGen import BuiltIn, Kern
from psyclone.psyir.nodes import (Literal, Loop, ReadOnlyVerifyNode, Directive,
Reference, Schedule, OMPParallelDirective,
diff --git a/src/psyclone/tests/core/component_indices_test.py b/src/psyclone/tests/core/component_indices_test.py
index 038623ccde..4f00279ecd 100644
--- a/src/psyclone/tests/core/component_indices_test.py
+++ b/src/psyclone/tests/core/component_indices_test.py
@@ -35,7 +35,6 @@
'''This module tests the ComponentIndices class in psyclone/core.'''
-from __future__ import absolute_import
import pytest
from psyclone.core import ComponentIndices, VariablesAccessInfo
diff --git a/src/psyclone/tests/core/signature_test.py b/src/psyclone/tests/core/signature_test.py
index 9a0e2736e8..734b2e133f 100644
--- a/src/psyclone/tests/core/signature_test.py
+++ b/src/psyclone/tests/core/signature_test.py
@@ -36,7 +36,6 @@
'''This module tests the Signature class.'''
-from __future__ import absolute_import
import pytest
from psyclone.core import ComponentIndices, Signature
diff --git a/src/psyclone/tests/core/variables_access_info_test.py b/src/psyclone/tests/core/variables_access_info_test.py
index 33665f9a54..d1a258703a 100644
--- a/src/psyclone/tests/core/variables_access_info_test.py
+++ b/src/psyclone/tests/core/variables_access_info_test.py
@@ -434,7 +434,7 @@ def test_variables_access_info_domain_loop():
assert str(vai) == (
"a: READ, b: READ, f1_data: READWRITE, f2_data: "
"READWRITE, field_type: NO_DATA_ACCESS, i_def: NO_DATA_ACCESS, "
- "map_w3: READ, mesh_type: NO_DATA_ACCESS, ncell_2d_no_halos: "
+ "map_w3: READ, ncell_2d_no_halos: "
"READ, ndf_w3: READ, nlayers_f1: READ, nlayers_f2: READ, "
"r_def: NO_DATA_ACCESS, undf_w3: READ")
@@ -460,5 +460,6 @@ def test_lfric_access_info():
"NO_DATA_ACCESS, loop0_start: READ, loop0_stop: READ, m1_data: READ, "
"m2_data: READ, map_w1: READ, map_w2: READ, map_w3: READ, ndf_w1: "
"READ, ndf_w2: READ, ndf_w3: READ, nlayers_f1: READ, np_xy_qr: READ, "
- "np_z_qr: READ, r_def: NO_DATA_ACCESS, undf_w1: READ, undf_w2: READ, "
+ "np_z_qr: READ, quadrature_xyoz_type: NO_DATA_ACCESS, "
+ "r_def: NO_DATA_ACCESS, undf_w1: READ, undf_w2: READ, "
"undf_w3: READ, weights_xy_qr: READ, weights_z_qr: READ" == str(vai))
diff --git a/src/psyclone/tests/dependency_test.py b/src/psyclone/tests/dependency_test.py
index 9a5bd179fd..a132da9d54 100644
--- a/src/psyclone/tests/dependency_test.py
+++ b/src/psyclone/tests/dependency_test.py
@@ -228,7 +228,7 @@ def test_nemo_array_range(fortran_reader):
def test_goloop():
''' Check the handling of non-NEMO do loops.
TODO #440: Does not work atm, GOLoops also have start/stop as
- strings, which are even not defined. Only after gen_code() is called will
+ strings, which are even not defined. Only after lowering is called will
they be defined.
'''
@@ -282,12 +282,6 @@ def test_lfric():
psy = PSyFactory("lfric", distributed_memory=False).create(info)
invoke = psy.invokes.get('invoke_0_testkern_type')
schedule = invoke.schedule
- # TODO #1010 In the LFRic API, the loop bounds are created at code-
- # generation time and therefore we cannot look at dependencies until that
- # is under way. Ultimately this will be replaced by a
- # `lower_to_language_level` call.
- # pylint: disable=pointless-statement
- psy.gen
var_accesses = VariablesAccessInfo(schedule)
assert str(var_accesses) == (
"a: READ, cell: READ+WRITE, f1_data: READ+WRITE, f2_data: READ, "
@@ -307,16 +301,14 @@ def test_lfric_kern_cma_args():
"27.access_tests.f90"),
api="lfric")
psy = PSyFactory("lfric", distributed_memory=False).create(info)
- # TODO #1010 In the LFRic API, the loop bounds are created at code-
- # generation time and therefore we cannot look at dependencies until that
- # is under way. Ultimately this will be replaced by a
- # `lower_to_language_level` call.
- # pylint: disable=pointless-statement
- psy.gen
invoke_read = psy.invokes.get('invoke_read')
invoke_write = psy.invokes.get('invoke_write')
- var_accesses_read = VariablesAccessInfo(invoke_read.schedule)
- var_accesses_write = VariablesAccessInfo(invoke_write.schedule)
+ invoke_read.setup_psy_layer_symbols()
+ invoke_write.setup_psy_layer_symbols()
+ var_accesses_read = VariablesAccessInfo(
+ invoke_read.schedule.coded_kernels())
+ var_accesses_write = VariablesAccessInfo(
+ invoke_write.schedule.coded_kernels())
# Check the parameters that will change access type according to read or
# write declaration of the argument:
@@ -424,12 +416,6 @@ def test_lfric_ref_element():
'''
psy, invoke_info = get_invoke("23.4_ref_elem_all_faces_invoke.f90",
"lfric", idx=0)
- # TODO #1010 In the LFRic API, the loop bounds are created at code-
- # generation time and therefore we cannot look at dependencies until that
- # is under way. Ultimately this will be replaced by a
- # `lower_to_language_level` call.
- # pylint: disable=pointless-statement
- psy.gen
var_info = str(VariablesAccessInfo(invoke_info.schedule))
assert "normals_to_faces: READ" in var_info
assert "out_normals_to_faces: READ" in var_info
@@ -442,12 +428,6 @@ def test_lfric_operator():
'''
psy, invoke_info = get_invoke("6.1_eval_invoke.f90", "lfric", idx=0)
- # TODO #1010 In the LFRic API, the loop bounds are created at code-
- # generation time and therefore we cannot look at dependencies until that
- # is under way. Ultimately this will be replaced by a
- # `lower_to_language_level` call.
- # pylint: disable=pointless-statement
- psy.gen
var_info = str(VariablesAccessInfo(invoke_info.schedule))
assert "f0_data: READ+WRITE" in var_info
assert "cmap_data: READ" in var_info
@@ -455,18 +435,13 @@ def test_lfric_operator():
assert "diff_basis_w1_on_w0: READ" in var_info
-def test_lfric_cma():
+def test_lfric_cma(fortran_writer):
'''Test that parameters related to CMA operators are handled
correctly in the variable usage analysis.
'''
- psy, invoke_info = get_invoke("20.0_cma_assembly.f90", "lfric", idx=0)
- # TODO #1010 In the LFRic API, the loop bounds are created at code-
- # generation time and therefore we cannot look at dependencies until that
- # is under way. Ultimately this will be replaced by a
- # `lower_to_language_level` call.
- # pylint: disable=pointless-statement
- psy.gen
+ _, invoke_info = get_invoke("20.0_cma_assembly.f90", "lfric", idx=0)
+ invoke_info.setup_psy_layer_symbols()
var_info = str(VariablesAccessInfo(invoke_info.schedule))
assert "ncell_2d: READ" in var_info
assert "cma_op1_alpha: READ" in var_info
@@ -476,7 +451,7 @@ def test_lfric_cma():
assert "cma_op1_gamma_p: READ" in var_info
assert "cma_op1_cma_matrix: WRITE" in var_info
assert "cma_op1_ncol: READ" in var_info
- assert "cma_op1_nrow: READ," in var_info
+ assert "cma_op1_nrow: READ" in var_info
assert "cbanded_map_adspc1_lma_op1: READ" in var_info
assert "cbanded_map_adspc2_lma_op1: READ" in var_info
assert "lma_op1_local_stencil: READ" in var_info
@@ -489,12 +464,6 @@ def test_lfric_cma2():
'''
psy, invoke_info = get_invoke("20.1_cma_apply.f90", "lfric", idx=0)
- # TODO #1010 In the LFRic API, the loop bounds are created at code-
- # generation time and therefore we cannot look at dependencies until that
- # is under way. Ultimately this will be replaced by a
- # `lower_to_language_level` call.
- # pylint: disable=pointless-statement
- psy.gen
var_info = str(VariablesAccessInfo(invoke_info.schedule))
assert "cma_indirection_map_aspc1_field_a: READ" in var_info
assert "cma_indirection_map_aspc2_field_b: READ" in var_info
@@ -505,12 +474,6 @@ def test_lfric_stencils():
'''
psy, invoke_info = get_invoke("14.4_halo_vector.f90", "lfric", idx=0)
- # TODO #1010 In the LFRic API, the loop bounds are created at code-
- # generation time and therefore we cannot look at dependencies until that
- # is under way. Ultimately this will be replaced by a
- # `lower_to_language_level` call.
- # pylint: disable=pointless-statement
- psy.gen
var_info = str(VariablesAccessInfo(invoke_info.schedule))
assert "f2_stencil_size: READ" in var_info
assert "f2_stencil_dofmap: READ" in var_info
@@ -523,12 +486,6 @@ def test_lfric_various_basis():
'''
psy, invoke_info = get_invoke("10.3_operator_different_spaces.f90",
"lfric", idx=0)
- # TODO #1010 In the LFRic API, the loop bounds are created at code-
- # generation time and therefore we cannot look at dependencies until that
- # is under way. Ultimately this will be replaced by a
- # `lower_to_language_level` call.
- # pylint: disable=pointless-statement
- psy.gen
var_info = str(VariablesAccessInfo(invoke_info.schedule))
assert "basis_w3_qr: READ" in var_info
assert "diff_basis_w0_qr: READ" in var_info
@@ -546,12 +503,6 @@ def test_lfric_field_bc_kernel():
'''
psy, invoke_info = get_invoke("12.2_enforce_bc_kernel.f90",
"lfric", idx=0)
- # TODO #1010 In the LFRic API, the loop bounds are created at code-
- # generation time and therefore we cannot look at dependencies until that
- # is under way. Ultimately this will be replaced by a
- # `lower_to_language_level` call.
- # pylint: disable=pointless-statement
- psy.gen
var_info = str(VariablesAccessInfo(invoke_info.schedule))
assert "boundary_dofs_a: READ" in var_info
@@ -563,12 +514,6 @@ def test_lfric_stencil_xory_vector():
'''
psy, invoke_info = get_invoke("14.4.2_halo_vector_xory.f90",
"lfric", idx=0)
- # TODO #1010 In the LFRic API, the loop bounds are created at code-
- # generation time and therefore we cannot look at dependencies until that
- # is under way. Ultimately this will be replaced by a
- # `lower_to_language_level` call.
- # pylint: disable=pointless-statement
- psy.gen
var_info = str(VariablesAccessInfo(invoke_info.schedule))
assert "f2_direction: READ" in var_info
diff --git a/src/psyclone/tests/domain/common/transformations/alg_trans_test.py b/src/psyclone/tests/domain/common/transformations/alg_trans_test.py
index 9257133d78..8ef459fa54 100644
--- a/src/psyclone/tests/domain/common/transformations/alg_trans_test.py
+++ b/src/psyclone/tests/domain/common/transformations/alg_trans_test.py
@@ -38,7 +38,6 @@
Algorithm PSyIR.
'''
-from __future__ import absolute_import
import pytest
from psyclone.psyir.transformations import TransformationError
diff --git a/src/psyclone/tests/domain/common/transformations/kernel_module_inline_trans_test.py b/src/psyclone/tests/domain/common/transformations/kernel_module_inline_trans_test.py
index d6d36a7fad..37d5e68013 100644
--- a/src/psyclone/tests/domain/common/transformations/kernel_module_inline_trans_test.py
+++ b/src/psyclone/tests/domain/common/transformations/kernel_module_inline_trans_test.py
@@ -442,26 +442,20 @@ def test_module_inline_apply_transformation(tmpdir, fortran_writer):
assert (kern_call.ancestor(Container).symbol_table.
lookup("compute_cv_code").is_modulevar)
- # We should see it in the output of both:
- # - the backend
- code = fortran_writer(schedule.root)
+ # Generate the code
+ code = str(psy.gen)
assert 'subroutine compute_cv_code(i, j, cv, p, v)' in code
- # - the gen_code
- gen = str(psy.gen)
- assert 'SUBROUTINE compute_cv_code(i, j, cv, p, v)' in gen
-
- # And the import has been remove from both
- # check that the associated use no longer exists
- assert 'use compute_cv_mod, only: compute_cv_code' not in code
- assert 'USE compute_cv_mod, ONLY: compute_cv_code' not in gen
+    # And the import has been removed, so check that the associated
+ # use no longer exists
+ assert 'use compute_cv_mod' not in code.lower()
- # Do the gen_code check again because repeating the call resets some
- # aspects and we need to see if the second call still works as expected
+ # Do the check again because repeating the call resets some aspects and we
+ # need to see if the second call still works as expected
gen = str(psy.gen)
- assert 'SUBROUTINE compute_cv_code(i, j, cv, p, v)' in gen
- assert 'USE compute_cv_mod, ONLY: compute_cv_code' not in gen
- assert gen.count("SUBROUTINE compute_cv_code(") == 1
+ assert 'subroutine compute_cv_code(i, j, cv, p, v)' in gen
+ assert 'use compute_cv_mod' not in gen
+ assert gen.count("subroutine compute_cv_code(") == 1
# And it is valid code
assert GOceanBuild(tmpdir).code_compiles(psy)
@@ -477,8 +471,8 @@ def test_module_inline_apply_kernel_in_multiple_invokes(tmpdir):
# By default the kernel is imported once per invoke
gen = str(psy.gen)
- assert gen.count("USE testkern_qr_mod, ONLY: testkern_qr_code") == 2
- assert gen.count("END SUBROUTINE testkern_qr_code") == 0
+ assert gen.count("use testkern_qr_mod, only : testkern_qr_code") == 2
+ assert gen.count("end subroutine testkern_qr_code") == 0
# Module inline kernel in invoke 1
inline_trans = KernelModuleInlineTrans()
@@ -490,8 +484,8 @@ def test_module_inline_apply_kernel_in_multiple_invokes(tmpdir):
# After this, one invoke uses the inlined top-level subroutine
# and the other imports it (shadowing the top-level symbol)
- assert gen.count("USE testkern_qr_mod, ONLY: testkern_qr_code") == 1
- assert gen.count("END SUBROUTINE testkern_qr_code") == 1
+ assert gen.count("use testkern_qr_mod, only : testkern_qr_code") == 1
+ assert gen.count("end subroutine testkern_qr_code") == 1
# Module inline kernel in invoke 2
schedule1 = psy.invokes.invoke_list[1].schedule
@@ -501,8 +495,8 @@ def test_module_inline_apply_kernel_in_multiple_invokes(tmpdir):
gen = str(psy.gen)
# After this, no imports are remaining and both use the same
# top-level implementation
- assert gen.count("USE testkern_qr_mod, ONLY: testkern_qr_code") == 0
- assert gen.count("END SUBROUTINE testkern_qr_code") == 1
+ assert gen.count("use testkern_qr_mod, only : testkern_qr_code") == 0
+ assert gen.count("end subroutine testkern_qr_code") == 1
# And it is valid code
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -519,11 +513,11 @@ def test_module_inline_apply_with_sub_use(tmpdir):
inline_trans.apply(kern_call)
gen = str(psy.gen)
# check that the subroutine has been inlined
- assert 'SUBROUTINE bc_ssh_code(ji, jj, istep, ssha, tmask)' in gen
+ assert 'subroutine bc_ssh_code(ji, jj, istep, ssha, tmask)' in gen
# check that the use within the subroutine exists
- assert 'USE grid_mod' in gen
+ assert 'use grid_mod' in gen
# check that the associated psy use does not exist
- assert 'USE bc_ssh_mod, ONLY: bc_ssh_code' not in gen
+ assert 'use bc_ssh_mod' not in gen
assert GOceanBuild(tmpdir).code_compiles(psy)
@@ -539,11 +533,11 @@ def test_module_inline_apply_same_kernel(tmpdir):
inline_trans.apply(kern_call)
gen = str(psy.gen)
# check that the subroutine has been inlined
- assert 'SUBROUTINE compute_cu_code(' in gen
+ assert 'subroutine compute_cu_code(' in gen
# check that the associated psy "use" does not exist
- assert 'USE compute_cu_mod, ONLY: compute_cu_code' not in gen
+ assert 'use compute_cu_mod' not in gen
# check that the subroutine has only been inlined once
- count = count_lines(gen, "SUBROUTINE compute_cu_code(")
+ count = count_lines(gen, "subroutine compute_cu_code(")
assert count == 1, "Expecting subroutine to be inlined once"
assert GOceanBuild(tmpdir).code_compiles(psy)
@@ -776,9 +770,9 @@ def test_module_inline_lfric(tmpdir, monkeypatch, annexed, dist_mem):
inline_trans.apply(kern_call)
gen = str(psy.gen)
# check that the subroutine has been inlined
- assert 'SUBROUTINE ru_code(' in gen
+ assert 'subroutine ru_code(' in gen
# check that the associated psy "use" does not exist
- assert 'USE ru_kernel_mod, only : ru_code' not in gen
+ assert 'use ru_kernel_mod' not in gen
# And it is valid code
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -797,8 +791,8 @@ def test_module_inline_with_interfaces(tmpdir):
gen = str(psy.gen)
# Both the caller and the callee are in the file and use the specialized
# implementation name.
- assert "CALL mixed_code_64(" in gen
- assert "SUBROUTINE mixed_code_64(" in gen
+ assert "call mixed_code_64(" in gen
+ assert "subroutine mixed_code_64(" in gen
# And it is valid code
assert LFRicBuild(tmpdir).code_compiles(psy)
diff --git a/src/psyclone/tests/domain/constants_test.py b/src/psyclone/tests/domain/constants_test.py
index 76a4cd63b0..802b326d8b 100644
--- a/src/psyclone/tests/domain/constants_test.py
+++ b/src/psyclone/tests/domain/constants_test.py
@@ -39,8 +39,6 @@
'''Tests for class storing API-specific constants.'''
-from __future__ import absolute_import, print_function
-
from psyclone.configuration import Config
from psyclone.domain.lfric import LFRicConstants
diff --git a/src/psyclone/tests/domain/gocean/goloop_test.py b/src/psyclone/tests/domain/gocean/goloop_test.py
index 25336674cd..77f477165c 100644
--- a/src/psyclone/tests/domain/gocean/goloop_test.py
+++ b/src/psyclone/tests/domain/gocean/goloop_test.py
@@ -55,46 +55,17 @@
API = "gocean"
-def test_goloop_no_parent():
- ''' Attempt to generate code for a loop that has no GOInvokeSchedule
- as a parent '''
- # Attempt to create a GOLoop within a generic Schedule
- schedule = Schedule()
- with pytest.raises(GenerationError) as err:
- goloop = GOLoop(loop_type="inner", parent=schedule)
- assert ("GOLoops must always be constructed with a parent which is inside "
- "(directly or indirectly) of a GOInvokeSchedule" in str(err.value))
-
- # Now create it in a GOInvokeSchedule but then detach it
- schedule = GOInvokeSchedule.create('name')
- goloop = GOLoop(loop_type="inner", parent=schedule)
- schedule.children = [goloop]
- # Now remove parent and children
- goloop.detach()
-
- # Try and generate the code for this loop even though it
- # has no parent schedule and no children
- with pytest.raises(GenerationError):
- goloop.gen_code(None)
-
-
-def test_goloop_no_children():
- ''' Attempt to generate code for a loop that has no child
- kernel calls '''
- gosched = GOInvokeSchedule.create('name')
- goloop = GOLoop(parent=gosched, loop_type="outer")
- # Try and generate the code for this loop even though it
- # has no children
- with pytest.raises(GenerationError) as err:
- goloop.gen_code(None)
- assert "Cannot find the GOcean Kernel enclosed by this loop" \
- in str(err.value)
-
-
def test_goloop_create(monkeypatch):
''' Test that the GOLoop create method populates the relevant attributes
and creates the loop children. '''
+ # The parent must be a GOInvokeSchedule
+ with pytest.raises(GenerationError) as err:
+ goloop = GOLoop(loop_type="inner", parent=Schedule())
+ assert ("GOLoops must always be constructed with a parent which is inside"
+ " (directly or indirectly) of a GOInvokeSchedule"
+ in str(err.value))
+
# Monkeypatch the called GOLoops methods as this will be tested separately
monkeypatch.setattr(GOLoop, "lower_bound",
lambda x: Literal("10", INTEGER_TYPE))
diff --git a/src/psyclone/tests/domain/gocean/kernel/go_kernel_arg_test.py b/src/psyclone/tests/domain/gocean/kernel/go_kernel_arg_test.py
index 976aa73976..560474ddd5 100644
--- a/src/psyclone/tests/domain/gocean/kernel/go_kernel_arg_test.py
+++ b/src/psyclone/tests/domain/gocean/kernel/go_kernel_arg_test.py
@@ -97,7 +97,7 @@ def test_gokernelarguments_append():
# And the generated code looks as expected
generated_code = str(psy.gen)
- assert "CALL compute_cu_code(i, j, cu_fld%data, p_fld%data, u_fld%data," \
+ assert "call compute_cu_code(i, j, cu_fld%data, p_fld%data, u_fld%data," \
" var1, var2)" in generated_code
diff --git a/src/psyclone/tests/domain/gocean/transformations/globalstoargs_test.py b/src/psyclone/tests/domain/gocean/transformations/globalstoargs_test.py
index 6e0f25f91f..4dc5a20087 100644
--- a/src/psyclone/tests/domain/gocean/transformations/globalstoargs_test.py
+++ b/src/psyclone/tests/domain/gocean/transformations/globalstoargs_test.py
@@ -180,8 +180,8 @@ def set_to_real(variable):
# Check that the PSy-layer generated code now contains the use statement
# and argument call
generated_code = str(psy.gen)
- assert "USE model_mod, ONLY: rdt" in generated_code
- assert "CALL kernel_with_use_code(i, j, oldu_fld, cu_fld%data, " \
+ assert "use model_mod, only : rdt" in generated_code
+ assert "call kernel_with_use_code(i, j, oldu_fld, cu_fld%data, " \
"cu_fld%grid%tmask, rdt)" in generated_code
assert invoke.schedule.symbol_table.lookup("model_mod")
assert invoke.schedule.symbol_table.lookup("rdt")
@@ -312,19 +312,19 @@ def create_data_symbol(arg):
# The following assert checks that imports from the same module are
# imported, since the kernels are marked as modified, new suffixes are
# given in order to differentiate each of them.
- assert ("USE kernel_with_use_1_mod, ONLY: kernel_with_use_1_code\n"
+ assert ("use kernel_with_use_1_mod, only : kernel_with_use_1_code\n"
in generated_code)
- assert ("USE kernel_with_use2_0_mod, ONLY: kernel_with_use2_0_code\n"
+ assert ("use kernel_with_use2_0_mod, only : kernel_with_use2_0_code\n"
in generated_code)
- assert ("USE kernel_with_use_0_mod, ONLY: kernel_with_use_0_code\n"
+ assert ("use kernel_with_use_0_mod, only : kernel_with_use_0_code\n"
in generated_code)
# Check the kernel calls have the imported symbol passed as last argument
- assert ("CALL kernel_with_use_0_code(i, j, oldu_fld, cu_fld%data, "
+ assert ("call kernel_with_use_0_code(i, j, oldu_fld, cu_fld%data, "
"cu_fld%grid%tmask, rdt, magic)" in generated_code)
- assert ("CALL kernel_with_use_1_code(i, j, oldu_fld, cu_fld%data, "
+ assert ("call kernel_with_use_1_code(i, j, oldu_fld, cu_fld%data, "
"cu_fld%grid%tmask, rdt, magic)" in generated_code)
- assert ("CALL kernel_with_use2_0_code(i, j, oldu_fld, cu_fld%data, "
+ assert ("call kernel_with_use2_0_code(i, j, oldu_fld, cu_fld%data, "
"cu_fld%grid%tmask, cbfr, rdt)" in generated_code)
diff --git a/src/psyclone/tests/domain/gocean/transformations/gocean1p0_transformations_test.py b/src/psyclone/tests/domain/gocean/transformations/gocean1p0_transformations_test.py
index 0740b5c41a..b2fa77ebbb 100644
--- a/src/psyclone/tests/domain/gocean/transformations/gocean1p0_transformations_test.py
+++ b/src/psyclone/tests/domain/gocean/transformations/gocean1p0_transformations_test.py
@@ -47,7 +47,7 @@
from psyclone.gocean1p0 import GOKern
from psyclone.parse import ModuleManager
from psyclone.psyGen import Kern
-from psyclone.psyir.nodes import Loop, Routine
+from psyclone.psyir.nodes import Loop
from psyclone.psyir.transformations import (
LoopFuseTrans, LoopTrans, TransformationError)
from psyclone.transformations import ACCRoutineTrans, \
@@ -486,7 +486,7 @@ def test_omp_region_before_loops_trans(tmpdir):
omp_region_idx = idx
if '!$omp do' in line:
omp_do_idx = idx
- if 'DO j =' in line:
+ if 'do j =' in line:
break
assert omp_region_idx != -1
@@ -523,7 +523,7 @@ def test_omp_region_after_loops_trans(tmpdir):
omp_region_idx = idx
if '!$omp do' in line:
omp_do_idx = idx
- if 'DO j =' in line:
+ if 'do j =' in line:
break
assert omp_region_idx != -1
@@ -982,10 +982,10 @@ def test_module_noinline_default(tmpdir):
dist_mem=False)
gen = str(psy.gen)
# check that the subroutine has not been inlined
- assert 'SUBROUTINE compute_cu_code(i, j, cu, p, u)' not in gen
+ assert 'subroutine compute_cu_code(i, j, cu, p, u)' not in gen
# check that the associated use exists (as this is removed when
# inlining)
- assert 'USE compute_cu_mod, ONLY: compute_cu_code' in gen
+ assert 'use compute_cu_mod, only : compute_cu_code' in gen
assert GOceanBuild(tmpdir).code_compiles(psy)
@@ -1030,7 +1030,7 @@ def test_acc_parallel_trans(tmpdir):
for idx, line in enumerate(code.split('\n')):
if "!$acc parallel default(present)" in line:
acc_idx = idx
- if (do_idx == -1) and "DO j" in line:
+ if (do_idx == -1) and "do j" in line:
do_idx = idx
if "!$acc end parallel" in line:
acc_end_idx = idx
@@ -1060,14 +1060,14 @@ def test_acc_parallel_trans_dm():
accdt.apply(schedule)
code = str(psy.gen)
# Check that the start of the parallel region is in the right place.
- assert (" !$acc parallel default(present)\n"
- " DO j = cu_fld%internal%ystart, cu_fld%internal%ystop, 1\n"
+ assert (" !$acc parallel default(present)\n"
+ " do j = cu_fld%internal%ystart, cu_fld%internal%ystop, 1\n"
in code)
# Check that the end parallel is generated correctly.
- assert (" END DO\n"
- " END DO\n"
- " !$acc end parallel\n\n"
- " END SUBROUTINE invoke_0\n" in code)
+ assert (" enddo\n"
+ " enddo\n"
+ " !$acc end parallel\n\n"
+ " end subroutine invoke_0\n" in code)
def test_acc_incorrect_parallel_trans():
@@ -1148,7 +1148,7 @@ def test_acc_data_copyin(tmpdir):
code = str(psy.gen)
assert (
- " !$acc enter data copyin(cu_fld,cu_fld%data,cv_fld,cv_fld%data,"
+ " !$acc enter data copyin(cu_fld,cu_fld%data,cv_fld,cv_fld%data,"
"p_fld,p_fld%data,u_fld,u_fld%data,unew_fld,unew_fld%data,"
"uold_fld,uold_fld%data,v_fld,v_fld%data)\n" in code)
@@ -1185,7 +1185,7 @@ def test_acc_data_grid_copyin(tmpdir):
for obj in ["u_fld", "cu_fld", "du_fld", "d_fld"]:
assert f"{obj}%data_on_device = .true." in code
# Check that we have no acc_update_device calls
- assert "CALL acc_update_device" not in code
+ assert "call acc_update_device" not in code
assert GOceanBuild(tmpdir).code_compiles(psy)
@@ -1324,30 +1324,25 @@ def test_acc_enter_directive_infrastructure_setup():
# Check that the read_from_device routine has been generated
expected = """\
- SUBROUTINE read_from_device(from, to, startx, starty, nx, ny, blocking)
- USE iso_c_binding, ONLY: c_ptr
- USE kind_params_mod, ONLY: go_wp
- TYPE(c_ptr), intent(in) :: from
- REAL(KIND=go_wp), DIMENSION(:, :), INTENT(INOUT), TARGET :: to
- INTEGER, intent(in) :: startx
- INTEGER, intent(in) :: starty
- INTEGER, intent(in) :: nx
- INTEGER, intent(in) :: ny
- LOGICAL, intent(in) :: blocking
-
- !$acc update host(to)
-
- END SUBROUTINE read_from_device"""
+ subroutine read_from_device(from, to, startx, starty, nx, ny, blocking)
+ use iso_c_binding, only : c_ptr
+ use kind_params_mod, only : go_wp
+ type(c_ptr), intent(in) :: from
+ REAL(KIND = go_wp), DIMENSION(:, :), INTENT(INOUT), TARGET :: to
+ integer, intent(in) :: startx
+ integer, intent(in) :: starty
+ integer, intent(in) :: nx
+ integer, intent(in) :: ny
+ logical, intent(in) :: blocking
+
+ !$acc update host(to)
+
+ end subroutine read_from_device"""
assert expected in gen
- # Check that the routine has been introduced to the tree (with the
- # appropriate tag) and only once (even if there are 3 fields)
- symbol = schedule.symbol_table.lookup_with_tag("openacc_read_func")
- assert symbol.name == "read_from_device"
- assert len(schedule.parent.children) == 2
- assert isinstance(schedule.parent.children[1], Routine)
- assert schedule.parent.children[1].name == symbol.name
- count = count_lines(gen, "SUBROUTINE read_from_device(")
+ # Check that the routine has been introduced to the tree only once
+ # (even if there are 3 fields)
+ count = count_lines(gen, "subroutine read_from_device(")
assert count == 1
# Check that each field data_on_device and read_from_device_f have been
@@ -1410,11 +1405,11 @@ def test_acc_collapse(tmpdir):
accdata.apply(schedule)
gen = str(psy.gen)
- assert (" !$acc parallel default(present)\n"
- " !$acc loop independent collapse(2)\n"
- " DO j = cu_fld%internal%ystart, cu_fld%internal%ystop, 1\n"
- " DO i = cu_fld%internal%xstart, cu_fld%internal%xstop, 1\n"
- " CALL compute_cu_code(i, j, cu_fld%data, p_fld%data, "
+ assert (" !$acc parallel default(present)\n"
+ " !$acc loop independent collapse(2)\n"
+ " do j = cu_fld%internal%ystart, cu_fld%internal%ystop, 1\n"
+ " do i = cu_fld%internal%xstart, cu_fld%internal%xstop, 1\n"
+ " call compute_cu_code(i, j, cu_fld%data, p_fld%data, "
"u_fld%data)\n" in gen)
assert GOceanBuild(tmpdir).code_compiles(psy)
@@ -1434,8 +1429,8 @@ def test_acc_indep(tmpdir):
accdata.apply(schedule)
# Check the generated code
gen = str(psy.gen)
- assert "!$acc loop\n DO j = cu_fld%internal%ystart," in gen
- assert "!$acc loop independent\n DO j = cv_fld%internal%ystart" in gen
+ assert "!$acc loop\n do j = cu_fld%internal%ystart," in gen
+ assert "!$acc loop independent\n do j = cv_fld%internal%ystart" in gen
assert GOceanBuild(tmpdir).code_compiles(psy)
@@ -1454,9 +1449,9 @@ def test_acc_loop_seq():
accdata.apply(schedule)
# Check the generated code
gen = str(psy.gen).lower()
- assert (" !$acc parallel default(present)\n"
- " !$acc loop seq\n"
- " do j = cu_fld%internal%ystart, cu_fld%internal%ystop"
+ assert (" !$acc parallel default(present)\n"
+ " !$acc loop seq\n"
+ " do j = cu_fld%internal%ystart, cu_fld%internal%ystop"
", 1\n" in gen)
diff --git a/src/psyclone/tests/domain/gocean/transformations/gocean_const_loop_bounds_trans_test.py b/src/psyclone/tests/domain/gocean/transformations/gocean_const_loop_bounds_trans_test.py
index 4effbb685e..a23426a61e 100644
--- a/src/psyclone/tests/domain/gocean/transformations/gocean_const_loop_bounds_trans_test.py
+++ b/src/psyclone/tests/domain/gocean/transformations/gocean_const_loop_bounds_trans_test.py
@@ -38,7 +38,6 @@
''' Module containing tests of GOConstLoopBoundsTrans when using the
GOcean 1.0 API '''
-from __future__ import absolute_import
import pytest
from psyclone.errors import InternalError
from psyclone.gocean1p0 import GOLoop
@@ -88,10 +87,10 @@ def test_const_loop_bounds_trans(tmpdir):
# First check that the generated code doesn't use constant loop
# bounds by default.
gen = str(psy.gen)
- assert "DO j = cv_fld%internal%ystart, cv_fld%internal%ystop" in gen
- assert "DO i = cv_fld%internal%xstart, cv_fld%internal%xstop" in gen
- assert "DO j = p_fld%whole%ystart, p_fld%whole%ystop" in gen
- assert "DO i = p_fld%whole%xstart, p_fld%whole%xstop" in gen
+ assert "do j = cv_fld%internal%ystart, cv_fld%internal%ystop" in gen
+ assert "do i = cv_fld%internal%xstart, cv_fld%internal%xstop" in gen
+ assert "do j = p_fld%whole%ystart, p_fld%whole%ystop" in gen
+ assert "do i = p_fld%whole%xstart, p_fld%whole%xstop" in gen
# Next, check the generated code applying the constant loop-bounds
# transformation.
@@ -100,12 +99,12 @@ def test_const_loop_bounds_trans(tmpdir):
schedule = invoke.schedule
cbtrans.apply(schedule)
gen = str(psy.gen)
- assert "INTEGER istop" in gen
- assert "INTEGER jstop" in gen
+ assert "integer :: istop" in gen
+ assert "integer :: jstop" in gen
assert "istop = cv_fld%grid%subdomain%internal%xstop" in gen
assert "jstop = cv_fld%grid%subdomain%internal%ystop" in gen
- assert "DO j = 2, jstop - 1" in gen
- assert "DO i = 2, istop" in gen
+ assert "do j = 2, jstop - 1" in gen
+ assert "do i = 2, istop" in gen
assert GOceanBuild(tmpdir).code_compiles(psy)
diff --git a/src/psyclone/tests/domain/gocean/transformations/gocean_extract_test.py b/src/psyclone/tests/domain/gocean/transformations/gocean_extract_test.py
index 386847e240..970b988ba7 100644
--- a/src/psyclone/tests/domain/gocean/transformations/gocean_extract_test.py
+++ b/src/psyclone/tests/domain/gocean/transformations/gocean_extract_test.py
@@ -224,52 +224,52 @@ def test_single_node_ompparalleldo_gocean1p0():
code = str(psy.gen)
output = """
- CALL extract_psy_data % PreStart("psy_single_invoke_three_kernels", """ \
- """"invoke_0-compute_cv_code-r0", 9, 3)
- CALL extract_psy_data % PreDeclareVariable("cv_fld%internal%xstart", """ \
- """cv_fld % internal % xstart)
- CALL extract_psy_data % PreDeclareVariable("cv_fld%internal%xstop", """ \
- """cv_fld % internal % xstop)
- CALL extract_psy_data % PreDeclareVariable("cv_fld%internal%ystart", """ \
- """cv_fld % internal % ystart)
- CALL extract_psy_data % PreDeclareVariable("cv_fld%internal%ystop", """ \
- """cv_fld % internal % ystop)
- CALL extract_psy_data % PreDeclareVariable("p_fld", p_fld)
- CALL extract_psy_data % PreDeclareVariable("v_fld", v_fld)
- CALL extract_psy_data % PreDeclareVariable("cv_fld", cv_fld)
- CALL extract_psy_data % PreDeclareVariable("i", i)
- CALL extract_psy_data % PreDeclareVariable("j", j)
- CALL extract_psy_data % PreDeclareVariable("cv_fld_post", cv_fld)
- CALL extract_psy_data % PreDeclareVariable("i_post", i)
- CALL extract_psy_data % PreDeclareVariable("j_post", j)
- CALL extract_psy_data % PreEndDeclaration
- CALL extract_psy_data % ProvideVariable("cv_fld%internal%xstart", """ \
- """cv_fld % internal % xstart)
- CALL extract_psy_data % ProvideVariable("cv_fld%internal%xstop", """ \
- """cv_fld % internal % xstop)
- CALL extract_psy_data % ProvideVariable("cv_fld%internal%ystart", """ \
- """cv_fld % internal % ystart)
- CALL extract_psy_data % ProvideVariable("cv_fld%internal%ystop", """ \
- """cv_fld % internal % ystop)
- CALL extract_psy_data % ProvideVariable("p_fld", p_fld)
- CALL extract_psy_data % ProvideVariable("v_fld", v_fld)
- CALL extract_psy_data % ProvideVariable("cv_fld", cv_fld)
- CALL extract_psy_data % ProvideVariable("i", i)
- CALL extract_psy_data % ProvideVariable("j", j)
- CALL extract_psy_data % PreEnd
- !$omp parallel do default(shared), private(i,j), schedule(static)
- DO j = cv_fld%internal%ystart, cv_fld%internal%ystop, 1
- DO i = cv_fld%internal%xstart, cv_fld%internal%xstop, 1
- CALL compute_cv_code(i, j, cv_fld%data, p_fld%data, v_fld%data)
- END DO
- END DO
- !$omp end parallel do
- CALL extract_psy_data % PostStart
- CALL extract_psy_data % ProvideVariable("cv_fld_post", cv_fld)
- CALL extract_psy_data % ProvideVariable("i_post", i)
- CALL extract_psy_data % ProvideVariable("j_post", j)
- CALL extract_psy_data % PostEnd
- """
+ CALL extract_psy_data % PreStart("psy_single_invoke_three_kernels", """ \
+ """"invoke_0-compute_cv_code-r0", 9, 3)
+ CALL extract_psy_data % PreDeclareVariable("cv_fld%internal%xstart", """ \
+ """cv_fld % internal % xstart)
+ CALL extract_psy_data % PreDeclareVariable("cv_fld%internal%xstop", """ \
+ """cv_fld % internal % xstop)
+ CALL extract_psy_data % PreDeclareVariable("cv_fld%internal%ystart", """ \
+ """cv_fld % internal % ystart)
+ CALL extract_psy_data % PreDeclareVariable("cv_fld%internal%ystop", """ \
+ """cv_fld % internal % ystop)
+ CALL extract_psy_data % PreDeclareVariable("p_fld", p_fld)
+ CALL extract_psy_data % PreDeclareVariable("v_fld", v_fld)
+ CALL extract_psy_data % PreDeclareVariable("cv_fld", cv_fld)
+ CALL extract_psy_data % PreDeclareVariable("i", i)
+ CALL extract_psy_data % PreDeclareVariable("j", j)
+ CALL extract_psy_data % PreDeclareVariable("cv_fld_post", cv_fld)
+ CALL extract_psy_data % PreDeclareVariable("i_post", i)
+ CALL extract_psy_data % PreDeclareVariable("j_post", j)
+ CALL extract_psy_data % PreEndDeclaration
+ CALL extract_psy_data % ProvideVariable("cv_fld%internal%xstart", """ \
+ """cv_fld % internal % xstart)
+ CALL extract_psy_data % ProvideVariable("cv_fld%internal%xstop", """ \
+ """cv_fld % internal % xstop)
+ CALL extract_psy_data % ProvideVariable("cv_fld%internal%ystart", """ \
+ """cv_fld % internal % ystart)
+ CALL extract_psy_data % ProvideVariable("cv_fld%internal%ystop", """ \
+ """cv_fld % internal % ystop)
+ CALL extract_psy_data % ProvideVariable("p_fld", p_fld)
+ CALL extract_psy_data % ProvideVariable("v_fld", v_fld)
+ CALL extract_psy_data % ProvideVariable("cv_fld", cv_fld)
+ CALL extract_psy_data % ProvideVariable("i", i)
+ CALL extract_psy_data % ProvideVariable("j", j)
+ CALL extract_psy_data % PreEnd
+ !$omp parallel do default(shared), private(i,j), schedule(static)
+ do j = cv_fld%internal%ystart, cv_fld%internal%ystop, 1
+ do i = cv_fld%internal%xstart, cv_fld%internal%xstop, 1
+ call compute_cv_code(i, j, cv_fld%data, p_fld%data, v_fld%data)
+ enddo
+ enddo
+ !$omp end parallel do
+ CALL extract_psy_data % PostStart
+ CALL extract_psy_data % ProvideVariable("cv_fld_post", cv_fld)
+ CALL extract_psy_data % ProvideVariable("i_post", i)
+ CALL extract_psy_data % ProvideVariable("j_post", j)
+ CALL extract_psy_data % PostEnd
+ """
assert output in code
@@ -302,40 +302,40 @@ def test_single_node_ompparalleldo_gocean1p0_const_loop():
code = str(psy.gen)
output = """
- CALL extract_psy_data % PreStart("psy_single_invoke_three_kernels", """ \
- """"invoke_0-compute_cv_code-r0", 7, 3)
- CALL extract_psy_data % PreDeclareVariable("istop", istop)
- CALL extract_psy_data % PreDeclareVariable("jstop", jstop)
- CALL extract_psy_data % PreDeclareVariable("p_fld", p_fld)
- CALL extract_psy_data % PreDeclareVariable("v_fld", v_fld)
- CALL extract_psy_data % PreDeclareVariable("cv_fld", cv_fld)
- CALL extract_psy_data % PreDeclareVariable("i", i)
- CALL extract_psy_data % PreDeclareVariable("j", j)
- CALL extract_psy_data % PreDeclareVariable("cv_fld_post", cv_fld)
- CALL extract_psy_data % PreDeclareVariable("i_post", i)
- CALL extract_psy_data % PreDeclareVariable("j_post", j)
- CALL extract_psy_data % PreEndDeclaration
- CALL extract_psy_data % ProvideVariable("istop", istop)
- CALL extract_psy_data % ProvideVariable("jstop", jstop)
- CALL extract_psy_data % ProvideVariable("p_fld", p_fld)
- CALL extract_psy_data % ProvideVariable("v_fld", v_fld)
- CALL extract_psy_data % ProvideVariable("cv_fld", cv_fld)
- CALL extract_psy_data % ProvideVariable("i", i)
- CALL extract_psy_data % ProvideVariable("j", j)
- CALL extract_psy_data % PreEnd
- !$omp parallel do default(shared), private(i,j), schedule(static)
- DO j = 2, jstop + 1, 1
- DO i = 2, istop, 1
- CALL compute_cv_code(i, j, cv_fld%data, p_fld%data, v_fld%data)
- END DO
- END DO
- !$omp end parallel do
- CALL extract_psy_data % PostStart
- CALL extract_psy_data % ProvideVariable("cv_fld_post", cv_fld)
- CALL extract_psy_data % ProvideVariable("i_post", i)
- CALL extract_psy_data % ProvideVariable("j_post", j)
- CALL extract_psy_data % PostEnd
- """
+ CALL extract_psy_data % PreStart("psy_single_invoke_three_kernels", """ \
+ """"invoke_0-compute_cv_code-r0", 7, 3)
+ CALL extract_psy_data % PreDeclareVariable("istop", istop)
+ CALL extract_psy_data % PreDeclareVariable("jstop", jstop)
+ CALL extract_psy_data % PreDeclareVariable("p_fld", p_fld)
+ CALL extract_psy_data % PreDeclareVariable("v_fld", v_fld)
+ CALL extract_psy_data % PreDeclareVariable("cv_fld", cv_fld)
+ CALL extract_psy_data % PreDeclareVariable("i", i)
+ CALL extract_psy_data % PreDeclareVariable("j", j)
+ CALL extract_psy_data % PreDeclareVariable("cv_fld_post", cv_fld)
+ CALL extract_psy_data % PreDeclareVariable("i_post", i)
+ CALL extract_psy_data % PreDeclareVariable("j_post", j)
+ CALL extract_psy_data % PreEndDeclaration
+ CALL extract_psy_data % ProvideVariable("istop", istop)
+ CALL extract_psy_data % ProvideVariable("jstop", jstop)
+ CALL extract_psy_data % ProvideVariable("p_fld", p_fld)
+ CALL extract_psy_data % ProvideVariable("v_fld", v_fld)
+ CALL extract_psy_data % ProvideVariable("cv_fld", cv_fld)
+ CALL extract_psy_data % ProvideVariable("i", i)
+ CALL extract_psy_data % ProvideVariable("j", j)
+ CALL extract_psy_data % PreEnd
+ !$omp parallel do default(shared), private(i,j), schedule(static)
+ do j = 2, jstop + 1, 1
+ do i = 2, istop, 1
+ call compute_cv_code(i, j, cv_fld%data, p_fld%data, v_fld%data)
+ enddo
+ enddo
+ !$omp end parallel do
+ CALL extract_psy_data % PostStart
+ CALL extract_psy_data % ProvideVariable("cv_fld_post", cv_fld)
+ CALL extract_psy_data % ProvideVariable("i_post", i)
+ CALL extract_psy_data % ProvideVariable("j_post", j)
+ CALL extract_psy_data % PostEnd
+ """
assert output in code
@@ -371,55 +371,55 @@ def test_node_list_ompparallel_gocean1p0():
code = str(psy.gen)
output = """
- CALL extract_psy_data % PreStart("psy_single_invoke_three_kernels", """ \
- """"invoke_0-r0", 9, 4)
- CALL extract_psy_data % PreDeclareVariable("istop", istop)
- CALL extract_psy_data % PreDeclareVariable("jstop", jstop)
- CALL extract_psy_data % PreDeclareVariable("p_fld", p_fld)
- CALL extract_psy_data % PreDeclareVariable("u_fld", u_fld)
- CALL extract_psy_data % PreDeclareVariable("v_fld", v_fld)
- CALL extract_psy_data % PreDeclareVariable("cu_fld", cu_fld)
- CALL extract_psy_data % PreDeclareVariable("cv_fld", cv_fld)
- CALL extract_psy_data % PreDeclareVariable("i", i)
- CALL extract_psy_data % PreDeclareVariable("j", j)
- CALL extract_psy_data % PreDeclareVariable("cu_fld_post", cu_fld)
- CALL extract_psy_data % PreDeclareVariable("cv_fld_post", cv_fld)
- CALL extract_psy_data % PreDeclareVariable("i_post", i)
- CALL extract_psy_data % PreDeclareVariable("j_post", j)
- CALL extract_psy_data % PreEndDeclaration
- CALL extract_psy_data % ProvideVariable("istop", istop)
- CALL extract_psy_data % ProvideVariable("jstop", jstop)
- CALL extract_psy_data % ProvideVariable("p_fld", p_fld)
- CALL extract_psy_data % ProvideVariable("u_fld", u_fld)
- CALL extract_psy_data % ProvideVariable("v_fld", v_fld)
- CALL extract_psy_data % ProvideVariable("cu_fld", cu_fld)
- CALL extract_psy_data % ProvideVariable("cv_fld", cv_fld)
- CALL extract_psy_data % ProvideVariable("i", i)
- CALL extract_psy_data % ProvideVariable("j", j)
- CALL extract_psy_data % PreEnd
- !$omp parallel default(shared), private(i,j)
- !$omp do schedule(static)
- DO j = 2, jstop, 1
- DO i = 2, istop + 1, 1
- CALL compute_cu_code(i, j, cu_fld%data, p_fld%data, u_fld%data)
- END DO
- END DO
- !$omp end do
- !$omp do schedule(static)
- DO j = 2, jstop + 1, 1
- DO i = 2, istop, 1
- CALL compute_cv_code(i, j, cv_fld%data, p_fld%data, v_fld%data)
- END DO
- END DO
- !$omp end do
- !$omp end parallel
- CALL extract_psy_data % PostStart
- CALL extract_psy_data % ProvideVariable("cu_fld_post", cu_fld)
- CALL extract_psy_data % ProvideVariable("cv_fld_post", cv_fld)
- CALL extract_psy_data % ProvideVariable("i_post", i)
- CALL extract_psy_data % ProvideVariable("j_post", j)
- CALL extract_psy_data % PostEnd
- """
+ CALL extract_psy_data % PreStart("psy_single_invoke_three_kernels", """ \
+ """"invoke_0-r0", 9, 4)
+ CALL extract_psy_data % PreDeclareVariable("istop", istop)
+ CALL extract_psy_data % PreDeclareVariable("jstop", jstop)
+ CALL extract_psy_data % PreDeclareVariable("p_fld", p_fld)
+ CALL extract_psy_data % PreDeclareVariable("u_fld", u_fld)
+ CALL extract_psy_data % PreDeclareVariable("v_fld", v_fld)
+ CALL extract_psy_data % PreDeclareVariable("cu_fld", cu_fld)
+ CALL extract_psy_data % PreDeclareVariable("cv_fld", cv_fld)
+ CALL extract_psy_data % PreDeclareVariable("i", i)
+ CALL extract_psy_data % PreDeclareVariable("j", j)
+ CALL extract_psy_data % PreDeclareVariable("cu_fld_post", cu_fld)
+ CALL extract_psy_data % PreDeclareVariable("cv_fld_post", cv_fld)
+ CALL extract_psy_data % PreDeclareVariable("i_post", i)
+ CALL extract_psy_data % PreDeclareVariable("j_post", j)
+ CALL extract_psy_data % PreEndDeclaration
+ CALL extract_psy_data % ProvideVariable("istop", istop)
+ CALL extract_psy_data % ProvideVariable("jstop", jstop)
+ CALL extract_psy_data % ProvideVariable("p_fld", p_fld)
+ CALL extract_psy_data % ProvideVariable("u_fld", u_fld)
+ CALL extract_psy_data % ProvideVariable("v_fld", v_fld)
+ CALL extract_psy_data % ProvideVariable("cu_fld", cu_fld)
+ CALL extract_psy_data % ProvideVariable("cv_fld", cv_fld)
+ CALL extract_psy_data % ProvideVariable("i", i)
+ CALL extract_psy_data % ProvideVariable("j", j)
+ CALL extract_psy_data % PreEnd
+ !$omp parallel default(shared), private(i,j)
+ !$omp do schedule(static)
+ do j = 2, jstop, 1
+ do i = 2, istop + 1, 1
+ call compute_cu_code(i, j, cu_fld%data, p_fld%data, u_fld%data)
+ enddo
+ enddo
+ !$omp end do
+ !$omp do schedule(static)
+ do j = 2, jstop + 1, 1
+ do i = 2, istop, 1
+ call compute_cv_code(i, j, cv_fld%data, p_fld%data, v_fld%data)
+ enddo
+ enddo
+ !$omp end do
+ !$omp end parallel
+ CALL extract_psy_data % PostStart
+ CALL extract_psy_data % ProvideVariable("cu_fld_post", cu_fld)
+ CALL extract_psy_data % ProvideVariable("cv_fld_post", cv_fld)
+ CALL extract_psy_data % ProvideVariable("i_post", i)
+ CALL extract_psy_data % ProvideVariable("j_post", j)
+ CALL extract_psy_data % PostEnd
+ """
assert output in code
diff --git a/src/psyclone/tests/domain/gocean/transformations/gocean_opencl_trans_test.py b/src/psyclone/tests/domain/gocean/transformations/gocean_opencl_trans_test.py
index da37206aa5..95d5cb044c 100644
--- a/src/psyclone/tests/domain/gocean/transformations/gocean_opencl_trans_test.py
+++ b/src/psyclone/tests/domain/gocean/transformations/gocean_opencl_trans_test.py
@@ -88,7 +88,7 @@ def test_opencl_compiler_works(kernel_outputdir):
Compile.skip_if_opencl_compilation_disabled()
example_ocl_code = '''
program hello
- USE fortcl
+ use fortcl
write (*,*) "Hello"
end program hello
'''
@@ -146,7 +146,7 @@ def test_ocl_apply(kernel_outputdir):
ocl.apply(schedule)
gen = str(psy.gen)
- assert "USE clfortran" in gen
+ assert "use clfortran" in gen
    # Check that the new kernel files have been generated
kernel_files = os.listdir(str(kernel_outputdir))
@@ -258,12 +258,14 @@ def test_invoke_opencl_initialisation(kernel_outputdir, fortran_writer):
call initialise_device_buffer(cu_fld)
call initialise_device_buffer(p_fld)
call initialise_device_buffer(u_fld)
+
! do a set_args now so subsequent writes place the data appropriately
cu_fld_cl_mem = transfer(cu_fld%device_ptr, cu_fld_cl_mem)
p_fld_cl_mem = transfer(p_fld%device_ptr, p_fld_cl_mem)
u_fld_cl_mem = transfer(u_fld%device_ptr, u_fld_cl_mem)
call compute_cu_code_set_args(kernel_compute_cu_code, cu_fld_cl_mem, \
p_fld_cl_mem, u_fld_cl_mem, xstart - 1, xstop - 1, ystart - 1, ystop - 1)
+
! write data to the device'''
assert expected in generated_code
@@ -306,19 +308,19 @@ def test_invoke_opencl_initialisation_grid():
# Check that device grid initialisation routine is generated
expected = '''
- subroutine initialise_grid_device_buffers(field)
- use fortcl, only: create_ronly_buffer
- use iso_c_binding, only: c_size_t
- use field_mod
- type(r2d_field), intent(inout), target :: field
- integer(kind=c_size_t) size_in_bytes
-
- if (.not.c_associated(field%grid%tmask_device)) then
- size_in_bytes = int(field%grid%nx * field%grid%ny, 8) * \
+ subroutine initialise_grid_device_buffers(field)
+ use fortcl, only : create_ronly_buffer
+ use iso_c_binding, only : c_size_t
+ use field_mod
+ type(r2d_field), intent(inout), target :: field
+ integer(kind=c_size_t) :: size_in_bytes
+
+ if (.not.c_associated(field%grid%tmask_device)) then
+ size_in_bytes = int(field%grid%nx * field%grid%ny, 8) * \
c_sizeof(field%grid%tmask(1,1))
- field%grid%tmask_device = transfer(create_ronly_buffer(size_in_bytes),\
+ field%grid%tmask_device = transfer(create_ronly_buffer(size_in_bytes),\
field%grid%tmask_device)
- size_in_bytes = int(field%grid%nx * field%grid%ny, 8) * \
+ size_in_bytes = int(field%grid%nx * field%grid%ny, 8) * \
c_sizeof(field%grid%'''
assert expected in generated_code
@@ -330,71 +332,74 @@ def test_invoke_opencl_initialisation_grid():
# Check that device grid write routine is generated
expected = '''
- subroutine write_grid_buffers(field)
- use fortcl, only: get_cmd_queues
- use iso_c_binding, only: c_intptr_t, c_size_t, c_sizeof
- use clfortran
- use ocl_utils_mod, only: check_status
- type(r2d_field), intent(inout), target :: field
- integer(kind=c_size_t) size_in_bytes
- integer(kind=c_intptr_t), pointer :: cmd_queues(:)
- integer(kind=c_intptr_t) cl_mem
- integer ierr
+ subroutine write_grid_buffers(field)
+ use fortcl, only : get_cmd_queues
+ use iso_c_binding, only : c_intptr_t, c_size_t, c_sizeof
+ use clfortran
+ use ocl_utils_mod, only : check_status
+ type(r2d_field), intent(inout), target :: field
+ integer(kind=c_size_t) :: size_in_bytes
+ integer(kind = c_intptr_t), pointer :: cmd_queues(:)
+ integer(kind=c_intptr_t) :: cl_mem
+ integer :: ierr
- cmd_queues => get_cmd_queues()
- size_in_bytes = int(field%grid%nx * field%grid%ny, 8) * \
+ cmd_queues => get_cmd_queues()
+ size_in_bytes = int(field%grid%nx * field%grid%ny, 8) * \
c_sizeof(field%grid%tmask(1,1))
- cl_mem = transfer(field%grid%tmask_device, cl_mem)
- ierr = clenqueuewritebuffer(cmd_queues(1),cl_mem,cl_true,0_8,\
+ cl_mem = transfer(field%grid%tmask_device, cl_mem)
+ ierr = clenqueuewritebuffer(cmd_queues(1),cl_mem,cl_true,0_8,\
size_in_bytes,c_loc(field%grid%tmask),0,c_null_ptr,c_null_ptr)
- call check_status('clenqueuewritebuffer tmask', ierr)
- size_in_bytes = int(field%grid%nx * field%grid%ny, 8) * \
+ call check_status('clenqueuewritebuffer tmask', ierr)
+ size_in_bytes = int(field%grid%nx * field%grid%ny, 8) * \
c_sizeof(field%grid%area_t(1,1))'''
assert expected in generated_code
for grid_property in check_properties:
- code = (f" cl_mem = transfer(field%grid%{grid_property}_device, "
+ code = (f" cl_mem = transfer(field%grid%{grid_property}_device, "
f"cl_mem)\n"
- f" ierr = clenqueuewritebuffer(cmd_queues(1),cl_mem,"
+ f" ierr = clenqueuewritebuffer(cmd_queues(1),cl_mem,"
f"cl_true,0_8,size_in_bytes,c_loc(field%grid%{grid_property}),"
f"0,c_null_ptr,c_null_ptr)\n"
- f" call check_status('clenqueuewritebuffer "
+ f" call check_status('clenqueuewritebuffer "
f"{grid_property}_device', ierr)\n")
assert code in generated_code
# Check that during the first time set-up the previous routines are called
# for a kernel which contains a grid property access.
expected = '''
- xstart = out_fld%internal%xstart
- xstop = out_fld%internal%xstop
- ystart = out_fld%internal%ystart
- ystop = out_fld%internal%ystop
- ! initialise opencl runtime, kernels and buffers
- if (first_time) then
- call psy_init
- cmd_queues => get_cmd_queues()
- kernel_compute_kernel_code = get_kernel_by_name('compute_kernel_code')
- call initialise_device_buffer(out_fld)
- call initialise_device_buffer(in_out_fld)
- call initialise_device_buffer(in_fld)
- call initialise_device_buffer(dx)
- call initialise_grid_device_buffers(in_fld)
- ! do a set_args now so subsequent writes place the data appropriately
- out_fld_cl_mem = transfer(out_fld%device_ptr, out_fld_cl_mem)
- in_out_fld_cl_mem = transfer(in_out_fld%device_ptr, in_out_fld_cl_mem)
- in_fld_cl_mem = transfer(in_fld%device_ptr, in_fld_cl_mem)
- dx_cl_mem = transfer(dx%device_ptr, dx_cl_mem)
- gphiu_cl_mem = transfer(in_fld%grid%gphiu_device, gphiu_cl_mem)
- call compute_kernel_code_set_args(kernel_compute_kernel_code, \
+ xstart = out_fld%internal%xstart
+ xstop = out_fld%internal%xstop
+ ystart = out_fld%internal%ystart
+ ystop = out_fld%internal%ystop
+
+ ! initialise opencl runtime, kernels and buffers
+ if (first_time) then
+ call psy_init()
+ cmd_queues => get_cmd_queues()
+ kernel_compute_kernel_code = get_kernel_by_name('compute_kernel_code')
+ call initialise_device_buffer(out_fld)
+ call initialise_device_buffer(in_out_fld)
+ call initialise_device_buffer(in_fld)
+ call initialise_device_buffer(dx)
+ call initialise_grid_device_buffers(in_fld)
+
+ ! do a set_args now so subsequent writes place the data appropriately
+ out_fld_cl_mem = transfer(out_fld%device_ptr, out_fld_cl_mem)
+ in_out_fld_cl_mem = transfer(in_out_fld%device_ptr, in_out_fld_cl_mem)
+ in_fld_cl_mem = transfer(in_fld%device_ptr, in_fld_cl_mem)
+ dx_cl_mem = transfer(dx%device_ptr, dx_cl_mem)
+ gphiu_cl_mem = transfer(in_fld%grid%gphiu_device, gphiu_cl_mem)
+ call compute_kernel_code_set_args(kernel_compute_kernel_code, \
out_fld_cl_mem, in_out_fld_cl_mem, in_fld_cl_mem, dx_cl_mem, \
in_fld%grid%dx, gphiu_cl_mem, xstart - 1, xstop - 1, ystart - 1, \
ystop - 1)
- ! write data to the device'''
+
+ ! write data to the device'''
assert expected in generated_code
    # The write_to_device() calls can appear in any order in the following 5 lines
lines = generated_code.split('\n')
- idx = lines.index(' ! write data to the device')
+ idx = lines.index(' ! write data to the device')
candidates = '\n'.join(lines[idx+1:idx+6])
assert "call out_fld%write_to_device" in candidates
assert "call in_out_fld%write_to_device" in candidates
@@ -423,120 +428,120 @@ def test_opencl_routines_initialisation(kernel_outputdir):
# Check that the read_from_device routine has been generated
expected = '''\
- subroutine read_from_device(from, to, startx, starty, nx, ny, blocking)
- use iso_c_binding, only: c_intptr_t, c_ptr, c_size_t, c_sizeof
- use ocl_utils_mod, only: check_status
- use kind_params_mod, only: go_wp
- use clfortran
- use fortcl, only: get_cmd_queues
- type(c_ptr), intent(in) :: from
- real(kind=go_wp), intent(inout), dimension(:, :), target :: to
- integer, intent(in) :: startx
- integer, intent(in) :: starty
- integer, intent(in) :: nx
- integer, intent(in) :: ny
- logical, intent(in) :: blocking
- integer(kind=c_size_t) size_in_bytes
- integer(kind=c_size_t) offset_in_bytes
- integer(kind=c_intptr_t) cl_mem
- integer(kind=c_intptr_t), pointer :: cmd_queues(:)
- integer ierr
- integer i
-
- cl_mem = transfer(from, cl_mem)
- cmd_queues => get_cmd_queues()
- if (nx < size(to, 1) / 2) then
- do i = starty, starty + ny, 1
- size_in_bytes = int(nx, 8) * c_sizeof(to(1,1))
- offset_in_bytes = int(size(to, 1) * (i - 1) + \
+ subroutine read_from_device(from, to, startx, starty, nx, ny, blocking)
+ use iso_c_binding, only : c_intptr_t, c_ptr, c_size_t, c_sizeof
+ use ocl_utils_mod, only : check_status
+ use kind_params_mod, only : go_wp
+ use clfortran
+ use fortcl, only : get_cmd_queues
+ type(c_ptr), intent(in) :: from
+ real(kind = go_wp), intent(inout), dimension(:, :), target :: to
+ integer, intent(in) :: startx
+ integer, intent(in) :: starty
+ integer, intent(in) :: nx
+ integer, intent(in) :: ny
+ logical, intent(in) :: blocking
+ integer(kind=c_size_t) :: size_in_bytes
+ integer(kind=c_size_t) :: offset_in_bytes
+ integer(kind=c_intptr_t) :: cl_mem
+ integer(kind = c_intptr_t), pointer :: cmd_queues(:)
+ integer :: ierr
+ integer :: i
+
+ cl_mem = transfer(from, cl_mem)
+ cmd_queues => get_cmd_queues()
+ if (nx < size(to, 1) / 2) then
+ do i = starty, starty + ny, 1
+ size_in_bytes = int(nx, 8) * c_sizeof(to(1,1))
+ offset_in_bytes = int(size(to, 1) * (i - 1) + \
(startx - 1)) * c_sizeof(to(1,1))
- ierr = clenqueuereadbuffer(cmd_queues(1),cl_mem,cl_false,\
+ ierr = clenqueuereadbuffer(cmd_queues(1),cl_mem,cl_false,\
offset_in_bytes,size_in_bytes,c_loc(to(startx,i)),0,c_null_ptr,c_null_ptr)
- call check_status('clenqueuereadbuffer', ierr)
- end do
- if (blocking) then
- call check_status('clfinish on read', clfinish(cmd_queues(1)))
- end if
- else
- size_in_bytes = int(size(to, 1) * ny, 8) * c_sizeof(to(1,1))
- offset_in_bytes = int(size(to, 1) * (starty - 1), 8) * \
-c_sizeof(to(1,1))
- ierr = clenqueuereadbuffer(cmd_queues(1),cl_mem,cl_true,\
-offset_in_bytes,size_in_bytes,c_loc(to(1,starty)),0,c_null_ptr,c_null_ptr)
call check_status('clenqueuereadbuffer', ierr)
+ enddo
+ if (blocking) then
+ call check_status('clfinish on read', clfinish(cmd_queues(1)))
end if
+ else
+ size_in_bytes = int(size(to, 1) * ny, 8) * c_sizeof(to(1,1))
+ offset_in_bytes = int(size(to, 1) * (starty - 1), 8) * \
+c_sizeof(to(1,1))
+ ierr = clenqueuereadbuffer(cmd_queues(1),cl_mem,cl_true,\
+offset_in_bytes,size_in_bytes,c_loc(to(1,starty)),0,c_null_ptr,c_null_ptr)
+ call check_status('clenqueuereadbuffer', ierr)
+ end if
- end subroutine read_from_device'''
+ end subroutine read_from_device'''
assert expected in generated_code
# Check that the write_to_device routine has been generated
expected = '''\
- subroutine write_to_device(from, to, startx, starty, nx, ny, blocking)
- use iso_c_binding, only: c_intptr_t, c_ptr, c_size_t, c_sizeof
- use ocl_utils_mod, only: check_status
- use kind_params_mod, only: go_wp
- use clfortran
- use fortcl, only: get_cmd_queues
- real(kind=go_wp), intent(in), dimension(:, :), target :: from
- type(c_ptr), intent(in) :: to
- integer, intent(in) :: startx
- integer, intent(in) :: starty
- integer, intent(in) :: nx
- integer, intent(in) :: ny
- logical, intent(in) :: blocking
- integer(kind=c_intptr_t) cl_mem
- integer(kind=c_size_t) size_in_bytes
- integer(kind=c_size_t) offset_in_bytes
- integer(kind=c_intptr_t), pointer :: cmd_queues(:)
- integer ierr
- integer i
-
- cl_mem = transfer(to, cl_mem)
- cmd_queues => get_cmd_queues()
- if (nx < size(from, 1) / 2) then
- do i = starty, starty + ny, 1
- size_in_bytes = int(nx, 8) * c_sizeof(from(1,1))
- offset_in_bytes = int(size(from, 1) * (i - 1) + (startx - 1)) * \
+ subroutine write_to_device(from, to, startx, starty, nx, ny, blocking)
+ use iso_c_binding, only : c_intptr_t, c_ptr, c_size_t, c_sizeof
+ use ocl_utils_mod, only : check_status
+ use kind_params_mod, only : go_wp
+ use clfortran
+ use fortcl, only : get_cmd_queues
+ real(kind = go_wp), intent(in), dimension(:, :), target :: from
+ type(c_ptr), intent(in) :: to
+ integer, intent(in) :: startx
+ integer, intent(in) :: starty
+ integer, intent(in) :: nx
+ integer, intent(in) :: ny
+ logical, intent(in) :: blocking
+ integer(kind=c_intptr_t) :: cl_mem
+ integer(kind=c_size_t) :: size_in_bytes
+ integer(kind=c_size_t) :: offset_in_bytes
+ integer(kind = c_intptr_t), pointer :: cmd_queues(:)
+ integer :: ierr
+ integer :: i
+
+ cl_mem = transfer(to, cl_mem)
+ cmd_queues => get_cmd_queues()
+ if (nx < size(from, 1) / 2) then
+ do i = starty, starty + ny, 1
+ size_in_bytes = int(nx, 8) * c_sizeof(from(1,1))
+ offset_in_bytes = int(size(from, 1) * (i - 1) + (startx - 1)) * \
c_sizeof(from(1,1))
- ierr = clenqueuewritebuffer(cmd_queues(1),cl_mem,cl_false,\
+ ierr = clenqueuewritebuffer(cmd_queues(1),cl_mem,cl_false,\
offset_in_bytes,size_in_bytes,c_loc(from(startx,i)),0,c_null_ptr,c_null_ptr)
- call check_status('clenqueuewritebuffer', ierr)
- end do
- if (blocking) then
- call check_status('clfinish on write', clfinish(cmd_queues(1)))
- end if
- else
- size_in_bytes = int(size(from, 1) * ny, 8) * c_sizeof(from(1,1))
- offset_in_bytes = int(size(from, 1) * (starty - 1)) * \
-c_sizeof(from(1,1))
- ierr = clenqueuewritebuffer(cmd_queues(1),cl_mem,cl_true,\
-offset_in_bytes,size_in_bytes,c_loc(from(1,starty)),0,c_null_ptr,c_null_ptr)
call check_status('clenqueuewritebuffer', ierr)
+ enddo
+ if (blocking) then
+ call check_status('clfinish on write', clfinish(cmd_queues(1)))
end if
+ else
+ size_in_bytes = int(size(from, 1) * ny, 8) * c_sizeof(from(1,1))
+ offset_in_bytes = int(size(from, 1) * (starty - 1)) * \
+c_sizeof(from(1,1))
+ ierr = clenqueuewritebuffer(cmd_queues(1),cl_mem,cl_true,\
+offset_in_bytes,size_in_bytes,c_loc(from(1,starty)),0,c_null_ptr,c_null_ptr)
+ call check_status('clenqueuewritebuffer', ierr)
+ end if
- end subroutine write_to_device'''
+ end subroutine write_to_device'''
assert expected in generated_code
# Check that the device buffer initialisation routine has been generated
expected = '''\
- subroutine initialise_device_buffer(field)
- use fortcl, only: create_rw_buffer
- use iso_c_binding, only: c_size_t
- use field_mod
- type(r2d_field), intent(inout), target :: field
- integer(kind=c_size_t) size_in_bytes
-
- if (.not.field%data_on_device) then
- size_in_bytes = int(field%grid%nx * field%grid%ny, 8) * \
+ subroutine initialise_device_buffer(field)
+ use fortcl, only : create_rw_buffer
+ use iso_c_binding, only : c_size_t
+ use field_mod
+ type(r2d_field), intent(inout), target :: field
+ integer(kind=c_size_t) :: size_in_bytes
+
+ if (.not.field%data_on_device) then
+ size_in_bytes = int(field%grid%nx * field%grid%ny, 8) * \
c_sizeof(field%data(1,1))
- field%device_ptr = transfer(create_rw_buffer(size_in_bytes), \
+ field%device_ptr = transfer(create_rw_buffer(size_in_bytes), \
field%device_ptr)
- field%data_on_device = .true.
- field%read_from_device_f => read_from_device
- field%write_to_device_f => write_to_device
- end if
+ field%data_on_device = .true.
+ field%read_from_device_f => read_from_device
+ field%write_to_device_f => write_to_device
+ end if
- end subroutine initialise_device_buffer'''
+ end subroutine initialise_device_buffer'''
assert expected in generated_code
assert GOceanOpenCLBuild(kernel_outputdir).code_compiles(psy)
@@ -556,20 +561,20 @@ def test_psy_init_defaults(kernel_outputdir):
otrans.apply(sched)
generated_code = str(psy.gen)
expected = '''
- subroutine psy_init()
- use fortcl, only: add_kernels, ocl_env_init
- character(len=30) kernel_names(1)
- integer, save :: ocl_device_num = 1
- logical, save :: initialised = .false.
-
- if (.not.initialised) then
- initialised = .true.
- call ocl_env_init(1, ocl_device_num, .false., .false.)
- kernel_names(1) = 'compute_cu_code'
- call add_kernels(1, kernel_names)
- end if
-
- end subroutine psy_init'''
+ subroutine psy_init()
+ use fortcl, only : add_kernels, ocl_env_init
+ character(len = 30) :: kernel_names(1)
+ integer, save :: ocl_device_num = 1
+ logical, save :: initialised = .false.
+
+ if (.not.initialised) then
+ initialised = .true.
+ call ocl_env_init(1, ocl_device_num, .false., .false.)
+ kernel_names(1) = 'compute_cu_code'
+ call add_kernels(1, kernel_names)
+ end if
+
+ end subroutine psy_init'''
assert expected in generated_code.lower()
assert GOceanOpenCLBuild(kernel_outputdir).code_compiles(psy)
@@ -595,7 +600,7 @@ def test_psy_init_multiple_kernels(kernel_outputdir):
generated_code = str(psy.gen)
# Check that the kernel_names has enough space for all kernels
- assert "CHARACTER(LEN=30) kernel_names(2)" in generated_code
+ assert "CHARACTER(LEN = 30) :: kernel_names(2)" in generated_code
    # The order doesn't matter as long as the two kernels are loaded
assert ("kernel_names(1) = 'kernel_with_use_code'" in generated_code or
@@ -606,7 +611,7 @@ def test_psy_init_multiple_kernels(kernel_outputdir):
assert "kernel_names(3)" not in generated_code
# Check that add_kernels is provided with the total number of kernels
- assert "CALL add_kernels(2, kernel_names)" in generated_code
+ assert "call add_kernels(2, kernel_names)" in generated_code
assert GOceanOpenCLBuild(kernel_outputdir).code_compiles(
psy, dependencies=["model_mod.f90"])
@@ -633,22 +638,22 @@ def test_psy_init_multiple_devices_per_node(kernel_outputdir, monkeypatch):
generated_code = str(psy.gen)
expected = '''
- subroutine psy_init()
- use parallel_mod, only: get_rank
- use fortcl, only: add_kernels, ocl_env_init
- character(len=30) kernel_names(1)
- integer, save :: ocl_device_num = 1
- logical, save :: initialised = .false.
-
- if (.not.initialised) then
- initialised = .true.
- ocl_device_num = mod(get_rank() - 1, 2) + 1
- call ocl_env_init(1, ocl_device_num, .false., .false.)
- kernel_names(1) = 'compute_cu_code'
- call add_kernels(1, kernel_names)
- end if
-
- end subroutine psy_init'''
+ subroutine psy_init()
+ use parallel_mod, only : get_rank
+ use fortcl, only : add_kernels, ocl_env_init
+ character(len = 30) :: kernel_names(1)
+ integer, save :: ocl_device_num = 1
+ logical, save :: initialised = .false.
+
+ if (.not.initialised) then
+ initialised = .true.
+ ocl_device_num = mod(get_rank() - 1, 2) + 1
+ call ocl_env_init(1, ocl_device_num, .false., .false.)
+ kernel_names(1) = 'compute_cu_code'
+ call add_kernels(1, kernel_names)
+ end if
+
+ end subroutine psy_init'''
assert expected in generated_code.lower()
assert GOceanOpenCLBuild(kernel_outputdir).code_compiles(psy)
@@ -670,7 +675,7 @@ def test_psy_init_with_options(kernel_outputdir):
otrans.apply(sched, options={"enable_profiling": True,
"out_of_order": True})
generated_code = str(psy.gen)
- assert "CALL ocl_env_init(5, ocl_device_num, .true., .true.)\n" \
+ assert "call ocl_env_init(5, ocl_device_num, .true., .true.)\n" \
in generated_code
assert GOceanOpenCLBuild(kernel_outputdir).code_compiles(psy)
@@ -695,42 +700,43 @@ def test_invoke_opencl_kernel_call(kernel_outputdir, monkeypatch, debug_mode):
# Set up globalsize and localsize values
expected = '''
- globalsize = (/p_fld%grid%nx, p_fld%grid%ny/)
- localsize = (/64, 1/)'''
+ globalsize = (/p_fld%grid%nx, p_fld%grid%ny/)
+ localsize = (/64, 1/)'''
if debug_mode:
# Check that the globalsize first dimension is a multiple of
# the localsize first dimension
expected += '''
- IF (MOD(p_fld%grid%nx, 64) /= 0) THEN
- CALL check_status('Global size is not a multiple of local size \
+ if (MOD(p_fld%grid%nx, 64) /= 0) then
+ call check_status('Global size is not a multiple of local size \
(mandatory in OpenCL < 2.0).', -1)
- END IF'''
+ end if'''
if debug_mode:
# Check that there is no pending error in the queue before launching
# the kernel
expected += '''
- ierr = clFinish(cmd_queues(1))
- CALL check_status('Errors before compute_cu_code launch', ierr)'''
+ ierr = clFinish(cmd_queues(1))
+ call check_status('Errors before compute_cu_code launch', ierr)'''
# Cast dl_esm_inf pointers to cl_mem handlers
expected += '''
- cu_fld_cl_mem = TRANSFER(cu_fld%device_ptr, cu_fld_cl_mem)
- p_fld_cl_mem = TRANSFER(p_fld%device_ptr, p_fld_cl_mem)
- u_fld_cl_mem = TRANSFER(u_fld%device_ptr, u_fld_cl_mem)'''
+ cu_fld_cl_mem = TRANSFER(cu_fld%device_ptr, cu_fld_cl_mem)
+ p_fld_cl_mem = TRANSFER(p_fld%device_ptr, p_fld_cl_mem)
+ u_fld_cl_mem = TRANSFER(u_fld%device_ptr, u_fld_cl_mem)'''
# Call the set_args subroutine with the boundaries corrected for the
# OpenCL 0-indexing
expected += '''
- CALL compute_cu_code_set_args(kernel_compute_cu_code, \
+ call compute_cu_code_set_args(kernel_compute_cu_code, \
cu_fld_cl_mem, p_fld_cl_mem, u_fld_cl_mem, \
xstart - 1, xstop - 1, \
-ystart - 1, ystop - 1)'''
+ystart - 1, ystop - 1)
+'''
expected += '''
- ! Launch the kernel
- ierr = clEnqueueNDRangeKernel(cmd_queues(1), kernel_compute_cu_code, \
+ ! Launch the kernel
+ ierr = clEnqueueNDRangeKernel(cmd_queues(1), kernel_compute_cu_code, \
2, C_NULL_PTR, C_LOC(globalsize), C_LOC(localsize), 0, C_NULL_PTR, \
C_NULL_PTR)'''
@@ -738,9 +744,9 @@ def test_invoke_opencl_kernel_call(kernel_outputdir, monkeypatch, debug_mode):
# Check that there are no errors during the kernel launch or during
# the execution of the kernel.
expected += '''
- CALL check_status('compute_cu_code clEnqueueNDRangeKernel', ierr)
- ierr = clFinish(cmd_queues(1))
- CALL check_status('Errors during compute_cu_code', ierr)'''
+ call check_status('compute_cu_code clEnqueueNDRangeKernel', ierr)
+ ierr = clFinish(cmd_queues(1))
+ call check_status('Errors during compute_cu_code', ierr)'''
assert expected in generated_code
assert GOceanOpenCLBuild(kernel_outputdir).code_compiles(psy)
@@ -910,8 +916,8 @@ def test_opencl_options_effects():
assert "ierr = clEnqueueNDRangeKernel(cmd_queues(2), " \
"kernel_compute_cu_code, 2, C_NULL_PTR, C_LOC(globalsize), " \
"C_LOC(localsize), 0, C_NULL_PTR, C_NULL_PTR)" in generated_code
- assert " ierr = clFinish(cmd_queues(1))\n" \
- " ierr = clFinish(cmd_queues(2))\n" in generated_code
+ assert " ierr = clFinish(cmd_queues(1))\n" \
+ " ierr = clFinish(cmd_queues(2))\n" in generated_code
assert "ierr = clFinish(cmd_queues(3))" not in generated_code
# Reparse the example as changes are not possible after a psy.gen
@@ -957,12 +963,12 @@ def test_multiple_command_queues(dist_mem):
generated_code = str(psy.gen)
kernelbarrier = '''
- ierr = clFinish(cmd_queues(2))
- p_fld_cl_mem = TRANSFER(p_fld%device_ptr, p_fld_cl_mem)'''
+ ierr = clFinish(cmd_queues(2))
+ p_fld_cl_mem = TRANSFER(p_fld%device_ptr, p_fld_cl_mem)'''
haloexbarrier = '''
- ierr = clFinish(cmd_queues(2))
- CALL cu_fld%halo_exchange(1)'''
+ ierr = clFinish(cmd_queues(2))
+ call cu_fld%halo_exchange(1)'''
if dist_mem:
# In distributed memory the command_queue synchronisation happens
@@ -990,52 +996,52 @@ def test_set_kern_args(kernel_outputdir):
generated_code = str(psy.gen)
# Check we've only generated one set-args routine with arguments:
# kernel object + kernel arguments + boundary values
- assert generated_code.count("SUBROUTINE compute_cu_code_set_args("
+ assert generated_code.count("subroutine compute_cu_code_set_args("
"kernel_obj, cu_fld, p_fld, u_fld, xstart, "
"xstop, ystart, ystop)") == 1
# Declarations
expected = '''\
- USE clfortran, ONLY: clSetKernelArg
- USE iso_c_binding, ONLY: C_LOC, C_SIZEOF, c_intptr_t
- USE ocl_utils_mod, ONLY: check_status
- INTEGER(KIND=c_intptr_t), TARGET :: kernel_obj
- INTEGER(KIND=c_intptr_t), INTENT(IN), TARGET :: cu_fld
- INTEGER(KIND=c_intptr_t), INTENT(IN), TARGET :: p_fld
- INTEGER(KIND=c_intptr_t), INTENT(IN), TARGET :: u_fld
- INTEGER, INTENT(IN), TARGET :: xstart
- INTEGER, INTENT(IN), TARGET :: xstop
- INTEGER, INTENT(IN), TARGET :: ystart
- INTEGER, INTENT(IN), TARGET :: ystop
- INTEGER ierr'''
+ use clfortran, only : clSetKernelArg
+ use iso_c_binding, only : C_LOC, C_SIZEOF, c_intptr_t
+ use ocl_utils_mod, only : check_status
+ INTEGER(KIND=c_intptr_t), TARGET :: kernel_obj
+ INTEGER(KIND=c_intptr_t), INTENT(IN), TARGET :: cu_fld
+ INTEGER(KIND=c_intptr_t), INTENT(IN), TARGET :: p_fld
+ INTEGER(KIND=c_intptr_t), INTENT(IN), TARGET :: u_fld
+ INTEGER, INTENT(IN), TARGET :: xstart
+ INTEGER, INTENT(IN), TARGET :: xstop
+ INTEGER, INTENT(IN), TARGET :: ystart
+ INTEGER, INTENT(IN), TARGET :: ystop
+ integer :: ierr'''
assert expected in generated_code
expected = '''\
- ierr = clSetKernelArg(kernel_obj, 0, C_SIZEOF(cu_fld), C_LOC(cu_fld))
- CALL check_status('clSetKernelArg: arg 0 of compute_cu_code', ierr)
- ierr = clSetKernelArg(kernel_obj, 1, C_SIZEOF(p_fld), C_LOC(p_fld))
- CALL check_status('clSetKernelArg: arg 1 of compute_cu_code', ierr)
- ierr = clSetKernelArg(kernel_obj, 2, C_SIZEOF(u_fld), C_LOC(u_fld))
- CALL check_status('clSetKernelArg: arg 2 of compute_cu_code', ierr)
- ierr = clSetKernelArg(kernel_obj, 3, C_SIZEOF(xstart), C_LOC(xstart))
- CALL check_status('clSetKernelArg: arg 3 of compute_cu_code', ierr)
- ierr = clSetKernelArg(kernel_obj, 4, C_SIZEOF(xstop), C_LOC(xstop))
- CALL check_status('clSetKernelArg: arg 4 of compute_cu_code', ierr)
- ierr = clSetKernelArg(kernel_obj, 5, C_SIZEOF(ystart), C_LOC(ystart))
- CALL check_status('clSetKernelArg: arg 5 of compute_cu_code', ierr)
- ierr = clSetKernelArg(kernel_obj, 6, C_SIZEOF(ystop), C_LOC(ystop))
- CALL check_status('clSetKernelArg: arg 6 of compute_cu_code', ierr)
-
- END SUBROUTINE compute_cu_code_set_args'''
+ ierr = clSetKernelArg(kernel_obj, 0, C_SIZEOF(cu_fld), C_LOC(cu_fld))
+ call check_status('clSetKernelArg: arg 0 of compute_cu_code', ierr)
+ ierr = clSetKernelArg(kernel_obj, 1, C_SIZEOF(p_fld), C_LOC(p_fld))
+ call check_status('clSetKernelArg: arg 1 of compute_cu_code', ierr)
+ ierr = clSetKernelArg(kernel_obj, 2, C_SIZEOF(u_fld), C_LOC(u_fld))
+ call check_status('clSetKernelArg: arg 2 of compute_cu_code', ierr)
+ ierr = clSetKernelArg(kernel_obj, 3, C_SIZEOF(xstart), C_LOC(xstart))
+ call check_status('clSetKernelArg: arg 3 of compute_cu_code', ierr)
+ ierr = clSetKernelArg(kernel_obj, 4, C_SIZEOF(xstop), C_LOC(xstop))
+ call check_status('clSetKernelArg: arg 4 of compute_cu_code', ierr)
+ ierr = clSetKernelArg(kernel_obj, 5, C_SIZEOF(ystart), C_LOC(ystart))
+ call check_status('clSetKernelArg: arg 5 of compute_cu_code', ierr)
+ ierr = clSetKernelArg(kernel_obj, 6, C_SIZEOF(ystop), C_LOC(ystop))
+ call check_status('clSetKernelArg: arg 6 of compute_cu_code', ierr)
+
+ end subroutine compute_cu_code_set_args'''
assert expected in generated_code
# The call to the set_args matches the expected kernel signature with
# the boundary values converted to 0-indexing
- assert ("CALL compute_cu_code_set_args(kernel_compute_cu_code, "
+ assert ("call compute_cu_code_set_args(kernel_compute_cu_code, "
"cu_fld_cl_mem, p_fld_cl_mem, u_fld_cl_mem, "
"xstart - 1, xstop - 1, "
"ystart - 1, ystop - 1)" in generated_code)
# There is also only one version of the set_args for the second kernel
- assert generated_code.count("SUBROUTINE time_smooth_code_set_args("
+ assert generated_code.count("subroutine time_smooth_code_set_args("
"kernel_obj, u_fld, unew_fld, uold_fld, "
"xstart_1, xstop_1, ystart_1, ystop_1)") == 1
assert GOceanOpenCLBuild(kernel_outputdir).code_compiles(psy)
@@ -1057,29 +1063,28 @@ def test_set_kern_args_real_grid_property():
otrans.apply(sched)
generated_code = str(psy.gen)
expected = '''\
- SUBROUTINE compute_kernel_code_set_args(kernel_obj, out_fld, in_out_fld, \
+ subroutine compute_kernel_code_set_args(kernel_obj, out_fld, in_out_fld, \
in_fld, dx, dx_1, gphiu, xstart, xstop, ystart, ystop)
- USE clfortran, ONLY: clSetKernelArg
- USE iso_c_binding, ONLY: C_LOC, C_SIZEOF, c_intptr_t
- USE ocl_utils_mod, ONLY: check_status
- INTEGER(KIND=c_intptr_t), TARGET :: kernel_obj
- INTEGER(KIND=c_intptr_t), INTENT(IN), TARGET :: out_fld
- INTEGER(KIND=c_intptr_t), INTENT(IN), TARGET :: in_out_fld
- INTEGER(KIND=c_intptr_t), INTENT(IN), TARGET :: in_fld
- INTEGER(KIND=c_intptr_t), INTENT(IN), TARGET :: dx
- REAL(KIND=go_wp), INTENT(IN), TARGET :: dx_1
- INTEGER(KIND=c_intptr_t), INTENT(IN), TARGET :: gphiu
- INTEGER, INTENT(IN), TARGET :: xstart
- INTEGER, INTENT(IN), TARGET :: xstop
- INTEGER, INTENT(IN), TARGET :: ystart
- INTEGER, INTENT(IN), TARGET :: ystop'''
+ use clfortran, only : clSetKernelArg
+ use iso_c_binding, only : C_LOC, C_SIZEOF, c_intptr_t
+ use ocl_utils_mod, only : check_status
+ INTEGER(KIND=c_intptr_t), TARGET :: kernel_obj
+ INTEGER(KIND=c_intptr_t), INTENT(IN), TARGET :: out_fld
+ INTEGER(KIND=c_intptr_t), INTENT(IN), TARGET :: in_out_fld
+ INTEGER(KIND=c_intptr_t), INTENT(IN), TARGET :: in_fld
+ INTEGER(KIND=c_intptr_t), INTENT(IN), TARGET :: dx
+ REAL(KIND=go_wp), INTENT(IN), TARGET :: dx_1
+ INTEGER(KIND=c_intptr_t), INTENT(IN), TARGET :: gphiu
+ INTEGER, INTENT(IN), TARGET :: xstart
+ INTEGER, INTENT(IN), TARGET :: xstop
+ INTEGER, INTENT(IN), TARGET :: ystart
+ INTEGER, INTENT(IN), TARGET :: ystop'''
assert expected in generated_code
# TODO 284: Currently this example cannot be compiled because it needs to
# import a module which won't be found on kernel_outputdir
-@pytest.mark.usefixtures("kernel_outputdir")
-def test_set_kern_float_arg():
+def test_set_kern_float_arg(kernel_outputdir):
''' Check that we generate correct code to set a real, scalar kernel
argument. '''
psy, _ = get_invoke("single_invoke_scalar_float_arg.f90", API, idx=0)
@@ -1096,50 +1101,45 @@ def test_set_kern_float_arg():
# This set_args has a name clash on xstop (one is a grid property and the
    # other a loop boundary). One of them should appear as 'xstop_1'
expected = '''\
- SUBROUTINE bc_ssh_code_set_args(kernel_obj, a_scalar, ssh_fld, xstop, \
+ subroutine bc_ssh_code_set_args(kernel_obj, a_scalar, ssh_fld, xstop, \
tmask, xstart, xstop_1, ystart, ystop)
- USE clfortran, ONLY: clSetKernelArg
- USE iso_c_binding, ONLY: C_LOC, C_SIZEOF, c_intptr_t
- USE ocl_utils_mod, ONLY: check_status
- INTEGER(KIND=c_intptr_t), TARGET :: kernel_obj
- REAL(KIND=go_wp), INTENT(IN), TARGET :: a_scalar
- INTEGER(KIND=c_intptr_t), INTENT(IN), TARGET :: ssh_fld
- INTEGER, INTENT(IN), TARGET :: xstop
- INTEGER(KIND=c_intptr_t), INTENT(IN), TARGET :: tmask
- INTEGER, INTENT(IN), TARGET :: xstart
- INTEGER, INTENT(IN), TARGET :: xstop_1
- INTEGER, INTENT(IN), TARGET :: ystart
- INTEGER, INTENT(IN), TARGET :: ystop
- INTEGER ierr
+ use clfortran, only : clSetKernelArg
+ use iso_c_binding, only : C_LOC, C_SIZEOF, c_intptr_t
+ use ocl_utils_mod, only : check_status
+ INTEGER(KIND=c_intptr_t), TARGET :: kernel_obj
+ REAL(KIND=go_wp), INTENT(IN), TARGET :: a_scalar
+ INTEGER(KIND=c_intptr_t), INTENT(IN), TARGET :: ssh_fld
+ INTEGER, INTENT(IN), TARGET :: xstop
+ INTEGER(KIND=c_intptr_t), INTENT(IN), TARGET :: tmask
+ INTEGER, INTENT(IN), TARGET :: xstart
+ INTEGER, INTENT(IN), TARGET :: xstop_1
+ INTEGER, INTENT(IN), TARGET :: ystart
+ INTEGER, INTENT(IN), TARGET :: ystop
+ integer :: ierr
'''
assert expected in generated_code
expected = '''\
- ierr = clSetKernelArg(kernel_obj, 0, C_SIZEOF(a_scalar), C_LOC(a_scalar))
- CALL check_status('clSetKernelArg: arg 0 of bc_ssh_code', ierr)
- ierr = clSetKernelArg(kernel_obj, 1, C_SIZEOF(ssh_fld), C_LOC(ssh_fld))
- CALL check_status('clSetKernelArg: arg 1 of bc_ssh_code', ierr)
- ierr = clSetKernelArg(kernel_obj, 2, C_SIZEOF(xstop), C_LOC(xstop))
- CALL check_status('clSetKernelArg: arg 2 of bc_ssh_code', ierr)
- ierr = clSetKernelArg(kernel_obj, 3, C_SIZEOF(tmask), C_LOC(tmask))
- CALL check_status('clSetKernelArg: arg 3 of bc_ssh_code', ierr)
- ierr = clSetKernelArg(kernel_obj, 4, C_SIZEOF(xstart), C_LOC(xstart))
- CALL check_status('clSetKernelArg: arg 4 of bc_ssh_code', ierr)
- ierr = clSetKernelArg(kernel_obj, 5, C_SIZEOF(xstop_1), C_LOC(xstop_1))
- CALL check_status('clSetKernelArg: arg 5 of bc_ssh_code', ierr)
- ierr = clSetKernelArg(kernel_obj, 6, C_SIZEOF(ystart), C_LOC(ystart))
- CALL check_status('clSetKernelArg: arg 6 of bc_ssh_code', ierr)
- ierr = clSetKernelArg(kernel_obj, 7, C_SIZEOF(ystop), C_LOC(ystop))
- CALL check_status('clSetKernelArg: arg 7 of bc_ssh_code', ierr)
-
- END SUBROUTINE bc_ssh_code_set_args'''
+ ierr = clSetKernelArg(kernel_obj, 0, C_SIZEOF(a_scalar), C_LOC(a_scalar))
+ call check_status('clSetKernelArg: arg 0 of bc_ssh_code', ierr)
+ ierr = clSetKernelArg(kernel_obj, 1, C_SIZEOF(ssh_fld), C_LOC(ssh_fld))
+ call check_status('clSetKernelArg: arg 1 of bc_ssh_code', ierr)
+ ierr = clSetKernelArg(kernel_obj, 2, C_SIZEOF(xstop), C_LOC(xstop))
+ call check_status('clSetKernelArg: arg 2 of bc_ssh_code', ierr)
+ ierr = clSetKernelArg(kernel_obj, 3, C_SIZEOF(tmask), C_LOC(tmask))
+ call check_status('clSetKernelArg: arg 3 of bc_ssh_code', ierr)
+ ierr = clSetKernelArg(kernel_obj, 4, C_SIZEOF(xstart), C_LOC(xstart))
+ call check_status('clSetKernelArg: arg 4 of bc_ssh_code', ierr)
+ ierr = clSetKernelArg(kernel_obj, 5, C_SIZEOF(xstop_1), C_LOC(xstop_1))
+ call check_status('clSetKernelArg: arg 5 of bc_ssh_code', ierr)
+ ierr = clSetKernelArg(kernel_obj, 6, C_SIZEOF(ystart), C_LOC(ystart))
+ call check_status('clSetKernelArg: arg 6 of bc_ssh_code', ierr)
+ ierr = clSetKernelArg(kernel_obj, 7, C_SIZEOF(ystop), C_LOC(ystop))
+ call check_status('clSetKernelArg: arg 7 of bc_ssh_code', ierr)
+
+ end subroutine bc_ssh_code_set_args'''
assert expected in generated_code
- # The generated code of this test cannot be compiled due the duplication
- # of the xstop symbol in the argument list. This happens because the first
- # instance of the symbol is not declared in the symbol table. Issue #798
- # should fix this problem. This is not essential for the purpose of this
- # test that just checks that a_scalar argument is generated appropriately
- # assert GOceanOpenCLBuild(kernel_outputdir).code_compiles(psy)
+ assert GOceanOpenCLBuild(kernel_outputdir).code_compiles(psy)
def test_set_arg_const_scalar():
diff --git a/src/psyclone/tests/domain/lfric/arg_ordering_test.py b/src/psyclone/tests/domain/lfric/arg_ordering_test.py
index 63a6c8946b..5739b88ae4 100644
--- a/src/psyclone/tests/domain/lfric/arg_ordering_test.py
+++ b/src/psyclone/tests/domain/lfric/arg_ordering_test.py
@@ -44,14 +44,13 @@
from psyclone.core import AccessType, VariablesAccessInfo, Signature
from psyclone.domain.lfric import (KernCallArgList, KernStubArgList,
LFRicConstants, LFRicKern,
- LFRicKernMetadata, LFRicLoop,
- LFRicSymbolTable)
+ LFRicKernMetadata, LFRicLoop)
from psyclone.domain.lfric.arg_ordering import ArgOrdering
from psyclone.errors import GenerationError, InternalError
from psyclone.parse.algorithm import parse
from psyclone.psyGen import PSyFactory
from psyclone.psyir.nodes import ArrayReference, Literal, Reference
-from psyclone.psyir.symbols import INTEGER_TYPE, ScalarType
+from psyclone.psyir.symbols import INTEGER_TYPE
from psyclone.tests.lfric_build import LFRicBuild
from psyclone.tests.utilities import get_ast, get_base_path, get_invoke
@@ -122,43 +121,27 @@ def test_argordering_get_array_reference():
# First test access using an index, e.g. `array(1)`
one = Literal("1", INTEGER_TYPE)
- ref = arg_list.get_array_reference("array1", [one],
- ScalarType.Intrinsic.REAL)
+ ref = arg_list.get_array_reference("array1", [one])
assert isinstance(ref, ArrayReference)
- ref = arg_list.get_array_reference("array2", [":"],
- ScalarType.Intrinsic.INTEGER)
+ ref = arg_list.get_array_reference("array2", [":"])
assert not isinstance(ref, ArrayReference)
# Now test access using ":" only, e.g. `array(:)` -> this should
    # return just a reference to `array`
- ref = arg_list.get_array_reference("array3", [":", ":"],
- ScalarType.Intrinsic.REAL)
+ ref = arg_list.get_array_reference("array3", [":", ":"])
assert isinstance(ref, Reference)
assert not isinstance(ref, ArrayReference)
- ref = arg_list.get_array_reference("array4", [":", ":"],
- ScalarType.Intrinsic.INTEGER)
+ ref = arg_list.get_array_reference("array4", [":", ":"])
assert isinstance(ref, Reference)
assert not isinstance(ref, ArrayReference)
# Now specify a symbol, but an incorrect array name:
with pytest.raises(InternalError) as err:
arg_list.get_array_reference("wrong-name", [":", ":"],
- ScalarType.Intrinsic.INTEGER,
symbol=ref.symbol)
assert ("Specified symbol 'array4' has a different name than the "
"specified array name 'wrong-name'" in str(err.value))
- with pytest.raises(TypeError) as err:
- arg_list.get_array_reference("does-not-exist", [":"], "invalid")
- assert ("Unsupported data type 'invalid' in find_or_create_array"
- in str(err.value))
-
- with pytest.raises(TypeError) as err:
- arg_list.get_array_reference("array4", [":"],
- ScalarType.Intrinsic.INTEGER)
- assert ("Array 'array4' already exists, but has 2 dimensions, not 1."
- in str(err.value))
-
def test_argordering_extend():
'''
@@ -274,6 +257,7 @@ def test_arg_ordering_generate_cma_kernel(dist_mem, fortran_writer):
psy = PSyFactory(TEST_API,
distributed_memory=dist_mem).create(invoke_info)
schedule = psy.invokes.invoke_list[0].schedule
+ psy.invokes.invoke_list[0].setup_psy_layer_symbols()
kernel = schedule.kernels()[0]
create_arg_list = KernCallArgList(kernel)
@@ -290,12 +274,10 @@ def test_arg_ordering_generate_cma_kernel(dist_mem, fortran_writer):
check_psyir_results(create_arg_list, fortran_writer)
psyir_arglist = create_arg_list.psyir_arglist
- sym_tab = LFRicSymbolTable()
- arr_2d = sym_tab.find_or_create_array("doesnt_matter", 2,
- ScalarType.Intrinsic.INTEGER)
    # Check that the datatypes of the cbanded_map parameters are indeed 2d int arrays
for i in [14, 16]:
- assert psyir_arglist[i].datatype == arr_2d.datatype
+ assert "integer" in psyir_arglist[i].datatype.declaration
+ assert "(:,:)" in psyir_arglist[i].datatype.declaration
def test_arg_ordering_mdata_index():
diff --git a/src/psyclone/tests/domain/lfric/dofkern_test.py b/src/psyclone/tests/domain/lfric/dofkern_test.py
index 4e3066c2f4..6d3ef77a8f 100644
--- a/src/psyclone/tests/domain/lfric/dofkern_test.py
+++ b/src/psyclone/tests/domain/lfric/dofkern_test.py
@@ -151,18 +151,18 @@ def test_upper_bounds(monkeypatch, annexed, dist_mem, tmpdir):
# Distributed memory
if annexed and dist_mem:
- expected = (" loop0_start = 1\n"
- " loop0_stop = f1_proxy%vspace%get_last_dof_annexed()"
+ expected = (" loop0_start = 1\n"
+ " loop0_stop = f1_proxy%vspace%get_last_dof_annexed()"
)
elif not annexed and dist_mem:
- expected = (" loop0_start = 1\n"
- " loop0_stop = f1_proxy%vspace%get_last_dof_owned()"
+ expected = (" loop0_start = 1\n"
+ " loop0_stop = f1_proxy%vspace%get_last_dof_owned()"
)
# Shared memory
elif not dist_mem:
- expected = (" loop0_start = 1\n"
- " loop0_stop = undf_w1"
+ expected = (" loop0_start = 1\n"
+ " loop0_stop = undf_w1"
)
assert expected in code
@@ -183,7 +183,7 @@ def test_indexed_field_args(tmpdir):
psy = PSyFactory(TEST_API, distributed_memory=False).create(invoke_info)
code = str(psy.gen)
- expected = ("CALL testkern_dofs_code(f1_data(df), f2_data(df), "
+ expected = ("call testkern_dofs_code(f1_data(df), f2_data(df), "
"f3_data(df), f4_data(df), field_vec_1_data(df), "
"field_vec_2_data(df), field_vec_3_data(df), scalar_arg)")
@@ -257,44 +257,43 @@ def test_multi_invoke_cell_dof_builtin(tmpdir, monkeypatch, annexed, dist_mem):
# generated
# Use statements
- output = (
- " USE testkern_mod, ONLY: testkern_code\n"
- " USE testkern_dofs_mod, ONLY: testkern_dofs_code\n"
- )
+ assert " use testkern_mod, only : testkern_code\n" in code
+ assert " use testkern_dofs_mod, only : testkern_dofs_code\n" in code
if dist_mem:
# Check mesh_mod is added to use statements
- output += (" USE mesh_mod, ONLY: mesh_type\n")
- assert output in code
+ assert " use mesh_mod, only : mesh_type\n" in code
# Consistent declarations
- output = (
- " REAL(KIND=r_def), intent(in) :: scalar_arg, a\n"
- " TYPE(field_type), intent(in) :: f1, f2, f3, f4, "
- "field_vec(3), m1, m2\n"
- " INTEGER(KIND=i_def) cell\n"
- " INTEGER(KIND=i_def) df\n"
- " INTEGER(KIND=i_def) loop2_start, loop2_stop\n"
- " INTEGER(KIND=i_def) loop1_start, loop1_stop\n"
- " INTEGER(KIND=i_def) loop0_start, loop0_stop\n"
- " INTEGER(KIND=i_def) nlayers_f1\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m1_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: field_vec_1_data =>"
- " null(), field_vec_2_data => null(), field_vec_3_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f4_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f3_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f1_data => null()\n"
- )
- assert output in code
+ assert """
+ type(field_type), intent(in) :: f1
+ type(field_type), intent(in) :: f2
+ type(field_type), intent(in) :: f3
+ type(field_type), intent(in) :: f4
+ type(field_type), dimension(3), intent(in) :: field_vec
+ real(kind=r_def), intent(in) :: scalar_arg
+ real(kind=r_def), intent(in) :: a
+ type(field_type), intent(in) :: m1
+ type(field_type), intent(in) :: m2
+ """ in code
+ assert """
+ real(kind=r_def), pointer, dimension(:) :: f1_data => null()
+ real(kind=r_def), pointer, dimension(:) :: f2_data => null()
+ real(kind=r_def), pointer, dimension(:) :: f3_data => null()
+ real(kind=r_def), pointer, dimension(:) :: f4_data => null()
+ real(kind=r_def), pointer, dimension(:) :: field_vec_1_data => null()
+ real(kind=r_def), pointer, dimension(:) :: field_vec_2_data => null()
+ real(kind=r_def), pointer, dimension(:) :: field_vec_3_data => null()
+ real(kind=r_def), pointer, dimension(:) :: m1_data => null()
+ real(kind=r_def), pointer, dimension(:) :: m2_data => null()
+ """ in code
# Check that dof kernel is called correctly
output = (
- " DO df = loop0_start, loop0_stop, 1\n"
- " CALL testkern_dofs_code(f1_data(df), f2_data(df), "
+ " do df = loop0_start, loop0_stop, 1\n"
+ " call testkern_dofs_code(f1_data(df), f2_data(df), "
"f3_data(df), f4_data(df), field_vec_1_data(df), "
"field_vec_2_data(df), field_vec_3_data(df), scalar_arg)\n"
- " END DO\n"
+ " enddo\n"
)
assert output in code
@@ -304,69 +303,65 @@ def test_multi_invoke_cell_dof_builtin(tmpdir, monkeypatch, annexed, dist_mem):
if not annexed:
# Check f1 field has halo exchange performed when annexed == False
output = (
- " DO df = loop0_start, loop0_stop, 1\n"
- " CALL testkern_dofs_code(f1_data(df), f2_data(df), "
+ " do df = loop0_start, loop0_stop, 1\n"
+ " call testkern_dofs_code(f1_data(df), f2_data(df), "
"f3_data(df), f4_data(df), field_vec_1_data(df), "
"field_vec_2_data(df), field_vec_3_data(df), scalar_arg)\n"
- " END DO\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
- "above loop\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " !\n"
- " CALL f1_proxy%halo_exchange(depth=1)\n"
+ " enddo\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
+ "above loop(s)\n"
+ " call f1_proxy%set_dirty()\n"
+ " call f1_proxy%halo_exchange(depth=1)\n"
)
else:
# Check f1 field is set dirty but no halo exchange is performed
output = (
- " DO df = loop0_start, loop0_stop, 1\n"
- " CALL testkern_dofs_code(f1_data(df), f2_data(df), "
+ " do df = loop0_start, loop0_stop, 1\n"
+ " call testkern_dofs_code(f1_data(df), f2_data(df), "
"f3_data(df), f4_data(df), field_vec_1_data(df), "
"field_vec_2_data(df), field_vec_3_data(df), scalar_arg)\n"
- " END DO\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
- "above loop\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " !\n"
+ " enddo\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
+ "above loop(s)\n"
+ " call f1_proxy%set_dirty()\n"
)
# This should be present in all distributed memory cases:
# Check halos are set dirty/clean for modified fields in dof
# kernel (above) and happen before the next kernel (cell_column)
common_halo_exchange_code = (
- " IF (f2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " DO cell = loop1_start, loop1_stop, 1\n"
- " CALL testkern_code"
+ " if (f2_proxy%is_dirty(depth=1)) then\n"
+ " call f2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m1_proxy%is_dirty(depth=1)) then\n"
+ " call m1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m2_proxy%is_dirty(depth=1)) then\n"
+ " call m2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " do cell = loop1_start, loop1_stop, 1\n"
+ " call testkern_code"
)
output += common_halo_exchange_code # Append common
assert output in code
# Check cell-column kern is called correctly
output = (
- " DO cell = loop1_start, loop1_stop, 1\n"
- " CALL testkern_code(nlayers_f1, a, f1_data, f2_data, m1_data, "
+ " do cell = loop1_start, loop1_stop, 1\n"
+ " call testkern_code(nlayers_f1, a, f1_data, f2_data, m1_data, "
"m2_data, ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, "
"map_w2(:,cell), ndf_w3, undf_w3, map_w3(:,cell))\n"
- " END DO\n"
+ " enddo\n"
)
assert output in code
# Check built-in is called correctly
output = (
- " DO df = loop2_start, loop2_stop, 1\n"
- " ! Built-in: inc_aX_plus_Y (real-valued fields)\n"
- " f1_data(df) = 0.5_r_def * f1_data(df) + f2_data(df)\n"
- " END DO\n"
+ " do df = loop2_start, loop2_stop, 1\n"
+ " ! Built-in: inc_aX_plus_Y (real-valued fields)\n"
+ " f1_data(df) = 0.5_r_def * f1_data(df) + f2_data(df)\n"
+ " enddo\n"
)
assert output in code
diff --git a/src/psyclone/tests/domain/lfric/dyn_meshes_test.py b/src/psyclone/tests/domain/lfric/dyn_meshes_test.py
index a65c0f3f5d..5302492cca 100644
--- a/src/psyclone/tests/domain/lfric/dyn_meshes_test.py
+++ b/src/psyclone/tests/domain/lfric/dyn_meshes_test.py
@@ -35,13 +35,12 @@
''' This module uses pytest to test the DynMeshes class. '''
-from __future__ import absolute_import, print_function
import os
from psyclone.dynamo0p3 import DynMeshes
from psyclone.parse.algorithm import parse
from psyclone.psyGen import PSyFactory
-from psyclone.psyir.symbols import DataSymbol, DataTypeSymbol
+from psyclone.psyir.symbols import DataSymbol, UnsupportedFortranType
BASE_PATH = os.path.join(
@@ -84,5 +83,5 @@ def test_add_mesh_symbols():
for tag in mesh_names:
sym = sym_table.lookup(tag)
assert isinstance(sym, DataSymbol)
- assert isinstance(sym.datatype, DataTypeSymbol)
- assert sym.datatype.name == "mesh_type"
+ assert isinstance(sym.datatype, UnsupportedFortranType)
+ assert "mesh_type" in sym.datatype.type_text
diff --git a/src/psyclone/tests/domain/lfric/dyn_proxies_test.py b/src/psyclone/tests/domain/lfric/dyn_proxies_test.py
index 947e72237c..a109289c04 100644
--- a/src/psyclone/tests/domain/lfric/dyn_proxies_test.py
+++ b/src/psyclone/tests/domain/lfric/dyn_proxies_test.py
@@ -41,7 +41,6 @@
from psyclone.domain.lfric import LFRicConstants, LFRicKern
from psyclone.dynamo0p3 import DynProxies
from psyclone.errors import InternalError
-from psyclone.f2pygen import ModuleGen, SubroutineGen
from psyclone.parse.algorithm import parse
from psyclone.psyGen import PSyFactory
from psyclone.psyir import symbols
@@ -62,16 +61,16 @@ def test_creation():
psy = PSyFactory(TEST_API, distributed_memory=True).create(info)
invoke = psy.invokes.invoke_list[0]
proxies = DynProxies(invoke)
- tags = proxies._symbol_table.get_tags()
+ tags = proxies.symtab.get_tags()
assert "f1:data" in tags
- sym = proxies._symbol_table.lookup_with_tag("f1:data")
+ sym = proxies.symtab.lookup_with_tag("f1:data")
assert isinstance(sym, symbols.DataSymbol)
assert "f2:data" in tags
-def test_invoke_declarations():
+def test_invoke_declarations(fortran_writer):
'''
- Test the _invoke_declarations() method, primarily by checking the
+ Test the invoke_declarations() method, primarily by checking the
generated declarations in output code.
'''
@@ -81,19 +80,18 @@ def test_invoke_declarations():
psy = PSyFactory(TEST_API, distributed_memory=True).create(info)
invoke = psy.invokes.invoke_list[0]
proxies = DynProxies(invoke)
- amod = ModuleGen("test_mod")
- node = SubroutineGen(amod, name="a_sub")
- amod.add(node)
- proxies._invoke_declarations(node)
- code = str(amod.root).lower()
- assert ("real(kind=r_def), pointer, dimension(:) :: f1_1_data => null(), "
- "f1_2_data => null(), f1_3_data => null()" in code)
- assert "type(field_proxy_type) f1_proxy(3)" in code
- assert ("r_def" in
- invoke.invokes.psy.infrastructure_modules["constants_mod"])
+ proxies.invoke_declarations()
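+    # The declarations are added to the invoke schedule, so writing it out
+    # with the FortranWriter shows them.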
+ code = fortran_writer(invoke.schedule)
+ assert ("real(kind=r_def), pointer, dimension(:) :: f1_1_data => null()"
+ in code)
+ assert ("real(kind=r_def), pointer, dimension(:) :: f1_2_data => null()"
+ in code)
+ assert ("real(kind=r_def), pointer, dimension(:) :: f1_3_data => null()"
+ in code)
+ assert "type(field_proxy_type), dimension(3) :: f1_proxy" in code
-def test_initialise():
+def test_initialise(fortran_writer):
'''
Test the initialise() method.
@@ -104,15 +102,10 @@ def test_initialise():
psy = PSyFactory(TEST_API, distributed_memory=True).create(info)
invoke = psy.invokes.invoke_list[0]
proxies = DynProxies(invoke)
- amod = ModuleGen("test_mod")
- node = SubroutineGen(amod, name="a_sub")
- amod.add(node)
- proxies._invoke_declarations(node)
- proxies.initialise(node)
- code = str(amod.root).lower()
- assert "initialise field and/or operator proxies" in code
- assert ("r_def" in
- invoke.invokes.psy.infrastructure_modules["constants_mod"])
+ proxies.invoke_declarations()
+ proxies.initialise(0)
+ code = fortran_writer(invoke.schedule)
+ assert "! Initialise field and/or operator proxies" in code
assert "my_mapping_proxy = my_mapping%get_proxy()" in code
assert "my_mapping_local_stencil => my_mapping_proxy%local_stencil" in code
@@ -130,10 +123,7 @@ def test_initialise_errors(monkeypatch):
invoke = psy.invokes.invoke_list[0]
kern = invoke.schedule.walk(LFRicKern)[0]
proxies = DynProxies(invoke)
- amod = ModuleGen("test_mod")
- node = SubroutineGen(amod, name="a_sub")
- amod.add(node)
- proxies._invoke_declarations(node)
+ proxies.invoke_declarations()
# Monkeypatch the first kernel argument so that it is of an unrecognised
# type.
monkeypatch.setattr(kern.args[0], "_argument_type", "gh_wrong")
@@ -141,7 +131,7 @@ def test_initialise_errors(monkeypatch):
monkeypatch.setattr(LFRicConstants, "ARG_TYPE_SUFFIX_MAPPING",
{"gh_wrong": "data"})
with pytest.raises(InternalError) as err:
- proxies.initialise(node)
+ proxies.initialise(0)
assert ("Kernel argument 'my_mapping' of type 'gh_wrong' not handled in "
"DynProxies.initialise()" in str(err.value))
@@ -149,7 +139,7 @@ def test_initialise_errors(monkeypatch):
# argument is recognised as an operator.
monkeypatch.setattr(LFRicConstants, "VALID_OPERATOR_NAMES", ["gh_wrong"])
with pytest.raises(InternalError) as err:
- proxies.initialise(node)
+ proxies.initialise(0)
assert ("Kernel argument 'my_mapping' is a recognised operator but its "
"type ('gh_wrong') is not supported by DynProxies.initialise()"
in str(err.value))
diff --git a/src/psyclone/tests/domain/lfric/kern_call_acc_arg_list_test.py b/src/psyclone/tests/domain/lfric/kern_call_acc_arg_list_test.py
index 85bf837235..2c948888d8 100644
--- a/src/psyclone/tests/domain/lfric/kern_call_acc_arg_list_test.py
+++ b/src/psyclone/tests/domain/lfric/kern_call_acc_arg_list_test.py
@@ -180,6 +180,7 @@ def test_lfric_acc_operator():
# Find the first kernel:
kern = invoke.schedule.walk(psyGen.CodedKern)[0]
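+    # Set up the PSy-layer symbols before creating the ACC argument list.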
+ invoke.setup_psy_layer_symbols()
create_acc_arg_list = KernCallAccArgList(kern)
var_accesses = VariablesAccessInfo()
create_acc_arg_list.generate(var_accesses=var_accesses)
diff --git a/src/psyclone/tests/domain/lfric/kern_call_arg_list_test.py b/src/psyclone/tests/domain/lfric/kern_call_arg_list_test.py
index 06edb906da..4f5be1a93c 100644
--- a/src/psyclone/tests/domain/lfric/kern_call_arg_list_test.py
+++ b/src/psyclone/tests/domain/lfric/kern_call_arg_list_test.py
@@ -561,6 +561,7 @@ def test_indirect_dofmap(fortran_writer):
dist_mem=False, idx=0)
schedule = psy.invokes.invoke_list[0].schedule
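+    # Set up the PSy-layer symbols before generating the argument list.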
+ psy.invokes.invoke_list[0].setup_psy_layer_symbols()
create_arg_list = KernCallArgList(schedule.kernels()[0])
create_arg_list.generate()
assert (create_arg_list._arglist == [
@@ -584,9 +585,6 @@ def test_indirect_dofmap(fortran_writer):
assert (psyir_args[i].symbol.datatype ==
LFRicTypes("LFRicIntegerScalarDataType")())
- # Create a dummy LFRic symbol table to simplify creating
- # standard LFRic types:
- dummy_sym_tab = LFRicSymbolTable()
# Test all 1D real arrays:
for i in [2, 3]:
# The datatype of a field reference is of UnsupportedFortranType
@@ -606,16 +604,12 @@ def test_indirect_dofmap(fortran_writer):
assert len(psyir_args[4].datatype.partial_datatype.shape) == 3
# Test all 1D integer arrays:
- int_1d = dummy_sym_tab.find_or_create_array("doesnt_matter1dint", 1,
- ScalarType.Intrinsic.INTEGER)
for i in [15, 19]:
- assert psyir_args[i].datatype == int_1d.datatype
+ assert "(:)" in psyir_args[i].datatype.declaration
# Test all 2D integer arrays:
- int_2d = dummy_sym_tab.find_or_create_array("doesnt_matter2dint", 2,
- ScalarType.Intrinsic.INTEGER)
for i in [14, 18]:
- assert psyir_args[i].symbol.datatype == int_2d.datatype
+ assert "(:,:)" in psyir_args[i].symbol.datatype.declaration
def test_ref_element_handling(fortran_writer):
@@ -669,10 +663,10 @@ def test_ref_element_handling(fortran_writer):
# standard LFRic types:
dummy_sym_tab = LFRicSymbolTable()
# Test all 2D integer arrays:
- int_2d = dummy_sym_tab.find_or_create_array("doesnt_matter2dint", 2,
- ScalarType.Intrinsic.INTEGER)
+ i2d = dummy_sym_tab.find_or_create_array("doesnt_matter2dint", 2,
+ ScalarType.Intrinsic.INTEGER)
for i in [4]:
- assert psyir_args[i].symbol.datatype == int_2d.datatype
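+        # Compare against the symbol's partial datatype information.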
+ assert psyir_args[i].symbol.datatype.partial_datatype == i2d.datatype
int_arr_2d = dummy_sym_tab.find_or_create_array("doesnt_matter2dreal", 2,
ScalarType.Intrinsic.REAL)
diff --git a/src/psyclone/tests/domain/lfric/lfric_builtins_test.py b/src/psyclone/tests/domain/lfric/lfric_builtins_test.py
index 9c6bb0f87b..d48f6f98f6 100644
--- a/src/psyclone/tests/domain/lfric/lfric_builtins_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_builtins_test.py
@@ -1953,13 +1953,13 @@ def test_int_to_real_x_precision(tmpdir, kind_name):
code = str(psy.gen)
# Test code generation
- assert f"USE constants_mod, ONLY: {kind_name}, i_def" in code
- assert (f"USE {kind_name}_field_mod, ONLY: {kind_name}_field_type, "
- f"{kind_name}_field_proxy_type") in code
- assert f"TYPE({kind_name}_field_type), intent(in) :: f2" in code
- assert (f"REAL(KIND={kind_name}), pointer, dimension(:) :: "
+ assert "use constants_mod\n" in code
+ assert (f"use {kind_name}_field_mod, only : {kind_name}_field_proxy_type, "
+ f"{kind_name}_field_type") in code
+ assert f"type({kind_name}_field_type), intent(in) :: f2" in code
+ assert (f"real(kind={kind_name}), pointer, dimension(:) :: "
"f2_data => null()") in code
- assert f"TYPE({kind_name}_field_proxy_type) f2_proxy" in code
+ assert f"type({kind_name}_field_proxy_type) :: f2_proxy" in code
assert f"f2_data(df) = REAL(f1_data(df), kind={kind_name})" in code
# Test compilation of generated code
@@ -2010,15 +2010,12 @@ def test_real_to_int_x_precision(monkeypatch, tmpdir, kind_name):
arg = first_invoke.schedule.children[0].loop_body[0].args[0]
# Set 'f2_data' to another 'i_'
sym_kern = table.lookup_with_tag(f"{arg.name}:data")
- monkeypatch.setattr(arg, "_precision", f"{kind_name}")
monkeypatch.setattr(sym_kern.datatype.partial_datatype.precision,
"_name", f"{kind_name}")
# Test limited code generation (no equivalent field type)
code = str(psy.gen)
- assert f"USE constants_mod, ONLY: r_def, {kind_name}" in code
- assert (f"INTEGER(KIND={kind_name}), pointer, dimension(:) :: "
- "f2_data => null()") in code
+ assert "use constants_mod\n" in code
assert f"f2_data(df) = INT(f1_data(df), kind={kind_name})" in code
# Test compilation of generated code
@@ -2081,25 +2078,16 @@ def test_real_to_real_x_lowering(monkeypatch, tmpdir, kind_name):
arg = first_invoke.schedule.children[0].loop_body[0].args[0]
# Set 'f2_data' to another 'r_'
sym_kern = table.lookup_with_tag(f"{arg.name}:data")
- monkeypatch.setattr(arg, "_precision", f"{kind_name}")
monkeypatch.setattr(sym_kern.datatype.partial_datatype.precision,
"_name", f"{kind_name}")
# Test limited code generation (no equivalent field type)
code = str(psy.gen)
- # Due to the reverse alphabetical ordering performed by PSyclone,
- # different cases will arise depending on the substitution
- if kind_name < 'r_def':
- assert f"USE constants_mod, ONLY: r_solver, r_def, {kind_name}" in code
- elif 'r_solver' > kind_name > 'r_def':
- assert f"USE constants_mod, ONLY: r_solver, {kind_name}, r_def" in code
- else:
- assert f"USE constants_mod, ONLY: {kind_name}, r_solver, r_def" in code
+ # Check that the kind constants are imported
+ assert "use constants_mod\n" in code
# Assert correct type is set
- assert (f"REAL(KIND={kind_name}), pointer, dimension(:) :: "
- "f2_data => null()") in code
assert f"f2_data(df) = REAL(f1_data(df), kind={kind_name})" in code
# Test compilation of generated code
diff --git a/src/psyclone/tests/domain/lfric/lfric_cell_halo_kernels_test.py b/src/psyclone/tests/domain/lfric/lfric_cell_halo_kernels_test.py
index e608b2200f..4c2d7d6b43 100644
--- a/src/psyclone/tests/domain/lfric/lfric_cell_halo_kernels_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_cell_halo_kernels_test.py
@@ -107,39 +107,39 @@ def test_psy_gen_halo_kernel(dist_mem, tmpdir, fortran_writer):
single kernel with operates_on=halo_cell_column. '''
psy, _ = get_invoke("1.4_into_halos_invoke.f90", TEST_API,
dist_mem=dist_mem, idx=0)
- gen_code = str(psy.gen).lower()
+ code = str(psy.gen).lower()
# A halo kernel needs to look up the last halo column in the mesh.
# Therefore we require a mesh object.
if dist_mem:
- assert "integer, intent(in) :: hdepth" in gen_code
+ assert "integer(kind=i_def), intent(in) :: hdepth" in code
- assert "type(mesh_type), pointer :: mesh => null()" in gen_code
- assert "mesh => f1_proxy%vspace%get_mesh()" in gen_code
+ assert "type(mesh_type), pointer :: mesh => null()" in code
+ assert "mesh => f1_proxy%vspace%get_mesh()" in code
# Loop must be over halo cells only
- assert "loop0_start = mesh%get_last_edge_cell()+1" in gen_code
+ assert "loop0_start = mesh%get_last_edge_cell() + 1" in code
assert ("loop0_stop = mesh%get_last_halo_cell(hdepth)"
- in gen_code)
+ in code)
- assert (" do cell = loop0_start, loop0_stop, 1\n"
- " call testkern_halo_only_code(nlayers_f1, a, "
+ assert (" do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_halo_only_code(nlayers_f1, a, "
"f1_data, f2_data, m1_data, m2_data, ndf_w1, undf_w1, "
"map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, "
"undf_w3, map_w3(:,cell))"
- in gen_code)
+ in code)
# Check for appropriate set-dirty/clean calls. Outermost halo remains
# dirty because field being updated is on continuous function space.
- assert (" call f1_proxy%set_dirty()\n"
- " call f1_proxy%set_clean(hdepth - 1)" in gen_code)
+ assert (" call f1_proxy%set_dirty()\n"
+ " call f1_proxy%set_clean(hdepth - 1)" in code)
else:
# No distributed memory so no halo region => no halo depths passed
# from Alg layer.
assert (" subroutine invoke_0_testkern_halo_only_type"
- "(a, f1, f2, m1, m2)" in gen_code)
- assert "integer, intent(in) :: hdepth" not in gen_code
+ "(a, f1, f2, m1, m2)" in code)
+ assert "integer, intent(in) :: hdepth" not in code
# Kernel is not called.
- assert "call testkern_halo_only_code( " not in gen_code
+ assert "call testkern_halo_only_code( " not in code
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -170,33 +170,30 @@ def test_psy_gen_domain_two_kernel(dist_mem, tmpdir):
'''
psy, _ = get_invoke("1.4.1_into_halos_plus_domain_invoke.f90",
TEST_API, dist_mem=dist_mem, idx=0)
- gen_code = str(psy.gen).lower()
+ code = str(psy.gen).lower()
if dist_mem:
- assert "mesh => f1_proxy%vspace%get_mesh()" in gen_code
+ assert "mesh => f1_proxy%vspace%get_mesh()" in code
- assert "integer(kind=i_def) ncell_2d_no_halos" in gen_code
+ assert "integer(kind=i_def) :: ncell_2d_no_halos" in code
expected = ""
if dist_mem:
expected += (
- " end do\n"
- " !\n"
- " ! set halos dirty/clean for fields modified in the above "
- "loop\n"
- " !\n"
- " call f1_proxy%set_dirty()\n"
- " call f1_proxy%set_clean(hdepth - 1)\n"
- " !\n")
+ " enddo\n"
+ "\n"
+ " ! set halos dirty/clean for fields modified in the above "
+ "loop(s)\n"
+ " call f1_proxy%set_dirty()\n"
+ " call f1_proxy%set_clean(hdepth - 1)\n")
expected += (
- " call testkern_domain_code(nlayers_f1, ncell_2d_no_halos, a, "
+ " call testkern_domain_code(nlayers_f1, ncell_2d_no_halos, a, "
"f1_data, ndf_w3, undf_w3, map_w3)\n")
- assert expected in gen_code
+ assert expected in code
if dist_mem:
- assert (" ! set halos dirty/clean for fields modified in the "
- "above kernel\n"
- " !\n"
- " call f1_proxy%set_dirty()\n" in gen_code)
+ assert (" ! set halos dirty/clean for fields modified in the "
+ "above loop(s)\n"
+ " call f1_proxy%set_dirty()\n" in code)
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -210,33 +207,31 @@ def test_psy_gen_halo_kernel_discontinuous_space(dist_mem, tmpdir):
'''
psy, _ = get_invoke("1.4.2_multi_into_halos_invoke.f90",
TEST_API, dist_mem=dist_mem, idx=0)
- gen_code = str(psy.gen).lower()
+ code = str(psy.gen).lower()
if dist_mem:
- assert "integer, intent(in) :: hdepth, other_depth" in gen_code
+ assert "integer(kind=i_def), intent(in) :: hdepth" in code
+ assert "integer(kind=i_def), intent(in) :: other_depth" in code
# The halo-only kernel updates a field on a continuous function space
# and thus leaves the outermost halo cell dirty.
assert '''call testkern_halo_only_code(nlayers_f1, a, f1_data,\
f2_data, m1_data, m2_data, ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, \
map_w2(:,cell), ndf_w3, undf_w3, map_w3(:,cell))
- end do
- !
- ! set halos dirty/clean for fields modified in the above loop
- !
- call f1_proxy%set_dirty()
- call f1_proxy%set_clean(hdepth - 1)''' in gen_code
+ enddo
+
+ ! set halos dirty/clean for fields modified in the above loop(s)
+ call f1_proxy%set_dirty()
+ call f1_proxy%set_clean(hdepth - 1)''' in code
# testkern_code is a 'normal' kernel and thus leaves all halo cells
# dirty.
assert '''call testkern_code(nlayers_f1, a, f1_data, f2_data, m1_data,\
m2_data, ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), \
ndf_w3, undf_w3, map_w3(:,cell))
- end do
- !
- ! set halos dirty/clean for fields modified in the above loop
- !
- call f1_proxy%set_dirty()
- !''' in gen_code
+ enddo
+
+ ! set halos dirty/clean for fields modified in the above loop(s)
+ call f1_proxy%set_dirty()''' in code
# testkern_halo_and_owned_code operates in the halo for a field on a
# discontinuous function space and therefore the halo is left clean to
@@ -244,22 +239,22 @@ def test_psy_gen_halo_kernel_discontinuous_space(dist_mem, tmpdir):
assert '''call testkern_halo_and_owned_code(nlayers_f1, \
a, f1_data, f2_data, m1_data, m2_data, ndf_w3, undf_w3, map_w3(:,cell), \
ndf_w2, undf_w2, map_w2(:,cell))
- end do
- !
- ! set halos dirty/clean for fields modified in the above loop
- !
- call f1_proxy%set_dirty()
- call f1_proxy%set_clean(other_depth)''' in gen_code
+ enddo
+
+ ! set halos dirty/clean for fields modified in the above loop(s)
+ call f1_proxy%set_dirty()
+ call f1_proxy%set_clean(other_depth)''' in code
else:
# No distributed memory.
# => no halo depths to pass from Algorithm layer.
- assert "integer, intent(in) :: hdepth, other_depth" not in gen_code
+ assert "integer(kind=i_def), intent(in) :: hdepth" not in code
+ assert "integer(kind=i_def), intent(in) :: other_depth" not in code
# => no halos so no need to call a kernel which only operates on
# halo cells.
- assert "call testkern_halo_only_code(" not in gen_code
+ assert "call testkern_halo_only_code(" not in code
# However, a kernel that operates on owned *and* halo cells must still
# be called.
- assert "call testkern_halo_and_owned_code(nlayers_f1, a" in gen_code
+ assert "call testkern_halo_and_owned_code(nlayers_f1, a" in code
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -272,26 +267,23 @@ def test_psy_gen_halo_kernel_literal_depths(dist_mem, tmpdir):
'''
psy, _ = get_invoke("1.4.3_literal_depth_into_halos_invoke.f90",
TEST_API, dist_mem=dist_mem, idx=0)
- gen_code = str(psy.gen).lower()
+ code = str(psy.gen).lower()
if dist_mem:
# Make sure we aren't attempting to specify literal values in the
# argument list to the PSy-layer routine.
- assert "subroutine invoke_0(a, f1, f2, m1, m2, hdepth)" in gen_code
+ assert "subroutine invoke_0(a, f1, f2, m1, m2, hdepth)" in code
# First kernel operates into the halo to a depth of '2' but updates a
# field on a continuous function space so only the level-1 halo is
# left clean.
assert '''call f1_proxy%set_dirty()
- call f1_proxy%set_clean(1)
- !''' in gen_code
+ call f1_proxy%set_clean(1)''' in code
assert '''call f1_proxy%set_dirty()
- call f1_proxy%set_clean(hdepth)
- !''' in gen_code
+ call f1_proxy%set_clean(hdepth)''' in code
assert '''call f1_proxy%set_dirty()
- call f1_proxy%set_clean(5)
- !''' in gen_code
+ call f1_proxy%set_clean(5)''' in code
else:
- assert "call testkern_halo_only_code(" not in gen_code
- assert "call testkern_halo_and_owned_code(nlayers_f1, a" in gen_code
+ assert "call testkern_halo_only_code(" not in code
+ assert "call testkern_halo_and_owned_code(nlayers_f1, a" in code
assert LFRicBuild(tmpdir).code_compiles(psy)
diff --git a/src/psyclone/tests/domain/lfric/lfric_cell_iterators_test.py b/src/psyclone/tests/domain/lfric/lfric_cell_iterators_test.py
index f6136a959e..a27cec9a27 100644
--- a/src/psyclone/tests/domain/lfric/lfric_cell_iterators_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_cell_iterators_test.py
@@ -40,7 +40,6 @@
from psyclone.domain.lfric import LFRicCellIterators
from psyclone.errors import GenerationError
-from psyclone.f2pygen import ModuleGen, SubroutineGen
from psyclone.parse.algorithm import parse
from psyclone.psyGen import PSyFactory
from psyclone.domain.lfric import LFRicKern
@@ -62,15 +61,15 @@ def test_lfriccelliterators_kernel():
sched = invoke.schedule
kern = sched.walk(LFRicKern)[0]
obj = LFRicCellIterators(kern)
- # We should have a single 'nlayers'.
+    # We should have no 'nlayers' entries (it is up to stub_declarations() to
+    # provide them when needed).
assert isinstance(obj._nlayers_names, dict)
- assert len(obj._nlayers_names.keys()) == 1
- assert "nlayers" in obj._nlayers_names
+ assert len(obj._nlayers_names.keys()) == 0
-def test_lfriccelliterators_kernel_stub_declns():
+def test_lfriccelliterators_kernel_stub_declns(fortran_writer):
'''
- Check that LFRicCellIterators._stub_declarations() creates the correct
+ Check that LFRicCellIterators.stub_declarations() creates the correct
declarations for an LFRicKern.
'''
_, info = parse(os.path.join(BASE_PATH, "1_single_invoke.f90"),
@@ -79,38 +78,26 @@ def test_lfriccelliterators_kernel_stub_declns():
invoke = psy.invokes.invoke_list[0]
sched = invoke.schedule
kern = sched.walk(LFRicKern)[0]
- obj = LFRicCellIterators(kern)
- node = SubroutineGen(ModuleGen("test_mod"), "test")
- obj._stub_declarations(node)
- output1 = str(node.root).lower()
- assert "integer(kind=i_def), intent(in) :: nlayers" in output1
- # Calling the 'initialise' method in the case of an LFRicKern should
- # do nothing.
- obj.initialise(node)
- output2 = str(node.root).lower()
- assert output2 == output1
+ output = fortran_writer(kern.gen_stub)
+ assert "integer(kind=i_def), intent(in) :: nlayers" in output
def test_lfriccelliterators_invoke_codegen():
'''
- Check that _invoke_declarations() creates the right declarations and
+ Check that invoke_declarations() creates the right declarations and
initialisations for an invoke containing more than one kernel.
'''
_, info = parse(
os.path.join(BASE_PATH, "15.1.2_builtin_and_normal_kernel_invoke.f90"),
api=TEST_API)
psy = PSyFactory(TEST_API, distributed_memory=True).create(info)
- invoke = psy.invokes.invoke_list[0]
- obj = LFRicCellIterators(invoke)
- node = SubroutineGen(ModuleGen("test_mod"), "test_sub")
- obj._invoke_declarations(node)
+ output = str(psy.gen)
# The invoke has three kernels that operate on cell columns (and two
# builtins but they don't need nlayers).
- assert ("integer(kind=i_def) nlayers_f1, nlayers_f3, nlayers_f4"
- in str(node.root).lower())
- obj.initialise(node)
- output = str(node.root).lower()
- assert "! initialise number of layers" in output
+ assert "integer(kind=i_def) :: nlayers_f1" in output
+ assert "integer(kind=i_def) :: nlayers_f3" in output
+ assert "integer(kind=i_def) :: nlayers_f4" in output
+ assert "! Initialise number of layers" in output
assert "nlayers_f1 = f1_proxy%vspace%get_nlayers()" in output
assert "nlayers_f3 = f3_proxy%vspace%get_nlayers()" in output
assert "nlayers_f4 = f4_proxy%vspace%get_nlayers()" in output
diff --git a/src/psyclone/tests/domain/lfric/lfric_config_test.py b/src/psyclone/tests/domain/lfric/lfric_config_test.py
index 5355b1def6..3c4874e481 100644
--- a/src/psyclone/tests/domain/lfric/lfric_config_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_config_test.py
@@ -39,8 +39,6 @@
Module containing tests for LFRic (Dynamo0.3) API configuration handling.
'''
-from __future__ import absolute_import
-
import re
import pytest
diff --git a/src/psyclone/tests/domain/lfric/lfric_dofmaps_test.py b/src/psyclone/tests/domain/lfric/lfric_dofmaps_test.py
index 1d534ca3e7..c02fe1694f 100644
--- a/src/psyclone/tests/domain/lfric/lfric_dofmaps_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_dofmaps_test.py
@@ -42,9 +42,8 @@
import os
import pytest
-from psyclone.domain.lfric import LFRicDofmaps
+from psyclone.domain.lfric.lfric_dofmaps import LFRicDofmaps
from psyclone.errors import GenerationError, InternalError
-from psyclone.f2pygen import ModuleGen
from psyclone.gen_kernel_stub import generate
from psyclone.parse.algorithm import parse
from psyclone.psyGen import PSyFactory
@@ -58,7 +57,7 @@
# Error tests
def test_lfricdofmap_stubdecln_err():
'''
- Check that LFRicDofmaps._stub_declarations raises the expected errors
+ Check that LFRicDofmaps.stub_declarations raises the expected errors
if the stored CMA information is invalid.
'''
@@ -66,20 +65,30 @@ def test_lfricdofmap_stubdecln_err():
"20.5_multi_cma_invoke.f90"),
api=TEST_API)
psy = PSyFactory(TEST_API, distributed_memory=False).create(invoke_info)
- dofmaps = LFRicDofmaps(psy.invokes.invoke_list[0])
- mod = ModuleGen(name="test_module")
- for cma in dofmaps._unique_indirection_maps.values():
+
+    # Test an invalid direction for a column-banded (cbanded) dofmap
+ kernel_cbanded = psy.invokes.invoke_list[0].schedule.kernels()[0]
+    # An 'nlayers' symbol is needed because it is looked up by LFRicDofmaps
+ kernel_cbanded._stub_symbol_table.find_or_create("nlayers")
+ dofmaps = LFRicDofmaps(kernel_cbanded)
+ for cma in dofmaps._unique_cbanded_maps.values():
cma["direction"] = "not-a-direction"
with pytest.raises(InternalError) as err:
- dofmaps._stub_declarations(mod)
+ dofmaps.stub_declarations()
assert ("Invalid direction ('not-a-direction') found for CMA operator "
- "when collecting indirection dofmaps" in str(err.value))
- for cma in dofmaps._unique_cbanded_maps.values():
+ "when collecting column-banded dofmaps" in str(err.value))
+
+    # Test an invalid direction for an indirection dofmap
+ kernel_direction = psy.invokes.invoke_list[0].schedule.kernels()[1]
+    # An 'nlayers' symbol is needed because it is looked up by LFRicDofmaps
+ kernel_direction._stub_symbol_table.find_or_create("nlayers")
+ dofmaps = LFRicDofmaps(kernel_direction)
+ for cma in dofmaps._unique_indirection_maps.values():
cma["direction"] = "not-a-direction"
with pytest.raises(InternalError) as err:
- dofmaps._stub_declarations(mod)
+ dofmaps.stub_declarations()
assert ("Invalid direction ('not-a-direction') found for CMA operator "
- "when collecting column-banded dofmaps" in str(err.value))
+ "when collecting indirection dofmaps" in str(err.value))
def test_cma_asm_cbanded_dofmap_error():
@@ -152,9 +161,8 @@ def test_cbanded_test_comments():
code = str(psy.gen)
output = (
- " !\n"
- " ! Look-up required column-banded dofmaps\n"
- " !\n"
+ "\n"
+ " ! Look-up required column-banded dofmaps\n"
)
assert output in code
@@ -172,9 +180,8 @@ def test_unique_fs_comments():
code = str(psy.gen)
output = (
- " !\n"
- " ! Look-up dofmaps for each function space\n"
- " !\n"
+ "\n"
+ " ! Look-up dofmaps for each function space\n"
)
assert output in code
@@ -186,12 +193,12 @@ def test_stub_decl_dofmaps():
'''
- result = generate(os.path.join(BASE_PATH,
- "columnwise_op_asm_kernel_mod.F90"),
- api=TEST_API)
+ stub_text = generate(os.path.join(BASE_PATH,
+ "columnwise_op_asm_kernel_mod.F90"),
+ api=TEST_API)
- assert ("INTEGER(KIND=i_def), intent(in) :: cma_op_2_nrow, cma_op_2_ncol"
- in str(result))
+ assert "integer(kind=i_def), intent(in) :: cma_op_2_ncol" in stub_text
+ assert "integer(kind=i_def), intent(in) :: cma_op_2_nrow" in stub_text
def test_lfricdofmaps_stub_gen():
@@ -200,12 +207,12 @@ def test_lfricdofmaps_stub_gen():
two fields and one CMA operator as arguments.
'''
- result = generate(os.path.join(BASE_PATH,
- "columnwise_op_app_kernel_mod.F90"),
- api=TEST_API)
+ stub_text = generate(os.path.join(BASE_PATH,
+ "columnwise_op_app_kernel_mod.F90"),
+ api=TEST_API)
expected = (
- " SUBROUTINE columnwise_op_app_kernel_code(cell, ncell_2d, "
+ " subroutine columnwise_op_app_kernel_code(cell, ncell_2d, "
"field_1_aspc1_field_1, field_2_aspc2_field_2, cma_op_3, "
"cma_op_3_nrow, cma_op_3_ncol, cma_op_3_bandwidth, cma_op_3_alpha, "
"cma_op_3_beta, cma_op_3_gamma_m, cma_op_3_gamma_p, "
@@ -214,4 +221,4 @@ def test_lfricdofmaps_stub_gen():
"undf_aspc2_field_2, map_aspc2_field_2, "
"cma_indirection_map_aspc2_field_2)\n"
)
- assert expected in str(result)
+ assert expected in stub_text
diff --git a/src/psyclone/tests/domain/lfric/lfric_domain_kernels_test.py b/src/psyclone/tests/domain/lfric/lfric_domain_kernels_test.py
index 3830f43e2a..95f693e03b 100644
--- a/src/psyclone/tests/domain/lfric/lfric_domain_kernels_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_domain_kernels_test.py
@@ -42,7 +42,7 @@
import os
import pytest
from fparser import api as fpapi
-from psyclone.domain.lfric import LFRicKern, LFRicKernMetadata
+from psyclone.domain.lfric import LFRicKernMetadata
from psyclone.parse.algorithm import parse
from psyclone.parse.utils import ParseError
from psyclone.psyGen import PSyFactory
@@ -283,45 +283,22 @@ def test_psy_gen_domain_kernel(dist_mem, tmpdir, fortran_writer):
_, info = parse(os.path.join(BASE_PATH, "25.0_domain.f90"),
api=TEST_API)
psy = PSyFactory(TEST_API, distributed_memory=dist_mem).create(info)
- gen_code = str(psy.gen).lower()
+ code = str(psy.gen).lower()
# A domain kernel needs the number of columns in the mesh. Therefore
# we require a mesh object.
- assert "type(mesh_type), pointer :: mesh => null()" in gen_code
- assert "mesh => f1_proxy%vspace%get_mesh()" in gen_code
- assert "integer(kind=i_def) ncell_2d_no_halos" in gen_code
- assert "ncell_2d_no_halos = mesh%get_last_edge_cell()" in gen_code
+ assert "type(mesh_type), pointer :: mesh => null()" in code
+ assert "mesh => f1_proxy%vspace%get_mesh()" in code
+ assert "integer(kind=i_def) :: ncell_2d_no_halos" in code
+ assert "ncell_2d_no_halos = mesh%get_last_edge_cell()" in code
# Kernel call should include whole dofmap and not be within a loop
- if dist_mem:
- expected = " ! call kernels and communication routines\n"
- else:
- expected = " ! call our kernels\n"
- assert (expected + " !\n"
- " call testkern_domain_code(nlayers_f1, ncell_2d_no_halos, "
- "b, f1_data, ndf_w3, undf_w3, map_w3)" in gen_code)
+ assert (" call testkern_domain_code(nlayers_f1, ncell_2d_no_halos, "
+ "b, f1_data, ndf_w3, undf_w3, map_w3)" in code)
+ assert "do " not in code
assert LFRicBuild(tmpdir).code_compiles(psy)
- # Also test that the FortranWriter handles domain kernels as expected.
- # ATM we have a `lower_to_language_level method` for LFRicLoop which
- # removes the loop node for a domain kernel entirely and only leaves the
- # body. So we can't call the FortranWriter directly, since it will first
- # lower the tree, which removes the domain kernel.
- # In order to test the actual writer atm, we have to call the
- # `loop_node` directly. But in order for this to work, we need to
- # lower the actual kernel call. Once #1731 is fixed, the temporary
- # `lower_to_language_level` method in LFRicLoop can (likely) be removed,
- # and then we can just call `fortran_writer(schedule)` here.
- schedule = psy.invokes.invoke_list[0].schedule
- # Lower the LFRicKern:
- for kern in schedule.walk(LFRicKern):
- kern.lower_to_language_level()
- # Now call the loop handling method directly.
- out = fortran_writer.loop_node(schedule.children[0])
- assert ("call testkern_domain_code(nlayers_f1, ncell_2d_no_halos, b, "
- "f1_data, ndf_w3, undf_w3, map_w3)" in out)
-
def test_psy_gen_domain_two_kernel(dist_mem, tmpdir):
''' Check the generation of the PSy layer for an invoke consisting of a
@@ -330,30 +307,28 @@ def test_psy_gen_domain_two_kernel(dist_mem, tmpdir):
_, info = parse(os.path.join(BASE_PATH, "25.1_2kern_domain.f90"),
api=TEST_API)
psy = PSyFactory(TEST_API, distributed_memory=dist_mem).create(info)
- gen_code = str(psy.gen).lower()
+ code = str(psy.gen).lower()
- assert "mesh => f2_proxy%vspace%get_mesh()" in gen_code
- assert "integer(kind=i_def) ncell_2d_no_halos" in gen_code
+ assert "mesh => f2_proxy%vspace%get_mesh()" in code
+ assert "integer(kind=i_def) :: ncell_2d_no_halos" in code
expected = (
- " end do\n")
+ " enddo\n")
if dist_mem:
expected += (
- " !\n"
- " ! set halos dirty/clean for fields modified in the above "
- "loop\n"
- " !\n"
- " call f2_proxy%set_dirty()\n"
- " !\n")
+ "\n"
+ " ! set halos dirty/clean for fields modified in the above "
+ "loop(s)\n"
+ " call f2_proxy%set_dirty()\n")
expected += (
- " call testkern_domain_code(nlayers_f1, ncell_2d_no_halos, b, "
+ " call testkern_domain_code(nlayers_f1, ncell_2d_no_halos, b, "
"f1_data, ndf_w3, undf_w3, map_w3)\n")
- assert expected in gen_code
+ assert expected in code
if dist_mem:
- assert (" ! set halos dirty/clean for fields modified in the "
- "above kernel\n"
- " !\n"
- " call f1_proxy%set_dirty()\n" in gen_code)
+ assert (
+ # " ! set halos dirty/clean for fields modified in the "
+ # "above kernel\n"
+ " call f1_proxy%set_dirty()\n" in code)
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -366,59 +341,55 @@ def test_psy_gen_domain_multi_kernel(dist_mem, tmpdir):
_, info = parse(os.path.join(BASE_PATH, "25.2_multikern_domain.f90"),
api=TEST_API)
psy = PSyFactory(TEST_API, distributed_memory=dist_mem).create(info)
- gen_code = str(psy.gen).lower()
+ code = str(psy.gen).lower()
# Check that we only have one last-edge-cell assignment
- assert gen_code.count("ncell_2d_no_halos = mesh%get_last_edge_cell()") == 1
+ assert code.count("ncell_2d_no_halos = mesh%get_last_edge_cell()") == 1
- expected = (" !\n"
- " call testkern_domain_code(nlayers_f1, "
- "ncell_2d_no_halos, b, f1_data, ndf_w3, undf_w3, map_w3)\n")
+ expected = (
+ " call testkern_domain_code(nlayers_f1, ncell_2d_no_halos, "
+ "b, f1_data, ndf_w3, undf_w3, map_w3)\n")
if dist_mem:
- assert "loop1_stop = mesh%get_last_halo_cell(1)\n" in gen_code
- expected += (" !\n"
- " ! set halos dirty/clean for fields modified in "
- "the above kernel\n"
- " !\n"
- " call f1_proxy%set_dirty()\n"
- " !\n"
- " if (f2_proxy%is_dirty(depth=1)) then\n"
- " call f2_proxy%halo_exchange(depth=1)\n"
- " end if\n"
- " if (f3_proxy%is_dirty(depth=1)) then\n"
- " call f3_proxy%halo_exchange(depth=1)\n"
- " end if\n"
- " if (f4_proxy%is_dirty(depth=1)) then\n"
- " call f4_proxy%halo_exchange(depth=1)\n"
- " end if\n"
- " call f1_proxy%halo_exchange(depth=1)\n")
+ assert "loop1_stop = mesh%get_last_halo_cell(1)\n" in code
+ expected += (
+ "\n"
+ " ! set halos dirty/clean for fields modified in "
+ "the above loop(s)\n"
+ " call f1_proxy%set_dirty()\n"
+ " if (f2_proxy%is_dirty(depth=1)) then\n"
+ " call f2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f3_proxy%is_dirty(depth=1)) then\n"
+ " call f3_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f4_proxy%is_dirty(depth=1)) then\n"
+ " call f4_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " call f1_proxy%halo_exchange(depth=1)\n")
else:
- assert "loop1_stop = f2_proxy%vspace%get_ncell()\n" in gen_code
- expected += " do cell = loop1_start, loop1_stop, 1\n"
- assert expected in gen_code
+ assert "loop1_stop = f2_proxy%vspace%get_ncell()\n" in code
+ expected += " do cell = loop1_start, loop1_stop, 1\n"
+ assert expected in code
expected = (
- " end do\n")
+ " enddo\n")
if dist_mem:
expected += (
- " !\n"
- " ! set halos dirty/clean for fields modified in the above "
- "loop\n"
- " !\n"
- " call f1_proxy%set_dirty()\n"
- " !\n")
+ "\n"
+ " ! set halos dirty/clean for fields modified in the above "
+ "loop(s)\n"
+ " call f1_proxy%set_dirty()\n")
expected += (
- " call testkern_domain_code(nlayers_f1, ncell_2d_no_halos, c, "
+ " call testkern_domain_code(nlayers_f1, ncell_2d_no_halos, c, "
"f1_data, ndf_w3, undf_w3, map_w3)\n")
- assert expected in gen_code
+ assert expected in code
if dist_mem:
- assert (" ! set halos dirty/clean for fields modified in the "
- "above kernel\n"
- " !\n"
- " call f5_proxy%set_dirty()\n"
- " !\n"
- " !\n"
- " end subroutine invoke_0" in gen_code)
+ assert (
+ " ! set halos dirty/clean for fields modified in the "
+ "above loop(s)\n"
+ " call f5_proxy%set_dirty()\n"
+ "\n"
+ " end subroutine invoke_0" in code)
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -431,17 +402,17 @@ def test_domain_plus_cma_kernels(dist_mem, tmpdir):
_, info = parse(os.path.join(BASE_PATH, "25.3_multikern_domain_cma.f90"),
api=TEST_API)
psy = PSyFactory(TEST_API, distributed_memory=dist_mem).create(info)
- gen_code = str(psy.gen).lower()
-
- assert "type(mesh_type), pointer :: mesh => null()" in gen_code
- assert "integer(kind=i_def) ncell_2d" in gen_code
- assert "integer(kind=i_def) ncell_2d_no_halos" in gen_code
- assert "mesh => f1_proxy%vspace%get_mesh()" in gen_code
- assert "ncell_2d = mesh%get_ncells_2d()" in gen_code
- assert "ncell_2d_no_halos = mesh%get_last_edge_cell()" in gen_code
+ code = str(psy.gen).lower()
+
+ assert "type(mesh_type), pointer :: mesh => null()" in code
+ assert "integer(kind=i_def) :: ncell_2d" in code
+ assert "integer(kind=i_def) :: ncell_2d_no_halos" in code
+ assert "mesh => f1_proxy%vspace%get_mesh()" in code
+ assert "ncell_2d = mesh%get_ncells_2d()" in code
+ assert "ncell_2d_no_halos = mesh%get_last_edge_cell()" in code
assert ("call testkern_domain_code(nlayers_f1, ncell_2d_no_halos, b, "
- "f1_data, ndf_w3, undf_w3, map_w3)" in gen_code)
+ "f1_data, ndf_w3, undf_w3, map_w3)" in code)
assert ("call columnwise_op_asm_kernel_code(cell, nlayers_lma_op1, "
- "ncell_2d, lma_op1_proxy%ncell_3d," in gen_code)
+ "ncell_2d, lma_op1_proxy%ncell_3d," in code)
assert LFRicBuild(tmpdir).code_compiles(psy)
diff --git a/src/psyclone/tests/domain/lfric/lfric_extract_driver_creator_test.py b/src/psyclone/tests/domain/lfric/lfric_extract_driver_creator_test.py
index a8d8a98ff7..109de4d088 100644
--- a/src/psyclone/tests/domain/lfric/lfric_extract_driver_creator_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_extract_driver_creator_test.py
@@ -272,35 +272,37 @@ def test_lfric_driver_simple_test():
with open(filename, "r", encoding='utf-8') as my_file:
driver = my_file.read()
- for line in ["if (ALLOCATED(psydata_filename)) then",
- "call extract_psy_data%OpenReadFileName(psydata_filename)",
- "else",
- "call extract_psy_data%OpenReadModuleRegion('field', 'test')",
- "end if",
- "call extract_psy_data%ReadVariable('a', a)",
- "call extract_psy_data%ReadVariable('loop0_start', "
- "loop0_start)",
- "call extract_psy_data%ReadVariable('loop0_stop', "
- "loop0_stop)",
- "call extract_psy_data%ReadVariable('m1_data', m1_data)",
- "call extract_psy_data%ReadVariable('m2_data', m2_data)",
- "call extract_psy_data%ReadVariable('map_w1', map_w1)",
- "call extract_psy_data%ReadVariable('map_w2', map_w2)",
- "call extract_psy_data%ReadVariable('map_w3', map_w3)",
- "call extract_psy_data%ReadVariable('ndf_w1', ndf_w1)",
- "call extract_psy_data%ReadVariable('ndf_w2', ndf_w2)",
- "call extract_psy_data%ReadVariable('ndf_w3', ndf_w3)",
- "call extract_psy_data%ReadVariable('nlayers_x_ptr_vector', "
- "nlayers_x_ptr_vector)",
- "call extract_psy_data%ReadVariable('"
- "self_vec_type_vector_data', self_vec_type_vector_data)",
- "call extract_psy_data%ReadVariable('undf_w1', undf_w1)",
- "call extract_psy_data%ReadVariable('undf_w2', undf_w2)",
- "call extract_psy_data%ReadVariable('undf_w3', undf_w3)",
- "call extract_psy_data%ReadVariable('x_ptr_vector_data', "
- "x_ptr_vector_data)",
- "call extract_psy_data%ReadVariable('cell_post', cell_post)"]:
- assert line in driver
+ for line in [
+ "if (ALLOCATED(psydata_filename)) then",
+ "call extract_psy_data%OpenReadFileName(psydata_filename)",
+ "else",
+ "call extract_psy_data%OpenReadModuleRegion('field', 'test')",
+ "end if",
+ "call extract_psy_data%ReadVariable('a', a)",
+ "call extract_psy_data%ReadVariable('loop0_start', "
+ "loop0_start)",
+ "call extract_psy_data%ReadVariable('loop0_stop', "
+ "loop0_stop)",
+ "call extract_psy_data%ReadVariable('m1_data', m1_data)",
+ "call extract_psy_data%ReadVariable('m2_data', m2_data)",
+ "call extract_psy_data%ReadVariable('map_w1', map_w1)",
+ "call extract_psy_data%ReadVariable('map_w2', map_w2)",
+ "call extract_psy_data%ReadVariable('map_w3', map_w3)",
+ "call extract_psy_data%ReadVariable('ndf_w1', ndf_w1)",
+ "call extract_psy_data%ReadVariable('ndf_w2', ndf_w2)",
+ "call extract_psy_data%ReadVariable('ndf_w3', ndf_w3)",
+ "call extract_psy_data%ReadVariable('nlayers_x_ptr_vector', "
+ "nlayers_x_ptr_vector)",
+ "call extract_psy_data%ReadVariable('"
+ "self_vec_type_vector_data', self_vec_type_vector_data)",
+ "call extract_psy_data%ReadVariable('undf_w1', undf_w1)",
+ "call extract_psy_data%ReadVariable('undf_w2', undf_w2)",
+ "call extract_psy_data%ReadVariable('undf_w3', undf_w3)",
+ "call extract_psy_data%ReadVariable('x_ptr_vector_data', "
+ "x_ptr_vector_data)",
+ "call extract_psy_data%ReadVariable('cell_post', cell_post)"
+ ]:
+ assert line.lower() in driver.lower(), line
# A read-write/inc variable should not be allocated (since it will
# be allocated as part of reading in its value):
@@ -360,8 +362,8 @@ def test_lfric_driver_field_arrays():
as an individual field. The driver needs to read in each individual
array member into distinct variables.'''
- _, invoke = get_invoke("8_vector_field_2.f90", API,
- dist_mem=False, idx=0)
+ psy, invoke = get_invoke("8_vector_field_2.f90", API,
+ dist_mem=False, idx=0)
extract = LFRicExtractTrans()
@@ -370,7 +372,7 @@ def test_lfric_driver_field_arrays():
"region_name": ("field", "array")})
# The extraction provides the array once, it is the responsibility
# of the extraction library to create the individual fields.
- out = str(invoke.gen())
+ out = psy.gen
assert "ProvideVariable(\"chi\", chi)" in out
filename = "driver-field-array.F90"
@@ -400,20 +402,20 @@ def test_lfric_driver_operator():
'''Test handling of operators, including the structure members
that are implicitly added.'''
- _, invoke = get_invoke("10.7_operator_read.f90", API,
- dist_mem=False, idx=0)
+ psy, invoke = get_invoke("10.7_operator_read.f90", API,
+ dist_mem=False, idx=0)
extract = LFRicExtractTrans()
extract.apply(invoke.schedule.children[0],
options={"create_driver": True,
"region_name": ("operator", "test")})
- out = str(invoke.gen())
+ out = psy.gen
# Check the structure members that are added for operators:
assert ("ProvideVariable(\"mm_w3_local_stencil\", "
"mm_w3_local_stencil)" in out)
assert ("ProvideVariable(\"mm_w3_proxy%ncell_3d\", "
- "mm_w3_proxy%ncell_3d)" in out)
+ "mm_w3_proxy % ncell_3d)" in out)
assert "ProvideVariable(\"coord_post\", coord)" in out
filename = "driver-operator-test.F90"
@@ -491,14 +493,14 @@ def test_lfric_driver_extract_some_kernels_only():
of the kernels in the tree). This test can potentially be removed
when TODO #1731 is done.'''
- _, invoke = get_invoke("4.5.2_multikernel_invokes.f90", API,
- dist_mem=False, idx=0)
+ psy, invoke = get_invoke("4.5.2_multikernel_invokes.f90", API,
+ dist_mem=False, idx=0)
extract = LFRicExtractTrans()
extract.apply(invoke.schedule.children[2],
options={"create_driver": True,
"region_name": ("field", "test")})
- code = str(invoke.gen())
+ code = psy.gen
# We only extract the third loop, which uses the index '2' for
# loop boundaries. So none of the previous loop indices should
@@ -535,14 +537,14 @@ def test_lfric_driver_extract_some_kernels_only():
def test_lfric_driver_field_array_write():
'''Test the handling of arrays of fields which are written.'''
- _, invoke = get_invoke("10.7_operator_read.f90", API,
- dist_mem=False, idx=0)
+ psy, invoke = get_invoke("10.7_operator_read.f90", API,
+ dist_mem=False, idx=0)
extract = LFRicExtractTrans()
extract.apply(invoke.schedule.children[0],
options={"create_driver": True,
"region_name": ("field", "test")})
- code = str(invoke.gen())
+ code = psy.gen
# The variable coord is an output variable, but it still must
# be provided as input field (since a kernel might only write
# some parts of a field - e.g. most kernels won't update halo
@@ -586,14 +588,14 @@ def test_lfric_driver_field_array_inc():
'''Test the handling of arrays of fields which are incremented (i.e.
read and written).'''
- _, invoke = get_invoke("8_vector_field_2.f90", API,
- dist_mem=False, idx=0)
+ psy, invoke = get_invoke("8_vector_field_2.f90", API,
+ dist_mem=False, idx=0)
extract = LFRicExtractTrans()
extract.apply(invoke.schedule.children[0],
options={"create_driver": True,
"region_name": ("field", "test")})
- code = str(invoke.gen())
+ code = psy.gen
assert 'ProvideVariable("chi", chi)' in code
assert 'ProvideVariable("f1_data", f1_data)' in code
assert 'ProvideVariable("chi_post", chi)' in code
@@ -632,32 +634,32 @@ def test_lfric_driver_external_symbols():
external functions that use module variables.
'''
- _, invoke = get_invoke("driver_creation/invoke_kernel_with_imported_"
- "symbols.f90", API, dist_mem=False, idx=0)
+ psy, invoke = get_invoke("driver_creation/invoke_kernel_with_imported_"
+ "symbols.f90", API, dist_mem=False, idx=0)
extract = LFRicExtractTrans()
extract.apply(invoke.schedule.children[0],
options={"create_driver": True,
"region_name": ("import", "test")})
- code = str(invoke.gen())
- assert ('CALL extract_psy_data%PreDeclareVariable("'
+ code = psy.gen
+ assert ('CALL extract_psy_data % PreDeclareVariable("'
'module_var_a_post@module_with_var_mod", module_var_a)' in code)
- assert ('CALL extract_psy_data%ProvideVariable("'
+ assert ('CALL extract_psy_data % ProvideVariable("'
'module_var_a_post@module_with_var_mod", module_var_a)' in code)
# Check that const-size arrays are exported:
expected = [
- 'USE module_with_var_mod, ONLY: const_size_array',
- 'CALL extract_psy_data%PreDeclareVariable("const_size_array@'
+ 'use module_with_var_mod, only : const_size_array',
+ 'CALL extract_psy_data % PreDeclareVariable("const_size_array@'
'module_with_var_mod", const_size_array)',
- 'CALL extract_psy_data%PreDeclareVariable("const_size_array_post@'
+ 'CALL extract_psy_data % PreDeclareVariable("const_size_array_post@'
'module_with_var_mod", const_size_array)',
- 'CALL extract_psy_data%ProvideVariable("const_size_array@'
+ 'CALL extract_psy_data % ProvideVariable("const_size_array@'
'module_with_var_mod", const_size_array)',
- 'CALL extract_psy_data%ProvideVariable("const_size_array_post@'
+ 'CALL extract_psy_data % ProvideVariable("const_size_array_post@'
'module_with_var_mod", const_size_array)']
for line in expected:
- assert line in code
+ assert line in code, line
filename = "driver-import-test.F90"
with open(filename, "r", encoding='utf-8') as my_file:
@@ -683,20 +685,20 @@ def test_lfric_driver_external_symbols_name_clash():
a name clash.
'''
- _, invoke = get_invoke("driver_creation/invoke_kernel_with_imported_"
- "symbols.f90", API, dist_mem=False, idx=1)
+ psy, invoke = get_invoke("driver_creation/invoke_kernel_with_imported_"
+ "symbols.f90", API, dist_mem=False, idx=1)
extract = LFRicExtractTrans()
extract.apply(invoke.schedule.children[0],
options={"create_driver": True,
"region_name": ("import", "test")})
- code = str(invoke.gen())
+ code = psy.gen
# Make sure the imported, clashing symbol 'f1_data' is renamed:
- assert "USE module_with_name_clash_mod, ONLY: f1_data_1=>f1_data" in code
- assert ('CALL extract_psy_data%PreDeclareVariable("f1_data@'
+ assert "use module_with_name_clash_mod, only : f1_data_1=>f1_data" in code
+ assert ('CALL extract_psy_data % PreDeclareVariable("f1_data@'
'module_with_name_clash_mod", f1_data_1)' in code)
- assert ('CALL extract_psy_data%ProvideVariable("f1_data@'
+ assert ('CALL extract_psy_data % ProvideVariable("f1_data@'
'module_with_name_clash_mod", f1_data_1)' in code)
# Even though PSyclone cannot find the variable, it should still be
@@ -729,19 +731,19 @@ def test_lfric_driver_external_symbols_error(capsys):
resulting in external functions and variables that cannot be found.
'''
- _, invoke = get_invoke("driver_creation/invoke_kernel_with_imported_"
- "symbols_error.f90", API, dist_mem=False, idx=0)
+ psy, invoke = get_invoke("driver_creation/invoke_kernel_with_imported_"
+ "symbols_error.f90", API, dist_mem=False, idx=0)
extract = LFRicExtractTrans()
extract.apply(invoke.schedule.children[0],
options={"create_driver": True,
"region_name": ("import", "test")})
- code = str(invoke.gen())
+ code = psy.gen
# Even though PSyclone cannot find the variable, it should still be
# extracted:
- assert ('CALL extract_psy_data%PreDeclareVariable("non_existent_var@'
+ assert ('CALL extract_psy_data % PreDeclareVariable("non_existent_var@'
'module_with_error_mod", non_existent_var' in code)
- assert ('CALL extract_psy_data%ProvideVariable("non_existent_var@'
+ assert ('CALL extract_psy_data % ProvideVariable("non_existent_var@'
'module_with_error_mod", non_existent_var' in code)
filename = "driver-import-test.F90"
@@ -764,7 +766,7 @@ def test_lfric_driver_external_symbols_error(capsys):
# This variable will be ignored (for now, see TODO 2120) so no code will
# be created for it. The string will still be in the created driver (since
# the module is still inlined), but no ReadVariable code should be created:
- assert "call extract_psy_data%ReadVariable('non_existent@" not in driver
+ assert "call extract_psy_data % ReadVariable('non_existent@" not in driver
# Note that this driver cannot be compiled, since one of the inlined
# source files is invalid Fortran.
diff --git a/src/psyclone/tests/domain/lfric/lfric_field_codegen_test.py b/src/psyclone/tests/domain/lfric/lfric_field_codegen_test.py
index 9a3684ab3c..97d21fd5d4 100644
--- a/src/psyclone/tests/domain/lfric/lfric_field_codegen_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_field_codegen_test.py
@@ -65,84 +65,90 @@ def test_field(tmpdir):
api=TEST_API)
psy = PSyFactory(TEST_API, distributed_memory=False).create(invoke_info)
- assert LFRicBuild(tmpdir).code_compiles(psy)
-
generated_code = psy.gen
output = (
- " MODULE single_invoke_psy\n"
- " USE constants_mod, ONLY: r_def, i_def\n"
- " USE field_mod, ONLY: field_type, field_proxy_type\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_testkern_type(a, f1, f2, m1, m2)\n"
- " USE testkern_mod, ONLY: testkern_code\n"
- " REAL(KIND=r_def), intent(in) :: a\n"
- " TYPE(field_type), intent(in) :: f1, f2, m1, m2\n"
- " INTEGER(KIND=i_def) cell\n"
- " INTEGER(KIND=i_def) loop0_start, loop0_stop\n"
- " INTEGER(KIND=i_def) nlayers_f1\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m1_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f1_data => null()\n"
- " TYPE(field_proxy_type) f1_proxy, f2_proxy, m1_proxy, m2_proxy\n"
- " INTEGER(KIND=i_def), pointer :: map_w1(:,:) => null(), "
- "map_w2(:,:) => null(), map_w3(:,:) => null()\n"
- " INTEGER(KIND=i_def) ndf_w1, undf_w1, ndf_w2, undf_w2, ndf_w3, "
- "undf_w3\n"
- " !\n"
- " ! Initialise field and/or operator proxies\n"
- " !\n"
- " f1_proxy = f1%get_proxy()\n"
- " f1_data => f1_proxy%data\n"
- " f2_proxy = f2%get_proxy()\n"
- " f2_data => f2_proxy%data\n"
- " m1_proxy = m1%get_proxy()\n"
- " m1_data => m1_proxy%data\n"
- " m2_proxy = m2%get_proxy()\n"
- " m2_data => m2_proxy%data\n"
- " !\n"
- " ! Initialise number of layers\n"
- " !\n"
- " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
- " !\n"
- " ! Look-up dofmaps for each function space\n"
- " !\n"
- " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
- " map_w2 => f2_proxy%vspace%get_whole_dofmap()\n"
- " map_w3 => m2_proxy%vspace%get_whole_dofmap()\n"
- " !\n"
- " ! Initialise number of DoFs for w1\n"
- " !\n"
- " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
- " undf_w1 = f1_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2\n"
- " !\n"
- " ndf_w2 = f2_proxy%vspace%get_ndf()\n"
- " undf_w2 = f2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w3\n"
- " !\n"
- " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
- " undf_w3 = m2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Set-up all of the loop bounds\n"
- " !\n"
- " loop0_start = 1\n"
- " loop0_stop = f1_proxy%vspace%get_ncell()\n"
- " !\n"
- " ! Call our kernels\n"
- " !\n"
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_code(nlayers_f1, a, f1_data, f2_data, "
+ "module single_invoke_psy\n"
+ " use constants_mod\n"
+ " use field_mod, only : field_proxy_type, field_type\n"
+ " implicit none\n"
+ " public\n"
+ "\n"
+ " contains\n"
+ " subroutine invoke_0_testkern_type(a, f1, f2, m1, m2)\n"
+ " use testkern_mod, only : testkern_code\n"
+ " real(kind=r_def), intent(in) :: a\n"
+ " type(field_type), intent(in) :: f1\n"
+ " type(field_type), intent(in) :: f2\n"
+ " type(field_type), intent(in) :: m1\n"
+ " type(field_type), intent(in) :: m2\n"
+ " integer(kind=i_def) :: cell\n"
+ " integer(kind=i_def) :: loop0_start\n"
+ " integer(kind=i_def) :: loop0_stop\n"
+ " real(kind=r_def), pointer, dimension(:) :: f1_data => null()\n"
+ " real(kind=r_def), pointer, dimension(:) :: f2_data => null()\n"
+ " real(kind=r_def), pointer, dimension(:) :: m1_data => null()\n"
+ " real(kind=r_def), pointer, dimension(:) :: m2_data => null()\n"
+ " integer(kind=i_def) :: nlayers_f1\n"
+ " integer(kind=i_def) :: ndf_w1\n"
+ " integer(kind=i_def) :: undf_w1\n"
+ " integer(kind=i_def) :: ndf_w2\n"
+ " integer(kind=i_def) :: undf_w2\n"
+ " integer(kind=i_def) :: ndf_w3\n"
+ " integer(kind=i_def) :: undf_w3\n"
+ " integer(kind=i_def), pointer :: map_w1(:,:) => null()\n"
+ " integer(kind=i_def), pointer :: map_w2(:,:) => null()\n"
+ " integer(kind=i_def), pointer :: map_w3(:,:) => null()\n"
+ " type(field_proxy_type) :: f1_proxy\n"
+ " type(field_proxy_type) :: f2_proxy\n"
+ " type(field_proxy_type) :: m1_proxy\n"
+ " type(field_proxy_type) :: m2_proxy\n"
+ "\n"
+ " ! Initialise field and/or operator proxies\n"
+ " f1_proxy = f1%get_proxy()\n"
+ " f1_data => f1_proxy%data\n"
+ " f2_proxy = f2%get_proxy()\n"
+ " f2_data => f2_proxy%data\n"
+ " m1_proxy = m1%get_proxy()\n"
+ " m1_data => m1_proxy%data\n"
+ " m2_proxy = m2%get_proxy()\n"
+ " m2_data => m2_proxy%data\n"
+ "\n"
+ " ! Initialise number of layers\n"
+ " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
+ "\n"
+ " ! Look-up dofmaps for each function space\n"
+ " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2 => f2_proxy%vspace%get_whole_dofmap()\n"
+ " map_w3 => m2_proxy%vspace%get_whole_dofmap()\n"
+ "\n"
+ " ! Initialise number of DoFs for w1\n"
+ " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
+ " undf_w1 = f1_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2\n"
+ " ndf_w2 = f2_proxy%vspace%get_ndf()\n"
+ " undf_w2 = f2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w3\n"
+ " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
+ " undf_w3 = m2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Set-up all of the loop bounds\n"
+ " loop0_start = 1\n"
+ " loop0_stop = f1_proxy%vspace%get_ncell()\n"
+ "\n"
+ " ! Call kernels\n"
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_code(nlayers_f1, a, f1_data, f2_data, "
"m1_data, m2_data, ndf_w1, undf_w1, map_w1(:,cell), "
"ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, undf_w3, map_w3(:,cell))\n"
- " END DO\n"
- " !\n"
- " END SUBROUTINE invoke_0_testkern_type\n"
- " END MODULE single_invoke_psy")
- assert output in str(generated_code)
+ " enddo\n"
+ "\n"
+ " end subroutine invoke_0_testkern_type\n"
+ "\n"
+ "end module single_invoke_psy\n")
+ assert output == str(generated_code)
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_field_deref(tmpdir, dist_mem):
@@ -158,129 +164,134 @@ def test_field_deref(tmpdir, dist_mem):
distributed_memory=dist_mem).create(invoke_info)
generated_code = str(psy.gen)
output = (
- " SUBROUTINE invoke_0_testkern_type(a, f1, est_f2, m1, "
- "est_m2)\n"
- " USE testkern_mod, ONLY: testkern_code\n")
+ " subroutine invoke_0_testkern_type(a, f1, est_f2, m1, "
+ "est_m2)\n")
assert output in generated_code
+ assert "use testkern_mod, only : testkern_code\n" in generated_code
if dist_mem:
- output = " USE mesh_mod, ONLY: mesh_type\n"
+ output = " use mesh_mod, only : mesh_type\n"
assert output in generated_code
assert LFRicBuild(tmpdir).code_compiles(psy)
output = (
- " REAL(KIND=r_def), intent(in) :: a\n"
- " TYPE(field_type), intent(in) :: f1, est_f2, m1, est_m2\n"
- " INTEGER(KIND=i_def) cell\n"
- " INTEGER(KIND=i_def) loop0_start, loop0_stop\n"
- " INTEGER(KIND=i_def) nlayers_f1\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: est_m2_data => "
+ " real(kind=r_def), intent(in) :: a\n"
+ " type(field_type), intent(in) :: f1\n"
+ " type(field_type), intent(in) :: est_f2\n"
+ " type(field_type), intent(in) :: m1\n"
+ " type(field_type), intent(in) :: est_m2\n"
+ " integer(kind=i_def) :: cell\n"
+ " integer(kind=i_def) :: loop0_start\n"
+ " integer(kind=i_def) :: loop0_stop\n"
+ )
+ assert output in generated_code
+ output = (
+ " real(kind=r_def), pointer, dimension(:) :: f1_data => null()\n"
+ " real(kind=r_def), pointer, dimension(:) :: est_f2_data => "
"null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m1_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: est_f2_data => "
+ " real(kind=r_def), pointer, dimension(:) :: m1_data => null()\n"
+ " real(kind=r_def), pointer, dimension(:) :: est_m2_data => "
"null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f1_data => null()\n"
- " TYPE(field_proxy_type) f1_proxy, est_f2_proxy, m1_proxy, "
- "est_m2_proxy\n"
- " INTEGER(KIND=i_def), pointer :: map_w1(:,:) => null(), "
- "map_w2(:,:) => null(), map_w3(:,:) => null()\n"
- " INTEGER(KIND=i_def) ndf_w1, undf_w1, ndf_w2, undf_w2, ndf_w3, "
- "undf_w3\n")
+ " integer(kind=i_def) :: nlayers_f1\n"
+ " integer(kind=i_def) :: ndf_w1\n"
+ " integer(kind=i_def) :: undf_w1\n"
+ " integer(kind=i_def) :: ndf_w2\n"
+ " integer(kind=i_def) :: undf_w2\n"
+ " integer(kind=i_def) :: ndf_w3\n"
+ " integer(kind=i_def) :: undf_w3\n"
+ " integer(kind=i_def), pointer :: map_w1(:,:) => null()\n"
+ " integer(kind=i_def), pointer :: map_w2(:,:) => null()\n"
+ " integer(kind=i_def), pointer :: map_w3(:,:) => null()\n"
+ " type(field_proxy_type) :: f1_proxy\n"
+ " type(field_proxy_type) :: est_f2_proxy\n"
+ " type(field_proxy_type) :: m1_proxy\n"
+ " type(field_proxy_type) :: est_m2_proxy\n"
+ )
assert output in generated_code
if dist_mem:
- output = " TYPE(mesh_type), pointer :: mesh => null()\n"
+ output = " type(mesh_type), pointer :: mesh => null()\n"
assert output in generated_code
output = (
- " !\n"
- " ! Initialise field and/or operator proxies\n"
- " !\n"
- " f1_proxy = f1%get_proxy()\n"
- " f1_data => f1_proxy%data\n"
- " est_f2_proxy = est_f2%get_proxy()\n"
- " est_f2_data => est_f2_proxy%data\n"
- " m1_proxy = m1%get_proxy()\n"
- " m1_data => m1_proxy%data\n"
- " est_m2_proxy = est_m2%get_proxy()\n"
- " est_m2_data => est_m2_proxy%data\n"
- " !\n"
- " ! Initialise number of layers\n"
- " !\n"
- " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n")
+ "\n"
+ " ! Initialise field and/or operator proxies\n"
+ " f1_proxy = f1%get_proxy()\n"
+ " f1_data => f1_proxy%data\n"
+ " est_f2_proxy = est_f2%get_proxy()\n"
+ " est_f2_data => est_f2_proxy%data\n"
+ " m1_proxy = m1%get_proxy()\n"
+ " m1_data => m1_proxy%data\n"
+ " est_m2_proxy = est_m2%get_proxy()\n"
+ " est_m2_data => est_m2_proxy%data\n"
+ "\n"
+ " ! Initialise number of layers\n"
+ " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n")
assert output in generated_code
if dist_mem:
output = (
- " !\n"
- " ! Create a mesh object\n"
- " !\n"
- " mesh => f1_proxy%vspace%get_mesh()\n"
+ "\n"
+ " ! Create a mesh object\n"
+ " mesh => f1_proxy%vspace%get_mesh()\n"
)
assert output in generated_code
output = (
- " !\n"
- " ! Look-up dofmaps for each function space\n"
- " !\n"
- " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
- " map_w2 => est_f2_proxy%vspace%get_whole_dofmap()\n"
- " map_w3 => est_m2_proxy%vspace%get_whole_dofmap()\n"
- " !\n")
+ "\n"
+ " ! Look-up dofmaps for each function space\n"
+ " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2 => est_f2_proxy%vspace%get_whole_dofmap()\n"
+ " map_w3 => est_m2_proxy%vspace%get_whole_dofmap()\n"
+ "\n")
assert output in generated_code
output = (
- " ! Initialise number of DoFs for w1\n"
- " !\n"
- " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
- " undf_w1 = f1_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2\n"
- " !\n"
- " ndf_w2 = est_f2_proxy%vspace%get_ndf()\n"
- " undf_w2 = est_f2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w3\n"
- " !\n"
- " ndf_w3 = est_m2_proxy%vspace%get_ndf()\n"
- " undf_w3 = est_m2_proxy%vspace%get_undf()\n"
- " !\n")
+ " ! Initialise number of DoFs for w1\n"
+ " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
+ " undf_w1 = f1_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2\n"
+ " ndf_w2 = est_f2_proxy%vspace%get_ndf()\n"
+ " undf_w2 = est_f2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w3\n"
+ " ndf_w3 = est_m2_proxy%vspace%get_ndf()\n"
+ " undf_w3 = est_m2_proxy%vspace%get_undf()\n"
+ "\n")
assert output in generated_code
if dist_mem:
assert "loop0_stop = mesh%get_last_halo_cell(1)\n" in generated_code
output = (
- " ! Call kernels and communication routines\n"
- " !\n"
- " IF (f1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (est_f2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL est_f2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (est_m2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL est_m2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " DO cell = loop0_start, loop0_stop, 1\n")
+ " ! Call kernels and communication routines\n"
+ " if (f1_proxy%is_dirty(depth=1)) then\n"
+ " call f1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (est_f2_proxy%is_dirty(depth=1)) then\n"
+ " call est_f2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m1_proxy%is_dirty(depth=1)) then\n"
+ " call m1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (est_m2_proxy%is_dirty(depth=1)) then\n"
+ " call est_m2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " do cell = loop0_start, loop0_stop, 1\n")
assert output in generated_code
else:
assert "loop0_stop = f1_proxy%vspace%get_ncell()\n" in generated_code
output = (
- " ! Call our kernels\n"
- " !\n"
- " DO cell = loop0_start, loop0_stop, 1\n")
+ " ! Call kernels\n"
+ " do cell = loop0_start, loop0_stop, 1\n")
assert output in generated_code
output = (
- " CALL testkern_code(nlayers_f1, a, f1_data, est_f2_data, "
+ " call testkern_code(nlayers_f1, a, f1_data, est_f2_data, "
"m1_data, est_m2_data, ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, "
"undf_w2, map_w2(:,cell), ndf_w3, undf_w3, map_w3(:,cell))\n"
- " END DO\n")
+ " enddo\n")
assert output in generated_code
if dist_mem:
output = (
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
- "above loop\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " !")
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
+ "above loop(s)\n"
+ " call f1_proxy%set_dirty()\n"
+ )
assert output in generated_code
@@ -296,217 +307,250 @@ def test_field_fs(tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
generated_code = str(psy.gen)
- output = (
- " MODULE single_invoke_fs_psy\n"
- " USE constants_mod, ONLY: r_def, i_def\n"
- " USE field_mod, ONLY: field_type, field_proxy_type\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_testkern_fs_type(f1, f2, m1, m2, f3, f4, "
- "m3, m4, f5, f6, m5, m6, m7)\n"
- " USE testkern_fs_mod, ONLY: testkern_fs_code\n"
- " USE mesh_mod, ONLY: mesh_type\n"
- " TYPE(field_type), intent(in) :: f1, f2, m1, m2, f3, f4, m3, "
- "m4, f5, f6, m5, m6, m7\n"
- " INTEGER(KIND=i_def) cell\n"
- " INTEGER(KIND=i_def) loop0_start, loop0_stop\n"
- " INTEGER(KIND=i_def) nlayers_f1\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m7_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m6_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m5_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f6_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f5_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m4_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m3_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f4_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f3_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m1_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f1_data => null()\n"
- " TYPE(field_proxy_type) f1_proxy, f2_proxy, m1_proxy, "
- "m2_proxy, f3_proxy, f4_proxy, m3_proxy, m4_proxy, f5_proxy, "
- "f6_proxy, m5_proxy, m6_proxy, m7_proxy\n"
- " INTEGER(KIND=i_def), pointer :: map_any_w2(:,:) => null(), "
- "map_w0(:,:) => null(), map_w1(:,:) => null(), map_w2(:,:) => "
- "null(), map_w2broken(:,:) => null(), map_w2h(:,:) => null(), "
- "map_w2htrace(:,:) => null(), map_w2trace(:,:) => null(), "
- "map_w2v(:,:) => null(), map_w2vtrace(:,:) => null(), map_w3(:,:) "
- "=> null(), map_wchi(:,:) => null(), map_wtheta(:,:) => null()\n"
- " INTEGER(KIND=i_def) ndf_w1, undf_w1, ndf_w2, undf_w2, ndf_w0, "
- "undf_w0, ndf_w3, undf_w3, ndf_wtheta, undf_wtheta, ndf_w2h, "
- "undf_w2h, ndf_w2v, undf_w2v, ndf_w2broken, undf_w2broken, "
- "ndf_w2trace, undf_w2trace, ndf_w2htrace, undf_w2htrace, "
- "ndf_w2vtrace, undf_w2vtrace, ndf_wchi, undf_wchi, ndf_any_w2, "
- "undf_any_w2\n"
- " INTEGER(KIND=i_def) max_halo_depth_mesh\n"
- " TYPE(mesh_type), pointer :: mesh => null()\n")
+ output = """\
+module single_invoke_fs_psy
+ use constants_mod
+ use field_mod, only : field_proxy_type, field_type
+ implicit none
+ public
+
+ contains
+ subroutine invoke_0_testkern_fs_type(f1, f2, m1, m2, f3, f4, m3, m4, f5, \
+f6, m5, m6, m7)
+ use mesh_mod, only : mesh_type
+ use testkern_fs_mod, only : testkern_fs_code
+ type(field_type), intent(in) :: f1
+ type(field_type), intent(in) :: f2
+ type(field_type), intent(in) :: m1
+ type(field_type), intent(in) :: m2
+ type(field_type), intent(in) :: f3
+ type(field_type), intent(in) :: f4
+ type(field_type), intent(in) :: m3
+ type(field_type), intent(in) :: m4
+ type(field_type), intent(in) :: f5
+ type(field_type), intent(in) :: f6
+ type(field_type), intent(in) :: m5
+ type(field_type), intent(in) :: m6
+ type(field_type), intent(in) :: m7
+ integer(kind=i_def) :: cell
+ integer(kind=i_def) :: loop0_start
+ integer(kind=i_def) :: loop0_stop
+ type(mesh_type), pointer :: mesh => null()
+ integer(kind=i_def) :: max_halo_depth_mesh
+ real(kind=r_def), pointer, dimension(:) :: f1_data => null()
+ real(kind=r_def), pointer, dimension(:) :: f2_data => null()
+ real(kind=r_def), pointer, dimension(:) :: m1_data => null()
+ real(kind=r_def), pointer, dimension(:) :: m2_data => null()
+ real(kind=r_def), pointer, dimension(:) :: f3_data => null()
+ real(kind=r_def), pointer, dimension(:) :: f4_data => null()
+ real(kind=r_def), pointer, dimension(:) :: m3_data => null()
+ real(kind=r_def), pointer, dimension(:) :: m4_data => null()
+ real(kind=r_def), pointer, dimension(:) :: f5_data => null()
+ real(kind=r_def), pointer, dimension(:) :: f6_data => null()
+ real(kind=r_def), pointer, dimension(:) :: m5_data => null()
+ real(kind=r_def), pointer, dimension(:) :: m6_data => null()
+ real(kind=r_def), pointer, dimension(:) :: m7_data => null()
+ integer(kind=i_def) :: nlayers_f1
+ integer(kind=i_def) :: ndf_w1
+ integer(kind=i_def) :: undf_w1
+ integer(kind=i_def) :: ndf_w2
+ integer(kind=i_def) :: undf_w2
+ integer(kind=i_def) :: ndf_w0
+ integer(kind=i_def) :: undf_w0
+ integer(kind=i_def) :: ndf_w3
+ integer(kind=i_def) :: undf_w3
+ integer(kind=i_def) :: ndf_wtheta
+ integer(kind=i_def) :: undf_wtheta
+ integer(kind=i_def) :: ndf_w2h
+ integer(kind=i_def) :: undf_w2h
+ integer(kind=i_def) :: ndf_w2v
+ integer(kind=i_def) :: undf_w2v
+ integer(kind=i_def) :: ndf_w2broken
+ integer(kind=i_def) :: undf_w2broken
+ integer(kind=i_def) :: ndf_w2trace
+ integer(kind=i_def) :: undf_w2trace
+ integer(kind=i_def) :: ndf_w2htrace
+ integer(kind=i_def) :: undf_w2htrace
+ integer(kind=i_def) :: ndf_w2vtrace
+ integer(kind=i_def) :: undf_w2vtrace
+ integer(kind=i_def) :: ndf_wchi
+ integer(kind=i_def) :: undf_wchi
+ integer(kind=i_def) :: ndf_any_w2
+ integer(kind=i_def) :: undf_any_w2
+ integer(kind=i_def), pointer :: map_any_w2(:,:) => null()
+ integer(kind=i_def), pointer :: map_w0(:,:) => null()
+ integer(kind=i_def), pointer :: map_w1(:,:) => null()
+ integer(kind=i_def), pointer :: map_w2(:,:) => null()
+ integer(kind=i_def), pointer :: map_w2broken(:,:) => null()
+ integer(kind=i_def), pointer :: map_w2h(:,:) => null()
+ integer(kind=i_def), pointer :: map_w2htrace(:,:) => null()
+ integer(kind=i_def), pointer :: map_w2trace(:,:) => null()
+ integer(kind=i_def), pointer :: map_w2v(:,:) => null()
+ integer(kind=i_def), pointer :: map_w2vtrace(:,:) => null()
+ integer(kind=i_def), pointer :: map_w3(:,:) => null()
+ integer(kind=i_def), pointer :: map_wchi(:,:) => null()
+ integer(kind=i_def), pointer :: map_wtheta(:,:) => null()
+ type(field_proxy_type) :: f1_proxy
+ type(field_proxy_type) :: f2_proxy
+ type(field_proxy_type) :: m1_proxy
+ type(field_proxy_type) :: m2_proxy
+ type(field_proxy_type) :: f3_proxy
+ type(field_proxy_type) :: f4_proxy
+ type(field_proxy_type) :: m3_proxy
+ type(field_proxy_type) :: m4_proxy
+ type(field_proxy_type) :: f5_proxy
+ type(field_proxy_type) :: f6_proxy
+ type(field_proxy_type) :: m5_proxy
+ type(field_proxy_type) :: m6_proxy
+ type(field_proxy_type) :: m7_proxy
+"""
assert output in generated_code
output = (
- " ! Initialise field and/or operator proxies\n"
- " !\n"
- " f1_proxy = f1%get_proxy()\n"
- " f1_data => f1_proxy%data\n"
- " f2_proxy = f2%get_proxy()\n"
- " f2_data => f2_proxy%data\n"
- " m1_proxy = m1%get_proxy()\n"
- " m1_data => m1_proxy%data\n"
- " m2_proxy = m2%get_proxy()\n"
- " m2_data => m2_proxy%data\n"
- " f3_proxy = f3%get_proxy()\n"
- " f3_data => f3_proxy%data\n"
- " f4_proxy = f4%get_proxy()\n"
- " f4_data => f4_proxy%data\n"
- " m3_proxy = m3%get_proxy()\n"
- " m3_data => m3_proxy%data\n"
- " m4_proxy = m4%get_proxy()\n"
- " m4_data => m4_proxy%data\n"
- " f5_proxy = f5%get_proxy()\n"
- " f5_data => f5_proxy%data\n"
- " f6_proxy = f6%get_proxy()\n"
- " f6_data => f6_proxy%data\n"
- " m5_proxy = m5%get_proxy()\n"
- " m5_data => m5_proxy%data\n"
- " m6_proxy = m6%get_proxy()\n"
- " m6_data => m6_proxy%data\n"
- " m7_proxy = m7%get_proxy()\n"
- " m7_data => m7_proxy%data\n"
- " !\n"
- " ! Initialise number of layers\n"
- " !\n"
- " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
- " !\n"
- " ! Create a mesh object\n"
- " !\n"
- " mesh => f1_proxy%vspace%get_mesh()\n"
- " max_halo_depth_mesh = mesh%get_halo_depth()\n"
- " !\n"
- " ! Look-up dofmaps for each function space\n"
- " !\n"
- " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
- " map_w2 => f2_proxy%vspace%get_whole_dofmap()\n"
- " map_w0 => m1_proxy%vspace%get_whole_dofmap()\n"
- " map_w3 => m2_proxy%vspace%get_whole_dofmap()\n"
- " map_wtheta => f3_proxy%vspace%get_whole_dofmap()\n"
- " map_w2h => f4_proxy%vspace%get_whole_dofmap()\n"
- " map_w2v => m3_proxy%vspace%get_whole_dofmap()\n"
- " map_w2broken => m4_proxy%vspace%get_whole_dofmap()\n"
- " map_w2trace => f5_proxy%vspace%get_whole_dofmap()\n"
- " map_w2htrace => f6_proxy%vspace%get_whole_dofmap()\n"
- " map_w2vtrace => m5_proxy%vspace%get_whole_dofmap()\n"
- " map_wchi => m6_proxy%vspace%get_whole_dofmap()\n"
- " map_any_w2 => m7_proxy%vspace%get_whole_dofmap()\n"
- " !\n"
- " ! Initialise number of DoFs for w1\n"
- " !\n"
- " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
- " undf_w1 = f1_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2\n"
- " !\n"
- " ndf_w2 = f2_proxy%vspace%get_ndf()\n"
- " undf_w2 = f2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w0\n"
- " !\n"
- " ndf_w0 = m1_proxy%vspace%get_ndf()\n"
- " undf_w0 = m1_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w3\n"
- " !\n"
- " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
- " undf_w3 = m2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for wtheta\n"
- " !\n"
- " ndf_wtheta = f3_proxy%vspace%get_ndf()\n"
- " undf_wtheta = f3_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2h\n"
- " !\n"
- " ndf_w2h = f4_proxy%vspace%get_ndf()\n"
- " undf_w2h = f4_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2v\n"
- " !\n"
- " ndf_w2v = m3_proxy%vspace%get_ndf()\n"
- " undf_w2v = m3_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2broken\n"
- " !\n"
- " ndf_w2broken = m4_proxy%vspace%get_ndf()\n"
- " undf_w2broken = m4_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2trace\n"
- " !\n"
- " ndf_w2trace = f5_proxy%vspace%get_ndf()\n"
- " undf_w2trace = f5_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2htrace\n"
- " !\n"
- " ndf_w2htrace = f6_proxy%vspace%get_ndf()\n"
- " undf_w2htrace = f6_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2vtrace\n"
- " !\n"
- " ndf_w2vtrace = m5_proxy%vspace%get_ndf()\n"
- " undf_w2vtrace = m5_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for wchi\n"
- " !\n"
- " ndf_wchi = m6_proxy%vspace%get_ndf()\n"
- " undf_wchi = m6_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for any_w2\n"
- " !\n"
- " ndf_any_w2 = m7_proxy%vspace%get_ndf()\n"
- " undf_any_w2 = m7_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Set-up all of the loop bounds\n"
- " !\n"
- " loop0_start = 1\n"
- " loop0_stop = mesh%get_last_halo_cell(1)\n"
- " !\n"
- " ! Call kernels and communication routines\n"
- " !\n"
- " IF (f1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f4_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f4_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m3_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m3_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m4_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m4_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f5_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f5_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f6_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f6_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m5_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m5_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m6_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m6_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m7_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m7_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_fs_code(nlayers_f1, f1_data, f2_data, "
+ " ! Initialise field and/or operator proxies\n"
+ " f1_proxy = f1%get_proxy()\n"
+ " f1_data => f1_proxy%data\n"
+ " f2_proxy = f2%get_proxy()\n"
+ " f2_data => f2_proxy%data\n"
+ " m1_proxy = m1%get_proxy()\n"
+ " m1_data => m1_proxy%data\n"
+ " m2_proxy = m2%get_proxy()\n"
+ " m2_data => m2_proxy%data\n"
+ " f3_proxy = f3%get_proxy()\n"
+ " f3_data => f3_proxy%data\n"
+ " f4_proxy = f4%get_proxy()\n"
+ " f4_data => f4_proxy%data\n"
+ " m3_proxy = m3%get_proxy()\n"
+ " m3_data => m3_proxy%data\n"
+ " m4_proxy = m4%get_proxy()\n"
+ " m4_data => m4_proxy%data\n"
+ " f5_proxy = f5%get_proxy()\n"
+ " f5_data => f5_proxy%data\n"
+ " f6_proxy = f6%get_proxy()\n"
+ " f6_data => f6_proxy%data\n"
+ " m5_proxy = m5%get_proxy()\n"
+ " m5_data => m5_proxy%data\n"
+ " m6_proxy = m6%get_proxy()\n"
+ " m6_data => m6_proxy%data\n"
+ " m7_proxy = m7%get_proxy()\n"
+ " m7_data => m7_proxy%data\n"
+ "\n"
+ " ! Initialise number of layers\n"
+ " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
+ "\n"
+ " ! Create a mesh object\n"
+ " mesh => f1_proxy%vspace%get_mesh()\n"
+ " max_halo_depth_mesh = mesh%get_halo_depth()\n"
+ "\n"
+ " ! Look-up dofmaps for each function space\n"
+ " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2 => f2_proxy%vspace%get_whole_dofmap()\n"
+ " map_w0 => m1_proxy%vspace%get_whole_dofmap()\n"
+ " map_w3 => m2_proxy%vspace%get_whole_dofmap()\n"
+ " map_wtheta => f3_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2h => f4_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2v => m3_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2broken => m4_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2trace => f5_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2htrace => f6_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2vtrace => m5_proxy%vspace%get_whole_dofmap()\n"
+ " map_wchi => m6_proxy%vspace%get_whole_dofmap()\n"
+ " map_any_w2 => m7_proxy%vspace%get_whole_dofmap()\n"
+ "\n"
+ " ! Initialise number of DoFs for w1\n"
+ " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
+ " undf_w1 = f1_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2\n"
+ " ndf_w2 = f2_proxy%vspace%get_ndf()\n"
+ " undf_w2 = f2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w0\n"
+ " ndf_w0 = m1_proxy%vspace%get_ndf()\n"
+ " undf_w0 = m1_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w3\n"
+ " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
+ " undf_w3 = m2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for wtheta\n"
+ " ndf_wtheta = f3_proxy%vspace%get_ndf()\n"
+ " undf_wtheta = f3_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2h\n"
+ " ndf_w2h = f4_proxy%vspace%get_ndf()\n"
+ " undf_w2h = f4_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2v\n"
+ " ndf_w2v = m3_proxy%vspace%get_ndf()\n"
+ " undf_w2v = m3_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2broken\n"
+ " ndf_w2broken = m4_proxy%vspace%get_ndf()\n"
+ " undf_w2broken = m4_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2trace\n"
+ " ndf_w2trace = f5_proxy%vspace%get_ndf()\n"
+ " undf_w2trace = f5_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2htrace\n"
+ " ndf_w2htrace = f6_proxy%vspace%get_ndf()\n"
+ " undf_w2htrace = f6_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2vtrace\n"
+ " ndf_w2vtrace = m5_proxy%vspace%get_ndf()\n"
+ " undf_w2vtrace = m5_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for wchi\n"
+ " ndf_wchi = m6_proxy%vspace%get_ndf()\n"
+ " undf_wchi = m6_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for any_w2\n"
+ " ndf_any_w2 = m7_proxy%vspace%get_ndf()\n"
+ " undf_any_w2 = m7_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Set-up all of the loop bounds\n"
+ " loop0_start = 1\n"
+ " loop0_stop = mesh%get_last_halo_cell(1)\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " if (f1_proxy%is_dirty(depth=1)) then\n"
+ " call f1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f2_proxy%is_dirty(depth=1)) then\n"
+ " call f2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m1_proxy%is_dirty(depth=1)) then\n"
+ " call m1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m2_proxy%is_dirty(depth=1)) then\n"
+ " call m2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f4_proxy%is_dirty(depth=1)) then\n"
+ " call f4_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m3_proxy%is_dirty(depth=1)) then\n"
+ " call m3_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m4_proxy%is_dirty(depth=1)) then\n"
+ " call m4_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f5_proxy%is_dirty(depth=1)) then\n"
+ " call f5_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f6_proxy%is_dirty(depth=1)) then\n"
+ " call f6_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m5_proxy%is_dirty(depth=1)) then\n"
+ " call m5_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m6_proxy%is_dirty(depth=1)) then\n"
+ " call m6_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m7_proxy%is_dirty(depth=1)) then\n"
+ " call m7_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_fs_code(nlayers_f1, f1_data, f2_data, "
"m1_data, m2_data, f3_data, f4_data, "
"m3_data, m4_data, f5_data, f6_data, "
"m5_data, m6_data, m7_data, ndf_w1, undf_w1, "
@@ -519,17 +563,17 @@ def test_field_fs(tmpdir):
"map_w2htrace(:,cell), ndf_w2vtrace, undf_w2vtrace, "
"map_w2vtrace(:,cell), ndf_wchi, undf_wchi, map_wchi(:,cell), "
"ndf_any_w2, undf_any_w2, map_any_w2(:,cell))\n"
- " END DO\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the above loop\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " CALL f3_proxy%set_dirty()\n"
- " CALL f3_proxy%set_clean(1)\n"
- " !\n"
- " !\n"
- " END SUBROUTINE invoke_0_testkern_fs_type\n"
- " END MODULE single_invoke_fs_psy")
+ " enddo\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the above loop(s)"
+ "\n"
+ " call f1_proxy%set_dirty()\n"
+ " call f3_proxy%set_dirty()\n"
+ " call f3_proxy%set_clean(1)\n"
+ "\n"
+ " end subroutine invoke_0_testkern_fs_type\n"
+ "\n"
+ "end module single_invoke_fs_psy")
assert output in generated_code
@@ -543,9 +587,12 @@ def test_vector_field(tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
- assert ("SUBROUTINE invoke_0_testkern_coord_w0_type(f1, chi, f2)" in
+ assert ("subroutine invoke_0_testkern_coord_w0_type(f1, chi, f2)" in
generated_code)
- assert "TYPE(field_type), intent(in) :: f1, chi(3), f2" in generated_code
+ assert "type(field_type), intent(in) :: f1" in generated_code
+ assert ("type(field_type), dimension(3), intent(in) :: chi"
+ in generated_code)
+ assert "type(field_type), intent(in) :: f2" in generated_code
def test_vector_field_2(tmpdir):
@@ -574,10 +621,10 @@ def test_mkern_invoke_vec_fields():
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
generated_code = str(psy.gen)
# 1st test for duplication of name vector-field declaration
- assert ("TYPE(field_type), intent(in) :: f1, chi(3), chi(3)"
+ assert ("type(field_type), intent(in) :: f1, chi(3), chi(3)"
not in generated_code)
# 2nd test for duplication of name vector-field declaration
- assert ("TYPE(field_proxy_type) f1_proxy, chi_proxy(3), chi_proxy(3)"
+ assert ("type(field_proxy_type) f1_proxy, chi_proxy(3), chi_proxy(3)"
not in generated_code)
@@ -593,263 +640,281 @@ def test_int_field_fs(tmpdir):
api=TEST_API)
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
- assert LFRicBuild(tmpdir).code_compiles(psy)
-
generated_code = str(psy.gen)
+ assert """module single_invoke_fs_int_field_psy
+ use constants_mod
+ use integer_field_mod, only : integer_field_proxy_type, integer_field_type
+ implicit none
+ public
+
+ contains
+ subroutine invoke_0_testkern_fs_int_field_type(f1, f2, m1, m2, f3, f4, m3, \
+m4, f5, f6, m5, m6, f7, f8, m7)
+ use mesh_mod, only : mesh_type
+ use testkern_fs_int_field_mod, only : testkern_fs_int_field_code
+ type(integer_field_type), intent(in) :: f1
+ type(integer_field_type), intent(in) :: f2
+ type(integer_field_type), intent(in) :: m1
+ type(integer_field_type), intent(in) :: m2
+ type(integer_field_type), intent(in) :: f3
+ type(integer_field_type), intent(in) :: f4
+ type(integer_field_type), intent(in) :: m3
+ type(integer_field_type), intent(in) :: m4
+ type(integer_field_type), intent(in) :: f5
+ type(integer_field_type), intent(in) :: f6
+ type(integer_field_type), intent(in) :: m5
+ type(integer_field_type), intent(in) :: m6
+ type(integer_field_type), intent(in) :: f7
+ type(integer_field_type), intent(in) :: f8
+ type(integer_field_type), intent(in) :: m7
+ integer(kind=i_def) :: cell
+ integer(kind=i_def) :: loop0_start
+ integer(kind=i_def) :: loop0_stop
+ type(mesh_type), pointer :: mesh => null()
+ integer(kind=i_def) :: max_halo_depth_mesh
+ integer(kind=i_def), pointer, dimension(:) :: f1_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: f2_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: m1_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: m2_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: f3_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: f4_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: m3_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: m4_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: f5_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: f6_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: m5_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: m6_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: f7_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: f8_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: m7_data => null()
+ integer(kind=i_def) :: nlayers_f1
+ integer(kind=i_def) :: ndf_w1
+ integer(kind=i_def) :: undf_w1
+ integer(kind=i_def) :: ndf_w2
+ integer(kind=i_def) :: undf_w2
+ integer(kind=i_def) :: ndf_w0
+ integer(kind=i_def) :: undf_w0
+ integer(kind=i_def) :: ndf_w3
+ integer(kind=i_def) :: undf_w3
+ integer(kind=i_def) :: ndf_wtheta
+ integer(kind=i_def) :: undf_wtheta
+ integer(kind=i_def) :: ndf_w2h
+ integer(kind=i_def) :: undf_w2h
+ integer(kind=i_def) :: ndf_w2v
+ integer(kind=i_def) :: undf_w2v
+ integer(kind=i_def) :: ndf_w2broken
+ integer(kind=i_def) :: undf_w2broken
+ integer(kind=i_def) :: ndf_w2trace
+ integer(kind=i_def) :: undf_w2trace
+ integer(kind=i_def) :: ndf_w2htrace
+ integer(kind=i_def) :: undf_w2htrace
+ integer(kind=i_def) :: ndf_w2vtrace
+ integer(kind=i_def) :: undf_w2vtrace
+ integer(kind=i_def) :: ndf_wchi
+ integer(kind=i_def) :: undf_wchi
+ integer(kind=i_def) :: ndf_any_w2
+ integer(kind=i_def) :: undf_any_w2
+ integer(kind=i_def) :: ndf_aspc1_f8
+ integer(kind=i_def) :: undf_aspc1_f8
+ integer(kind=i_def) :: ndf_adspc1_m7
+ integer(kind=i_def) :: undf_adspc1_m7
+ integer(kind=i_def), pointer :: map_adspc1_m7(:,:) => null()
+ integer(kind=i_def), pointer :: map_any_w2(:,:) => null()
+ integer(kind=i_def), pointer :: map_aspc1_f8(:,:) => null()
+ integer(kind=i_def), pointer :: map_w0(:,:) => null()
+ integer(kind=i_def), pointer :: map_w1(:,:) => null()
+ integer(kind=i_def), pointer :: map_w2(:,:) => null()
+ integer(kind=i_def), pointer :: map_w2broken(:,:) => null()
+ integer(kind=i_def), pointer :: map_w2h(:,:) => null()
+ integer(kind=i_def), pointer :: map_w2htrace(:,:) => null()
+ integer(kind=i_def), pointer :: map_w2trace(:,:) => null()
+ integer(kind=i_def), pointer :: map_w2v(:,:) => null()
+ integer(kind=i_def), pointer :: map_w2vtrace(:,:) => null()
+ integer(kind=i_def), pointer :: map_w3(:,:) => null()
+ integer(kind=i_def), pointer :: map_wchi(:,:) => null()
+ integer(kind=i_def), pointer :: map_wtheta(:,:) => null()
+ type(integer_field_proxy_type) :: f1_proxy
+ type(integer_field_proxy_type) :: f2_proxy
+ type(integer_field_proxy_type) :: m1_proxy
+ type(integer_field_proxy_type) :: m2_proxy
+ type(integer_field_proxy_type) :: f3_proxy
+ type(integer_field_proxy_type) :: f4_proxy
+ type(integer_field_proxy_type) :: m3_proxy
+ type(integer_field_proxy_type) :: m4_proxy
+ type(integer_field_proxy_type) :: f5_proxy
+ type(integer_field_proxy_type) :: f6_proxy
+ type(integer_field_proxy_type) :: m5_proxy
+ type(integer_field_proxy_type) :: m6_proxy
+ type(integer_field_proxy_type) :: f7_proxy
+ type(integer_field_proxy_type) :: f8_proxy
+ type(integer_field_proxy_type) :: m7_proxy
+""" in generated_code
output = (
- " MODULE single_invoke_fs_int_field_psy\n"
- " USE constants_mod, ONLY: i_def\n"
- " USE integer_field_mod, ONLY: integer_field_type, "
- "integer_field_proxy_type\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_testkern_fs_int_field_type(f1, f2, m1, m2, "
- "f3, f4, m3, m4, f5, f6, m5, m6, f7, f8, m7)\n"
- " USE testkern_fs_int_field_mod, ONLY: "
- "testkern_fs_int_field_code\n"
- " USE mesh_mod, ONLY: mesh_type\n"
- " TYPE(integer_field_type), intent(in) :: f1, f2, m1, m2, f3, "
- "f4, m3, m4, f5, f6, m5, m6, f7, f8, m7\n"
- " INTEGER(KIND=i_def) cell\n"
- " INTEGER(KIND=i_def) loop0_start, loop0_stop\n"
- " INTEGER(KIND=i_def) nlayers_f1\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: m7_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: f8_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: f7_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: m6_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: m5_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: f6_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: f5_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: m4_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: m3_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: f4_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: f3_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: m2_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: m1_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: f2_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: f1_data => "
- "null()\n"
- " TYPE(integer_field_proxy_type) f1_proxy, f2_proxy, m1_proxy, "
- "m2_proxy, f3_proxy, f4_proxy, m3_proxy, m4_proxy, f5_proxy, "
- "f6_proxy, m5_proxy, m6_proxy, f7_proxy, f8_proxy, m7_proxy\n"
- " INTEGER(KIND=i_def), pointer :: map_adspc1_m7(:,:) => null(), "
- "map_any_w2(:,:) => null(), map_aspc1_f8(:,:) => null(), "
- "map_w0(:,:) => null(), map_w1(:,:) => null(), map_w2(:,:) => "
- "null(), map_w2broken(:,:) => null(), map_w2h(:,:) => null(), "
- "map_w2htrace(:,:) => null(), map_w2trace(:,:) => null(), "
- "map_w2v(:,:) => null(), map_w2vtrace(:,:) => null(), map_w3(:,:) "
- "=> null(), map_wchi(:,:) => null(), map_wtheta(:,:) => null()\n"
- " INTEGER(KIND=i_def) ndf_w1, undf_w1, ndf_w2, undf_w2, ndf_w0, "
- "undf_w0, ndf_w3, undf_w3, ndf_wtheta, undf_wtheta, ndf_w2h, "
- "undf_w2h, ndf_w2v, undf_w2v, ndf_w2broken, undf_w2broken, "
- "ndf_w2trace, undf_w2trace, ndf_w2htrace, undf_w2htrace, "
- "ndf_w2vtrace, undf_w2vtrace, ndf_wchi, undf_wchi, ndf_any_w2, "
- "undf_any_w2, ndf_aspc1_f8, undf_aspc1_f8, ndf_adspc1_m7, "
- "undf_adspc1_m7\n"
- " INTEGER(KIND=i_def) max_halo_depth_mesh\n"
- " TYPE(mesh_type), pointer :: mesh => null()\n")
- assert output in generated_code
- output = (
- " ! Initialise field and/or operator proxies\n"
- " !\n"
- " f1_proxy = f1%get_proxy()\n"
- " f1_data => f1_proxy%data\n"
- " f2_proxy = f2%get_proxy()\n"
- " f2_data => f2_proxy%data\n"
- " m1_proxy = m1%get_proxy()\n"
- " m1_data => m1_proxy%data\n"
- " m2_proxy = m2%get_proxy()\n"
- " m2_data => m2_proxy%data\n"
- " f3_proxy = f3%get_proxy()\n"
- " f3_data => f3_proxy%data\n"
- " f4_proxy = f4%get_proxy()\n"
- " f4_data => f4_proxy%data\n"
- " m3_proxy = m3%get_proxy()\n"
- " m3_data => m3_proxy%data\n"
- " m4_proxy = m4%get_proxy()\n"
- " m4_data => m4_proxy%data\n"
- " f5_proxy = f5%get_proxy()\n"
- " f5_data => f5_proxy%data\n"
- " f6_proxy = f6%get_proxy()\n"
- " f6_data => f6_proxy%data\n"
- " m5_proxy = m5%get_proxy()\n"
- " m5_data => m5_proxy%data\n"
- " m6_proxy = m6%get_proxy()\n"
- " m6_data => m6_proxy%data\n"
- " f7_proxy = f7%get_proxy()\n"
- " f7_data => f7_proxy%data\n"
- " f8_proxy = f8%get_proxy()\n"
- " f8_data => f8_proxy%data\n"
- " m7_proxy = m7%get_proxy()\n"
- " m7_data => m7_proxy%data\n"
- " !\n"
- " ! Initialise number of layers\n"
- " !\n"
- " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
- " !\n"
- " ! Create a mesh object\n"
- " !\n"
- " mesh => f1_proxy%vspace%get_mesh()\n"
- " max_halo_depth_mesh = mesh%get_halo_depth()\n"
- " !\n"
- " ! Look-up dofmaps for each function space\n"
- " !\n"
- " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
- " map_w2 => f2_proxy%vspace%get_whole_dofmap()\n"
- " map_w0 => m1_proxy%vspace%get_whole_dofmap()\n"
- " map_w3 => m2_proxy%vspace%get_whole_dofmap()\n"
- " map_wtheta => f3_proxy%vspace%get_whole_dofmap()\n"
- " map_w2h => f4_proxy%vspace%get_whole_dofmap()\n"
- " map_w2v => m3_proxy%vspace%get_whole_dofmap()\n"
- " map_w2broken => m4_proxy%vspace%get_whole_dofmap()\n"
- " map_w2trace => f5_proxy%vspace%get_whole_dofmap()\n"
- " map_w2htrace => f6_proxy%vspace%get_whole_dofmap()\n"
- " map_w2vtrace => m5_proxy%vspace%get_whole_dofmap()\n"
- " map_wchi => m6_proxy%vspace%get_whole_dofmap()\n"
- " map_any_w2 => f7_proxy%vspace%get_whole_dofmap()\n"
- " map_aspc1_f8 => f8_proxy%vspace%get_whole_dofmap()\n"
- " map_adspc1_m7 => m7_proxy%vspace%get_whole_dofmap()\n"
- " !\n"
- " ! Initialise number of DoFs for w1\n"
- " !\n"
- " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
- " undf_w1 = f1_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2\n"
- " !\n"
- " ndf_w2 = f2_proxy%vspace%get_ndf()\n"
- " undf_w2 = f2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w0\n"
- " !\n"
- " ndf_w0 = m1_proxy%vspace%get_ndf()\n"
- " undf_w0 = m1_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w3\n"
- " !\n"
- " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
- " undf_w3 = m2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for wtheta\n"
- " !\n"
- " ndf_wtheta = f3_proxy%vspace%get_ndf()\n"
- " undf_wtheta = f3_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2h\n"
- " !\n"
- " ndf_w2h = f4_proxy%vspace%get_ndf()\n"
- " undf_w2h = f4_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2v\n"
- " !\n"
- " ndf_w2v = m3_proxy%vspace%get_ndf()\n"
- " undf_w2v = m3_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2broken\n"
- " !\n"
- " ndf_w2broken = m4_proxy%vspace%get_ndf()\n"
- " undf_w2broken = m4_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2trace\n"
- " !\n"
- " ndf_w2trace = f5_proxy%vspace%get_ndf()\n"
- " undf_w2trace = f5_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2htrace\n"
- " !\n"
- " ndf_w2htrace = f6_proxy%vspace%get_ndf()\n"
- " undf_w2htrace = f6_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2vtrace\n"
- " !\n"
- " ndf_w2vtrace = m5_proxy%vspace%get_ndf()\n"
- " undf_w2vtrace = m5_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for wchi\n"
- " !\n"
- " ndf_wchi = m6_proxy%vspace%get_ndf()\n"
- " undf_wchi = m6_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for any_w2\n"
- " !\n"
- " ndf_any_w2 = f7_proxy%vspace%get_ndf()\n"
- " undf_any_w2 = f7_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for aspc1_f8\n"
- " !\n"
- " ndf_aspc1_f8 = f8_proxy%vspace%get_ndf()\n"
- " undf_aspc1_f8 = f8_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for adspc1_m7\n"
- " !\n"
- " ndf_adspc1_m7 = m7_proxy%vspace%get_ndf()\n"
- " undf_adspc1_m7 = m7_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Set-up all of the loop bounds\n"
- " !\n"
- " loop0_start = 1\n"
- " loop0_stop = mesh%get_last_halo_cell(1)\n"
- " !\n"
- " ! Call kernels and communication routines\n"
- " !\n"
- " IF (f1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f4_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f4_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m3_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m3_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m4_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m4_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f5_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f5_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f6_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f6_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m5_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m5_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m6_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m6_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f7_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f7_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f8_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f8_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m7_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m7_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_fs_int_field_code(nlayers_f1, f1_data, "
+ " ! Initialise field and/or operator proxies\n"
+ " f1_proxy = f1%get_proxy()\n"
+ " f1_data => f1_proxy%data\n"
+ " f2_proxy = f2%get_proxy()\n"
+ " f2_data => f2_proxy%data\n"
+ " m1_proxy = m1%get_proxy()\n"
+ " m1_data => m1_proxy%data\n"
+ " m2_proxy = m2%get_proxy()\n"
+ " m2_data => m2_proxy%data\n"
+ " f3_proxy = f3%get_proxy()\n"
+ " f3_data => f3_proxy%data\n"
+ " f4_proxy = f4%get_proxy()\n"
+ " f4_data => f4_proxy%data\n"
+ " m3_proxy = m3%get_proxy()\n"
+ " m3_data => m3_proxy%data\n"
+ " m4_proxy = m4%get_proxy()\n"
+ " m4_data => m4_proxy%data\n"
+ " f5_proxy = f5%get_proxy()\n"
+ " f5_data => f5_proxy%data\n"
+ " f6_proxy = f6%get_proxy()\n"
+ " f6_data => f6_proxy%data\n"
+ " m5_proxy = m5%get_proxy()\n"
+ " m5_data => m5_proxy%data\n"
+ " m6_proxy = m6%get_proxy()\n"
+ " m6_data => m6_proxy%data\n"
+ " f7_proxy = f7%get_proxy()\n"
+ " f7_data => f7_proxy%data\n"
+ " f8_proxy = f8%get_proxy()\n"
+ " f8_data => f8_proxy%data\n"
+ " m7_proxy = m7%get_proxy()\n"
+ " m7_data => m7_proxy%data\n"
+ "\n"
+ " ! Initialise number of layers\n"
+ " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
+ "\n"
+ " ! Create a mesh object\n"
+ " mesh => f1_proxy%vspace%get_mesh()\n"
+ " max_halo_depth_mesh = mesh%get_halo_depth()\n"
+ "\n"
+ " ! Look-up dofmaps for each function space\n"
+ " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2 => f2_proxy%vspace%get_whole_dofmap()\n"
+ " map_w0 => m1_proxy%vspace%get_whole_dofmap()\n"
+ " map_w3 => m2_proxy%vspace%get_whole_dofmap()\n"
+ " map_wtheta => f3_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2h => f4_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2v => m3_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2broken => m4_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2trace => f5_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2htrace => f6_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2vtrace => m5_proxy%vspace%get_whole_dofmap()\n"
+ " map_wchi => m6_proxy%vspace%get_whole_dofmap()\n"
+ " map_any_w2 => f7_proxy%vspace%get_whole_dofmap()\n"
+ " map_aspc1_f8 => f8_proxy%vspace%get_whole_dofmap()\n"
+ " map_adspc1_m7 => m7_proxy%vspace%get_whole_dofmap()\n"
+ "\n"
+ " ! Initialise number of DoFs for w1\n"
+ " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
+ " undf_w1 = f1_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2\n"
+ " ndf_w2 = f2_proxy%vspace%get_ndf()\n"
+ " undf_w2 = f2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w0\n"
+ " ndf_w0 = m1_proxy%vspace%get_ndf()\n"
+ " undf_w0 = m1_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w3\n"
+ " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
+ " undf_w3 = m2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for wtheta\n"
+ " ndf_wtheta = f3_proxy%vspace%get_ndf()\n"
+ " undf_wtheta = f3_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2h\n"
+ " ndf_w2h = f4_proxy%vspace%get_ndf()\n"
+ " undf_w2h = f4_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2v\n"
+ " ndf_w2v = m3_proxy%vspace%get_ndf()\n"
+ " undf_w2v = m3_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2broken\n"
+ " ndf_w2broken = m4_proxy%vspace%get_ndf()\n"
+ " undf_w2broken = m4_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2trace\n"
+ " ndf_w2trace = f5_proxy%vspace%get_ndf()\n"
+ " undf_w2trace = f5_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2htrace\n"
+ " ndf_w2htrace = f6_proxy%vspace%get_ndf()\n"
+ " undf_w2htrace = f6_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2vtrace\n"
+ " ndf_w2vtrace = m5_proxy%vspace%get_ndf()\n"
+ " undf_w2vtrace = m5_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for wchi\n"
+ " ndf_wchi = m6_proxy%vspace%get_ndf()\n"
+ " undf_wchi = m6_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for any_w2\n"
+ " ndf_any_w2 = f7_proxy%vspace%get_ndf()\n"
+ " undf_any_w2 = f7_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for aspc1_f8\n"
+ " ndf_aspc1_f8 = f8_proxy%vspace%get_ndf()\n"
+ " undf_aspc1_f8 = f8_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for adspc1_m7\n"
+ " ndf_adspc1_m7 = m7_proxy%vspace%get_ndf()\n"
+ " undf_adspc1_m7 = m7_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Set-up all of the loop bounds\n"
+ " loop0_start = 1\n"
+ " loop0_stop = mesh%get_last_halo_cell(1)\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " if (f1_proxy%is_dirty(depth=1)) then\n"
+ " call f1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f2_proxy%is_dirty(depth=1)) then\n"
+ " call f2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m1_proxy%is_dirty(depth=1)) then\n"
+ " call m1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m2_proxy%is_dirty(depth=1)) then\n"
+ " call m2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f4_proxy%is_dirty(depth=1)) then\n"
+ " call f4_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m3_proxy%is_dirty(depth=1)) then\n"
+ " call m3_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m4_proxy%is_dirty(depth=1)) then\n"
+ " call m4_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f5_proxy%is_dirty(depth=1)) then\n"
+ " call f5_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f6_proxy%is_dirty(depth=1)) then\n"
+ " call f6_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m5_proxy%is_dirty(depth=1)) then\n"
+ " call m5_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m6_proxy%is_dirty(depth=1)) then\n"
+ " call m6_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f7_proxy%is_dirty(depth=1)) then\n"
+ " call f7_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f8_proxy%is_dirty(depth=1)) then\n"
+ " call f8_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m7_proxy%is_dirty(depth=1)) then\n"
+ " call m7_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_fs_int_field_code(nlayers_f1, f1_data, "
"f2_data, m1_data, m2_data, f3_data, "
"f4_data, m3_data, m4_data, f5_data, "
"f6_data, m5_data, m6_data, f7_data, "
@@ -865,21 +930,22 @@ def test_int_field_fs(tmpdir):
"ndf_any_w2, undf_any_w2, map_any_w2(:,cell), ndf_aspc1_f8, "
"undf_aspc1_f8, map_aspc1_f8(:,cell), ndf_adspc1_m7, "
"undf_adspc1_m7, map_adspc1_m7(:,cell))\n"
- " END DO\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the above loop\n"
- " !\n"
- " CALL f2_proxy%set_dirty()\n"
- " CALL f3_proxy%set_dirty()\n"
- " CALL f3_proxy%set_clean(1)\n"
- " CALL f8_proxy%set_dirty()\n"
- " CALL m7_proxy%set_dirty()\n"
- " CALL m7_proxy%set_clean(1)\n"
- " !\n"
- " !\n"
- " END SUBROUTINE invoke_0_testkern_fs_int_field_type\n"
- " END MODULE single_invoke_fs_int_field_psy")
+ " enddo\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the above "
+ "loop(s)\n"
+ " call f2_proxy%set_dirty()\n"
+ " call f3_proxy%set_dirty()\n"
+ " call f3_proxy%set_clean(1)\n"
+ " call f8_proxy%set_dirty()\n"
+ " call m7_proxy%set_dirty()\n"
+ " call m7_proxy%set_clean(1)\n"
+ "\n"
+ " end subroutine invoke_0_testkern_fs_int_field_type\n"
+ "\n"
+ "end module single_invoke_fs_int_field_psy\n")
assert output in generated_code
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_int_field_2qr_shapes(dist_mem, tmpdir):
@@ -894,51 +960,56 @@ def test_int_field_2qr_shapes(dist_mem, tmpdir):
"1.1.9_single_invoke_2qr_shapes_int_field.f90"),
api=TEST_API)
psy = PSyFactory(TEST_API, distributed_memory=dist_mem).create(invoke_info)
- assert LFRicBuild(tmpdir).code_compiles(psy)
- gen_code = str(psy.gen)
+ code = str(psy.gen)
# Check that the qr-related variables are all declared
- assert (" TYPE(quadrature_xyoz_type), intent(in) :: qr_xyoz\n"
- " TYPE(quadrature_face_type), intent(in) :: qr_face\n"
- in gen_code)
- assert ("REAL(KIND=r_def), allocatable :: basis_w2_qr_xyoz(:,:,:,:), "
- "basis_w2_qr_face(:,:,:,:), diff_basis_wchi_qr_xyoz(:,:,:,:), "
- "diff_basis_wchi_qr_face(:,:,:,:), "
- "basis_adspc1_f3_qr_xyoz(:,:,:,:), "
- "diff_basis_adspc1_f3_qr_xyoz(:,:,:,:), "
- "basis_adspc1_f3_qr_face(:,:,:,:), "
- "diff_basis_adspc1_f3_qr_face(:,:,:,:)\n" in gen_code)
- assert (" REAL(KIND=r_def), pointer :: weights_xyz_qr_face(:,:) "
- "=> null()\n"
- " INTEGER(KIND=i_def) np_xyz_qr_face, nfaces_qr_face\n"
- " REAL(KIND=r_def), pointer :: weights_xy_qr_xyoz(:) => "
- "null(), weights_z_qr_xyoz(:) => null()\n"
- " INTEGER(KIND=i_def) np_xy_qr_xyoz, np_z_qr_xyoz\n"
-
- in gen_code)
- assert (" TYPE(quadrature_face_proxy_type) qr_face_proxy\n"
- " TYPE(quadrature_xyoz_proxy_type) qr_xyoz_proxy\n"
- in gen_code)
+ assert (" type(quadrature_xyoz_type), intent(in) :: qr_xyoz\n"
+ " type(quadrature_face_type), intent(in) :: qr_face\n"
+ in code)
+ assert """
+ real(kind=r_def), allocatable :: basis_w2_qr_xyoz(:,:,:,:)
+ real(kind=r_def), allocatable :: basis_w2_qr_face(:,:,:,:)
+ real(kind=r_def), allocatable :: diff_basis_wchi_qr_xyoz(:,:,:,:)
+ real(kind=r_def), allocatable :: diff_basis_wchi_qr_face(:,:,:,:)
+ real(kind=r_def), allocatable :: basis_adspc1_f3_qr_xyoz(:,:,:,:)
+ real(kind=r_def), allocatable :: diff_basis_adspc1_f3_qr_xyoz(:,:,:,:)
+ real(kind=r_def), allocatable :: basis_adspc1_f3_qr_face(:,:,:,:)
+ real(kind=r_def), allocatable :: diff_basis_adspc1_f3_qr_face(:,:,:,:)
+""" in code
+ assert (" real(kind=r_def), pointer, dimension(:,:) :: "
+ "weights_xyz_qr_face => null()\n" in code)
+ assert " integer(kind=i_def) :: np_xyz_qr_face\n" in code
+ assert " integer(kind=i_def) :: nfaces_qr_face\n" in code
+ assert (
+ " integer(kind=i_def) :: np_xy_qr_xyoz\n"
+ " integer(kind=i_def) :: np_z_qr_xyoz\n"
+ " real(kind=r_def), pointer :: weights_xy_qr_xyoz(:) => "
+ "null()\n"
+ " real(kind=r_def), pointer :: weights_z_qr_xyoz(:) => "
+ "null()\n"
+ in code)
+ assert "type(quadrature_face_proxy_type) :: qr_face_proxy\n" in code
+ assert "type(quadrature_xyoz_proxy_type) :: qr_xyoz_proxy\n" in code
# Allocation and computation of (some of) the basis/differential
# basis functions
- assert (" ALLOCATE (basis_adspc1_f3_qr_xyoz(dim_adspc1_f3, "
- "ndf_adspc1_f3, np_xy_qr_xyoz, np_z_qr_xyoz))\n"
- " ALLOCATE (diff_basis_adspc1_f3_qr_xyoz(diff_dim_adspc1_f3, "
- "ndf_adspc1_f3, np_xy_qr_xyoz, np_z_qr_xyoz))\n"
- " ALLOCATE (basis_adspc1_f3_qr_face(dim_adspc1_f3, "
- "ndf_adspc1_f3, np_xyz_qr_face, nfaces_qr_face))\n"
- " ALLOCATE (diff_basis_adspc1_f3_qr_face(diff_dim_adspc1_f3, "
- "ndf_adspc1_f3, np_xyz_qr_face, nfaces_qr_face))\n"
- in gen_code)
- assert (" CALL qr_xyoz%compute_function(BASIS, f3_proxy%vspace, "
+ assert (" ALLOCATE(basis_adspc1_f3_qr_xyoz(dim_adspc1_f3,"
+ "ndf_adspc1_f3,np_xy_qr_xyoz,np_z_qr_xyoz))\n"
+ " ALLOCATE(diff_basis_adspc1_f3_qr_xyoz(diff_dim_adspc1_f3,"
+ "ndf_adspc1_f3,np_xy_qr_xyoz,np_z_qr_xyoz))\n"
+ " ALLOCATE(basis_adspc1_f3_qr_face(dim_adspc1_f3,"
+ "ndf_adspc1_f3,np_xyz_qr_face,nfaces_qr_face))\n"
+ " ALLOCATE(diff_basis_adspc1_f3_qr_face(diff_dim_adspc1_f3,"
+ "ndf_adspc1_f3,np_xyz_qr_face,nfaces_qr_face))\n"
+ in code)
+ assert (" call qr_xyoz%compute_function(BASIS, f3_proxy%vspace, "
"dim_adspc1_f3, ndf_adspc1_f3, basis_adspc1_f3_qr_xyoz)\n"
- " CALL qr_xyoz%compute_function(DIFF_BASIS, "
+ " call qr_xyoz%compute_function(DIFF_BASIS, "
"f3_proxy%vspace, diff_dim_adspc1_f3, ndf_adspc1_f3, "
"diff_basis_adspc1_f3_qr_xyoz)\n"
- " CALL qr_face%compute_function(BASIS, f3_proxy%vspace, "
+ " call qr_face%compute_function(BASIS, f3_proxy%vspace, "
"dim_adspc1_f3, ndf_adspc1_f3, basis_adspc1_f3_qr_face)\n"
- " CALL qr_face%compute_function(DIFF_BASIS, "
+ " call qr_face%compute_function(DIFF_BASIS, "
"f3_proxy%vspace, diff_dim_adspc1_f3, ndf_adspc1_f3, "
- "diff_basis_adspc1_f3_qr_face)\n" in gen_code)
+ "diff_basis_adspc1_f3_qr_face)\n" in code)
# Check that the kernel call itself is correct
assert (
"testkern_2qr_int_field_code(nlayers_f1, f1_data, "
@@ -950,7 +1021,8 @@ def test_int_field_2qr_shapes(dist_mem, tmpdir):
"basis_adspc1_f3_qr_face, diff_basis_adspc1_f3_qr_xyoz, "
"diff_basis_adspc1_f3_qr_face, np_xy_qr_xyoz, np_z_qr_xyoz, "
"weights_xy_qr_xyoz, weights_z_qr_xyoz, nfaces_qr_face, "
- "np_xyz_qr_face, weights_xyz_qr_face)\n" in gen_code)
+ "np_xyz_qr_face, weights_xyz_qr_face)\n" in code)
+ assert LFRicBuild(tmpdir).code_compiles(psy)
# Tests for Invokes calling kernels that contain real- and
@@ -969,119 +1041,116 @@ def test_int_real_field_fs(dist_mem, tmpdir):
api=TEST_API)
psy = PSyFactory(TEST_API, distributed_memory=dist_mem).create(invoke_info)
- assert LFRicBuild(tmpdir).code_compiles(psy)
-
generated_code = str(psy.gen)
output = (
- " MODULE multikernel_invokes_real_int_field_fs_psy\n"
- " USE constants_mod, ONLY: r_def, i_def\n"
- " USE field_mod, ONLY: field_type, field_proxy_type\n"
- " USE integer_field_mod, ONLY: integer_field_type, "
- "integer_field_proxy_type\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_integer_and_real_field(i1, i2, n1, n2, i3, "
+ "module multikernel_invokes_real_int_field_fs_psy\n"
+ " use constants_mod\n"
+ " use integer_field_mod, only : integer_field_proxy_type, "
+ "integer_field_type\n"
+ " use field_mod, only : field_proxy_type, field_type\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine invoke_integer_and_real_field(i1, i2, n1, n2, i3, "
"i4, n3, n4, i5, i6, n5, n6, i7, i8, n7, f1, f2, m1, m2, f3, f4, "
"m3, m4, f5, f6, m5, m6, m7)\n")
assert output in generated_code
- output = (
- " TYPE(field_type), intent(in) :: f1, f2, m1, m2, f3, f4, "
- "m3, m4, f5, f6, m5, m6, m7\n"
- " TYPE(integer_field_type), intent(in) :: i1, i2, n1, n2, "
- "i3, i4, n3, n4, i5, i6, n5, n6, i7, i8, n7\n"
- " INTEGER(KIND=i_def) cell\n"
- " INTEGER(KIND=i_def) loop1_start, loop1_stop\n"
- " INTEGER(KIND=i_def) loop0_start, loop0_stop\n"
- " INTEGER(KIND=i_def) nlayers_f1, nlayers_i1\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: n7_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: i8_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: i7_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: n6_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: n5_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: i6_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: i5_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: n4_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: n3_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: i4_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: i3_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: n2_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: n1_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: i2_data => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer, dimension(:) :: i1_data => "
- "null()\n"
- " TYPE(integer_field_proxy_type) i1_proxy, i2_proxy, n1_proxy, "
- "n2_proxy, i3_proxy, i4_proxy, n3_proxy, n4_proxy, i5_proxy, "
- "i6_proxy, n5_proxy, n6_proxy, i7_proxy, i8_proxy, n7_proxy\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m7_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m6_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m5_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f6_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f5_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m4_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m3_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f4_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f3_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m1_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f1_data => null()\n"
- " TYPE(field_proxy_type) f1_proxy, f2_proxy, m1_proxy, "
- "m2_proxy, f3_proxy, f4_proxy, m3_proxy, m4_proxy, f5_proxy, "
- "f6_proxy, m5_proxy, m6_proxy, m7_proxy\n")
- assert output in generated_code
+ assert """
+ type(integer_field_type), intent(in) :: i1
+ type(integer_field_type), intent(in) :: i2
+ type(integer_field_type), intent(in) :: n1
+ type(integer_field_type), intent(in) :: n2
+ type(integer_field_type), intent(in) :: i3
+ type(integer_field_type), intent(in) :: i4
+ type(integer_field_type), intent(in) :: n3
+ type(integer_field_type), intent(in) :: n4
+ type(integer_field_type), intent(in) :: i5
+ type(integer_field_type), intent(in) :: i6
+ type(integer_field_type), intent(in) :: n5
+ type(integer_field_type), intent(in) :: n6
+ type(integer_field_type), intent(in) :: i7
+ type(integer_field_type), intent(in) :: i8
+ type(integer_field_type), intent(in) :: n7
+ type(field_type), intent(in) :: f1
+ type(field_type), intent(in) :: f2
+ type(field_type), intent(in) :: m1
+ type(field_type), intent(in) :: m2
+ type(field_type), intent(in) :: f3
+ type(field_type), intent(in) :: f4
+ type(field_type), intent(in) :: m3
+ type(field_type), intent(in) :: m4
+ type(field_type), intent(in) :: f5
+ type(field_type), intent(in) :: f6
+ type(field_type), intent(in) :: m5
+ type(field_type), intent(in) :: m6
+ type(field_type), intent(in) :: m7
+ """ in generated_code
+ assert """
+ real(kind=r_def), pointer, dimension(:) :: f1_data => null()
+ real(kind=r_def), pointer, dimension(:) :: f2_data => null()
+ real(kind=r_def), pointer, dimension(:) :: m1_data => null()
+ real(kind=r_def), pointer, dimension(:) :: m2_data => null()
+ real(kind=r_def), pointer, dimension(:) :: f3_data => null()
+ real(kind=r_def), pointer, dimension(:) :: f4_data => null()
+ real(kind=r_def), pointer, dimension(:) :: m3_data => null()
+ real(kind=r_def), pointer, dimension(:) :: m4_data => null()
+ real(kind=r_def), pointer, dimension(:) :: f5_data => null()
+ real(kind=r_def), pointer, dimension(:) :: f6_data => null()
+ real(kind=r_def), pointer, dimension(:) :: m5_data => null()
+ real(kind=r_def), pointer, dimension(:) :: m6_data => null()
+ real(kind=r_def), pointer, dimension(:) :: m7_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: i1_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: i2_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: n1_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: n2_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: i3_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: i4_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: n3_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: n4_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: i5_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: i6_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: n5_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: n6_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: i7_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: i8_data => null()
+ integer(kind=i_def), pointer, dimension(:) :: n7_data => null()
+""" in generated_code
# Number of layers and the mesh are determined from the first integer
# field. Maps for function spaces are determined from the first kernel
# call with integer fields
output = (
- " ! Initialise number of layers\n"
- " !\n"
- " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
- " nlayers_i1 = i1_proxy%vspace%get_nlayers()\n"
- " !\n")
+ " ! Initialise number of layers\n"
+ " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
+ " nlayers_i1 = i1_proxy%vspace%get_nlayers()\n"
+ "\n")
if dist_mem:
output += (
- " ! Create a mesh object\n"
- " !\n"
- " mesh => i1_proxy%vspace%get_mesh()\n"
- " max_halo_depth_mesh = mesh%get_halo_depth()\n"
- " !\n")
+ " ! Create a mesh object\n"
+ " mesh => i1_proxy%vspace%get_mesh()\n"
+ " max_halo_depth_mesh = mesh%get_halo_depth()\n"
+ "\n")
output += (
- " ! Look-up dofmaps for each function space\n"
- " !\n"
- " map_w1 => i1_proxy%vspace%get_whole_dofmap()\n"
- " map_w2 => i2_proxy%vspace%get_whole_dofmap()\n"
- " map_w0 => n1_proxy%vspace%get_whole_dofmap()\n"
- " map_w3 => n2_proxy%vspace%get_whole_dofmap()\n"
- " map_wtheta => i3_proxy%vspace%get_whole_dofmap()\n"
- " map_w2h => i4_proxy%vspace%get_whole_dofmap()\n"
- " map_w2v => n3_proxy%vspace%get_whole_dofmap()\n"
- " map_w2broken => n4_proxy%vspace%get_whole_dofmap()\n"
- " map_w2trace => i5_proxy%vspace%get_whole_dofmap()\n"
- " map_w2htrace => i6_proxy%vspace%get_whole_dofmap()\n"
- " map_w2vtrace => n5_proxy%vspace%get_whole_dofmap()\n"
- " map_wchi => n6_proxy%vspace%get_whole_dofmap()\n"
- " map_any_w2 => i7_proxy%vspace%get_whole_dofmap()\n"
- " map_aspc1_i8 => i8_proxy%vspace%get_whole_dofmap()\n"
- " map_adspc1_n7 => n7_proxy%vspace%get_whole_dofmap()\n")
+ " ! Look-up dofmaps for each function space\n"
+ " map_w1 => i1_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2 => i2_proxy%vspace%get_whole_dofmap()\n"
+ " map_w0 => n1_proxy%vspace%get_whole_dofmap()\n"
+ " map_w3 => n2_proxy%vspace%get_whole_dofmap()\n"
+ " map_wtheta => i3_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2h => i4_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2v => n3_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2broken => n4_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2trace => i5_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2htrace => i6_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2vtrace => n5_proxy%vspace%get_whole_dofmap()\n"
+ " map_wchi => n6_proxy%vspace%get_whole_dofmap()\n"
+ " map_any_w2 => i7_proxy%vspace%get_whole_dofmap()\n"
+ " map_aspc1_i8 => i8_proxy%vspace%get_whole_dofmap()\n"
+ " map_adspc1_n7 => n7_proxy%vspace%get_whole_dofmap()\n")
assert output in generated_code
# Kernel calls are the same regardless of distributed memory
kern1_call = (
- " CALL testkern_fs_int_field_code(nlayers_i1, i1_data, "
+ " call testkern_fs_int_field_code(nlayers_i1, i1_data, "
"i2_data, n1_data, n2_data, i3_data, "
"i4_data, n3_data, n4_data, i5_data, "
"i6_data, n5_data, n6_data, i7_data, "
@@ -1099,7 +1168,7 @@ def test_int_real_field_fs(dist_mem, tmpdir):
"undf_adspc1_n7, map_adspc1_n7(:,cell))\n")
assert kern1_call in generated_code
kern2_call = (
- " CALL testkern_fs_code(nlayers_f1, f1_data, f2_data, "
+ " call testkern_fs_code(nlayers_f1, f1_data, f2_data, "
"m1_data, m2_data, f3_data, f4_data, "
"m3_data, m4_data, f5_data, f6_data, "
"m5_data, m6_data, m7_data, ndf_w1, undf_w1, "
@@ -1123,15 +1192,17 @@ def test_int_real_field_fs(dist_mem, tmpdir):
    # Check that the field halo flags are set after the kernel calls
if dist_mem:
halo1_flags = (
- " CALL i2_proxy%set_dirty()\n"
- " CALL i3_proxy%set_dirty()\n"
- " CALL i3_proxy%set_clean(1)\n"
- " CALL i8_proxy%set_dirty()\n"
- " CALL n7_proxy%set_dirty()\n"
- " CALL n7_proxy%set_clean(1)\n")
+ " call i2_proxy%set_dirty()\n"
+ " call i3_proxy%set_dirty()\n"
+ " call i3_proxy%set_clean(1)\n"
+ " call i8_proxy%set_dirty()\n"
+ " call n7_proxy%set_dirty()\n"
+ " call n7_proxy%set_clean(1)\n")
halo2_flags = (
- " CALL f1_proxy%set_dirty()\n"
- " CALL f3_proxy%set_dirty()\n"
- " CALL f3_proxy%set_clean(1)\n")
+ " call f1_proxy%set_dirty()\n"
+ " call f3_proxy%set_dirty()\n"
+ " call f3_proxy%set_clean(1)\n")
assert halo1_flags in generated_code
assert halo2_flags in generated_code
+
+ assert LFRicBuild(tmpdir).code_compiles(psy)
diff --git a/src/psyclone/tests/domain/lfric/lfric_field_mdata_test.py b/src/psyclone/tests/domain/lfric/lfric_field_mdata_test.py
index b7c5ea24b3..f189801ece 100644
--- a/src/psyclone/tests/domain/lfric/lfric_field_mdata_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_field_mdata_test.py
@@ -48,7 +48,6 @@
from psyclone.core.access_type import AccessType
from psyclone.domain.lfric import (LFRicArgDescriptor, LFRicConstants,
LFRicFields, LFRicKernMetadata)
-from psyclone.f2pygen import ModuleGen
from psyclone.parse.algorithm import parse
from psyclone.psyGen import PSyFactory
from psyclone.parse.utils import ParseError
@@ -501,7 +500,7 @@ def test_lfricfields_call_err():
fld_arg = kernel.arguments.args[0]
fld_arg._intrinsic_type = "triple-type"
with pytest.raises(InternalError) as err:
- LFRicFields(invoke)._invoke_declarations(ModuleGen(name="my_mod"))
+ LFRicFields(invoke).invoke_declarations()
test_str = str(err.value)
assert ("Found unsupported intrinsic types for the field arguments "
"['f1'] to Invoke 'invoke_0_testkern_fs_type'. Supported "
diff --git a/src/psyclone/tests/domain/lfric/lfric_field_stubgen_test.py b/src/psyclone/tests/domain/lfric/lfric_field_stubgen_test.py
index 58abc8fde3..fe6a0fbbd5 100644
--- a/src/psyclone/tests/domain/lfric/lfric_field_stubgen_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_field_stubgen_test.py
@@ -41,15 +41,12 @@
functionality for the LFRic fields.
'''
-# Imports
-from __future__ import absolute_import, print_function
import os
import pytest
import fparser
from fparser import api as fpapi
from psyclone.domain.lfric import (LFRicConstants, LFRicKern,
LFRicFields, LFRicKernMetadata)
-from psyclone.f2pygen import ModuleGen, SubroutineGen
from psyclone.errors import InternalError
@@ -100,16 +97,12 @@ def test_lfricfields_stub_err():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- # Create an empty Kernel stub module and subroutine objects
- psy_module = ModuleGen("testkern_2qr_int_field_mod")
- sub_stub = SubroutineGen(psy_module, name="testkern_2qr_int_field_code",
- implicitnone=True)
+
# Sabotage the field argument to make it have an invalid intrinsic type
fld_arg = kernel.arguments.args[1]
fld_arg.descriptor._data_type = "gh_invalid_type"
- print(fld_arg.descriptor._data_type)
with pytest.raises(InternalError) as err:
- LFRicFields(kernel)._stub_declarations(sub_stub)
+ LFRicFields(kernel).stub_declarations()
const = LFRicConstants()
assert (f"Found an unsupported data type 'gh_invalid_type' in "
f"kernel stub declarations for the field argument 'field_2'. "
@@ -145,7 +138,7 @@ def test_lfricfields_stub_err():
'''
-def test_int_field_gen_stub():
+def test_int_field_gen_stub(fortran_writer):
''' Test that we generate correct code for kernel stubs that
contain integer-valued fields with stencils and basis/differential
basis functions.
@@ -155,62 +148,60 @@ def test_int_field_gen_stub():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- generated_code = str(kernel.gen_stub)
- output = (
- " MODULE testkern_int_field_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE testkern_int_field_code(nlayers, field_1_wtheta, "
- "field_2_w3_v1, field_2_w3_v2, field_2_w3_v3, field_3_w2trace, "
- "field_3_stencil_size, field_3_stencil_dofmap, ndf_wtheta, "
- "undf_wtheta, map_wtheta, basis_wtheta_qr_xyoz, ndf_w3, undf_w3, "
- "map_w3, basis_w3_qr_xyoz, diff_basis_w3_qr_xyoz, ndf_w2trace, "
- "undf_w2trace, map_w2trace, np_xy_qr_xyoz, np_z_qr_xyoz, "
- "weights_xy_qr_xyoz, weights_z_qr_xyoz)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2trace\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2trace) :: "
- "map_w2trace\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w3\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w3) :: map_w3\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_wtheta\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_wtheta) :: "
- "map_wtheta\n"
- " INTEGER(KIND=i_def), intent(in) :: undf_wtheta, undf_w3, "
- "undf_w2trace\n"
- " INTEGER(KIND=i_def), intent(inout), dimension(undf_wtheta) :: "
- "field_1_wtheta\n"
- " INTEGER(KIND=i_def), intent(in), dimension(undf_w3) :: "
- "field_2_w3_v1\n"
- " INTEGER(KIND=i_def), intent(in), dimension(undf_w3) :: "
- "field_2_w3_v2\n"
- " INTEGER(KIND=i_def), intent(in), dimension(undf_w3) :: "
- "field_2_w3_v3\n"
- " INTEGER(KIND=i_def), intent(in), dimension(undf_w2trace) :: "
- "field_3_w2trace\n"
- " INTEGER(KIND=i_def), intent(in) :: field_3_stencil_size\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2trace,"
- "field_3_stencil_size) :: field_3_stencil_dofmap\n"
- " INTEGER(KIND=i_def), intent(in) :: "
- "np_xy_qr_xyoz, np_z_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_wtheta,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: basis_wtheta_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w3,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: basis_w3_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w3,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: diff_basis_w3_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(np_xy_qr_xyoz) :: "
- "weights_xy_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(np_z_qr_xyoz) :: "
- "weights_z_qr_xyoz\n"
- " END SUBROUTINE testkern_int_field_code\n"
- " END MODULE testkern_int_field_mod")
- assert output in generated_code
-
-
-def test_int_field_all_stencils_gen_stub():
+ generated_code = fortran_writer(kernel.gen_stub)
+ output = """\
+module testkern_int_field_mod
+ implicit none
+ public
+
+ contains
+ subroutine testkern_int_field_code(nlayers, field_1_wtheta, field_2_w3_v1, \
+field_2_w3_v2, field_2_w3_v3, field_3_w2trace, field_3_stencil_size, \
+field_3_stencil_dofmap, ndf_wtheta, undf_wtheta, map_wtheta, \
+basis_wtheta_qr_xyoz, ndf_w3, undf_w3, map_w3, basis_w3_qr_xyoz, \
+diff_basis_w3_qr_xyoz, ndf_w2trace, undf_w2trace, map_w2trace, \
+np_xy_qr_xyoz, np_z_qr_xyoz, weights_xy_qr_xyoz, weights_z_qr_xyoz)
+ use constants_mod
+ integer(kind=i_def), intent(in) :: nlayers
+ integer(kind=i_def), intent(in) :: ndf_w2trace
+ integer(kind=i_def), dimension(ndf_w2trace), intent(in) :: map_w2trace
+ integer(kind=i_def), intent(in) :: ndf_w3
+ integer(kind=i_def), dimension(ndf_w3), intent(in) :: map_w3
+ integer(kind=i_def), intent(in) :: ndf_wtheta
+ integer(kind=i_def), dimension(ndf_wtheta), intent(in) :: map_wtheta
+ integer(kind=i_def), intent(in) :: undf_wtheta
+ integer(kind=i_def), intent(in) :: undf_w3
+ integer(kind=i_def), intent(in) :: undf_w2trace
+ integer(kind=i_def), dimension(undf_wtheta), intent(inout) :: \
+field_1_wtheta
+ integer(kind=i_def), dimension(undf_w3), intent(in) :: field_2_w3_v1
+ integer(kind=i_def), dimension(undf_w3), intent(in) :: field_2_w3_v2
+ integer(kind=i_def), dimension(undf_w3), intent(in) :: field_2_w3_v3
+ integer(kind=i_def), dimension(undf_w2trace), intent(in) :: field_3_w2trace
+ integer(kind=i_def), intent(in) :: field_3_stencil_size
+ integer(kind=i_def), dimension(ndf_w2trace,field_3_stencil_size), \
+intent(in) :: field_3_stencil_dofmap
+ integer(kind=i_def), intent(in) :: np_xy_qr_xyoz
+ integer(kind=i_def), intent(in) :: np_z_qr_xyoz
+ real(kind=r_def), dimension(1,ndf_wtheta,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: basis_wtheta_qr_xyoz
+ real(kind=r_def), dimension(1,ndf_w3,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: basis_w3_qr_xyoz
+ real(kind=r_def), dimension(3,ndf_w3,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: diff_basis_w3_qr_xyoz
+ real(kind=r_def), dimension(np_xy_qr_xyoz), intent(in) :: \
+weights_xy_qr_xyoz
+ real(kind=r_def), dimension(np_z_qr_xyoz), intent(in) :: weights_z_qr_xyoz
+
+
+ end subroutine testkern_int_field_code
+
+end module testkern_int_field_mod
+"""
+ assert output == generated_code
+
+
+def test_int_field_all_stencils_gen_stub(fortran_writer):
''' Test that we generate correct code for kernel stubs that
contain integer-valued fields with all supported stencil accesses. '''
ast = fpapi.parse(
@@ -219,59 +210,61 @@ def test_int_field_all_stencils_gen_stub():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- generated_code = str(kernel.gen_stub)
- output = (
- " MODULE testkern_stencil_multi_int_field_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE testkern_stencil_multi_int_field_code(nlayers, "
- "field_1_w2broken, field_2_w1, field_2_stencil_size, "
- "field_2_stencil_dofmap, field_3_w0, field_3_stencil_size, "
- "field_3_direction, field_3_stencil_dofmap, field_4_w2v, "
- "field_4_stencil_size, field_4_stencil_dofmap, ndf_w2broken, "
- "undf_w2broken, map_w2broken, ndf_w1, undf_w1, map_w1, "
- "ndf_w0, undf_w0, map_w0, ndf_w2v, undf_w2v, map_w2v)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w0\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w0) :: map_w0\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w1\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w1) :: map_w1\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2broken\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2broken) :: "
- "map_w2broken\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2v\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2v) :: "
- "map_w2v\n"
- " INTEGER(KIND=i_def), intent(in) :: undf_w2broken, undf_w1, "
- "undf_w0, undf_w2v\n"
- " INTEGER(KIND=i_def), intent(inout), "
- "dimension(undf_w2broken) :: field_1_w2broken\n"
- " INTEGER(KIND=i_def), intent(in), dimension(undf_w1) :: "
- "field_2_w1\n"
- " INTEGER(KIND=i_def), intent(in), dimension(undf_w0) :: "
- "field_3_w0\n"
- " INTEGER(KIND=i_def), intent(in), dimension(undf_w2v) :: "
- "field_4_w2v\n"
- " INTEGER(KIND=i_def), intent(in) :: field_2_stencil_size, "
- "field_3_stencil_size, field_4_stencil_size\n"
- " INTEGER(KIND=i_def), intent(in) :: field_3_direction\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w1,"
- "field_2_stencil_size) :: field_2_stencil_dofmap\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w0,"
- "field_3_stencil_size) :: field_3_stencil_dofmap\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2v,"
- "field_4_stencil_size) :: field_4_stencil_dofmap\n"
- " END SUBROUTINE testkern_stencil_multi_int_field_code\n"
- " END MODULE testkern_stencil_multi_int_field_mod")
- assert output in generated_code
+ generated_code = fortran_writer(kernel.gen_stub)
+ output = """\
+module testkern_stencil_multi_int_field_mod
+ implicit none
+ public
+
+ contains
+ subroutine testkern_stencil_multi_int_field_code(nlayers, field_1_w2broken, \
+field_2_w1, field_2_stencil_size, field_2_stencil_dofmap, field_3_w0, \
+field_3_stencil_size, field_3_direction, field_3_stencil_dofmap, field_4_w2v, \
+field_4_stencil_size, field_4_stencil_dofmap, ndf_w2broken, undf_w2broken, \
+map_w2broken, ndf_w1, undf_w1, map_w1, ndf_w0, undf_w0, map_w0, ndf_w2v, \
+undf_w2v, map_w2v)
+ use constants_mod
+ integer(kind=i_def), intent(in) :: nlayers
+ integer(kind=i_def), intent(in) :: ndf_w0
+ integer(kind=i_def), dimension(ndf_w0), intent(in) :: map_w0
+ integer(kind=i_def), intent(in) :: ndf_w1
+ integer(kind=i_def), dimension(ndf_w1), intent(in) :: map_w1
+ integer(kind=i_def), intent(in) :: ndf_w2broken
+ integer(kind=i_def), dimension(ndf_w2broken), intent(in) :: map_w2broken
+ integer(kind=i_def), intent(in) :: ndf_w2v
+ integer(kind=i_def), dimension(ndf_w2v), intent(in) :: map_w2v
+ integer(kind=i_def), intent(in) :: undf_w2broken
+ integer(kind=i_def), intent(in) :: undf_w1
+ integer(kind=i_def), intent(in) :: undf_w0
+ integer(kind=i_def), intent(in) :: undf_w2v
+ integer(kind=i_def), dimension(undf_w2broken), intent(inout) :: \
+field_1_w2broken
+ integer(kind=i_def), dimension(undf_w1), intent(in) :: field_2_w1
+ integer(kind=i_def), dimension(undf_w0), intent(in) :: field_3_w0
+ integer(kind=i_def), dimension(undf_w2v), intent(in) :: field_4_w2v
+ integer(kind=i_def), intent(in) :: field_2_stencil_size
+ integer(kind=i_def), intent(in) :: field_3_stencil_size
+ integer(kind=i_def), intent(in) :: field_4_stencil_size
+ integer(kind=i_def), intent(in) :: field_3_direction
+ integer(kind=i_def), dimension(ndf_w1,field_2_stencil_size), intent(in) \
+:: field_2_stencil_dofmap
+ integer(kind=i_def), dimension(ndf_w0,field_3_stencil_size), intent(in) \
+:: field_3_stencil_dofmap
+ integer(kind=i_def), dimension(ndf_w2v,field_4_stencil_size), intent(in) \
+:: field_4_stencil_dofmap
+
+
+ end subroutine testkern_stencil_multi_int_field_code
+
+end module testkern_stencil_multi_int_field_mod
+"""
+ assert output == generated_code
# Tests for kernel stubs containing real- and integer-valued fields
-def test_real_int_field_gen_stub():
+def test_real_int_field_gen_stub(fortran_writer):
''' Test that we generate correct code for kernel stubs that
contain real- and integer-valued fields with basis and differential
basis functions on one real- and one integer-valued field.
@@ -284,55 +277,56 @@ def test_real_int_field_gen_stub():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- generated_code = str(kernel.gen_stub)
- output = (
- " MODULE testkern_field_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE testkern_field_code(nlayers, rscalar_1, field_2_w1, "
- "field_3_w2, field_4_wtheta, field_5_w3, iscalar_6, ndf_w1, undf_w1, "
- "map_w1, basis_w1_qr_xyoz, diff_basis_w1_qr_xyoz, ndf_w2, undf_w2, "
- "map_w2, ndf_wtheta, undf_wtheta, map_wtheta, ndf_w3, undf_w3, "
- "map_w3, basis_w3_qr_xyoz, diff_basis_w3_qr_xyoz, np_xy_qr_xyoz, "
- "np_z_qr_xyoz, weights_xy_qr_xyoz, weights_z_qr_xyoz)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w1\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w1) :: map_w1\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2) :: map_w2\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w3\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w3) :: map_w3\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_wtheta\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_wtheta) :: "
- "map_wtheta\n"
- " INTEGER(KIND=i_def), intent(in) :: undf_w1, undf_w2, "
- "undf_wtheta, undf_w3\n"
- " REAL(KIND=r_def), intent(in) :: rscalar_1\n"
- " INTEGER(KIND=i_def), intent(in) :: iscalar_6\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w1) :: "
- "field_2_w1\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_w2) :: "
- "field_3_w2\n"
- " INTEGER(KIND=i_def), intent(inout), dimension(undf_wtheta) :: "
- "field_4_wtheta\n"
- " INTEGER(KIND=i_def), intent(in), dimension(undf_w3) :: "
- "field_5_w3\n"
- " INTEGER(KIND=i_def), intent(in) :: np_xy_qr_xyoz, "
- "np_z_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w1,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: basis_w1_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w1,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: diff_basis_w1_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w3,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: basis_w3_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w3,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: diff_basis_w3_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(np_xy_qr_xyoz) :: "
- "weights_xy_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(np_z_qr_xyoz) :: "
- "weights_z_qr_xyoz\n"
- " END SUBROUTINE testkern_field_code\n"
- " END MODULE testkern_field_mod")
- assert output in generated_code
+ generated_code = fortran_writer(kernel.gen_stub)
+ assert """\
+module testkern_field_mod
+ implicit none
+ public
+
+ contains
+ subroutine testkern_field_code(nlayers, rscalar_1, field_2_w1, field_3_w2, \
+field_4_wtheta, field_5_w3, iscalar_6, ndf_w1, undf_w1, map_w1, \
+basis_w1_qr_xyoz, diff_basis_w1_qr_xyoz, ndf_w2, undf_w2, map_w2, ndf_wtheta, \
+undf_wtheta, map_wtheta, ndf_w3, undf_w3, map_w3, basis_w3_qr_xyoz, \
+diff_basis_w3_qr_xyoz, np_xy_qr_xyoz, np_z_qr_xyoz, weights_xy_qr_xyoz, \
+weights_z_qr_xyoz)
+ use constants_mod
+ integer(kind=i_def), intent(in) :: nlayers
+ integer(kind=i_def), intent(in) :: ndf_w1
+ integer(kind=i_def), dimension(ndf_w1), intent(in) :: map_w1
+ integer(kind=i_def), intent(in) :: ndf_w2
+ integer(kind=i_def), dimension(ndf_w2), intent(in) :: map_w2
+ integer(kind=i_def), intent(in) :: ndf_w3
+ integer(kind=i_def), dimension(ndf_w3), intent(in) :: map_w3
+ integer(kind=i_def), intent(in) :: ndf_wtheta
+ integer(kind=i_def), dimension(ndf_wtheta), intent(in) :: map_wtheta
+ integer(kind=i_def), intent(in) :: undf_w1
+ integer(kind=i_def), intent(in) :: undf_w2
+ integer(kind=i_def), intent(in) :: undf_wtheta
+ integer(kind=i_def), intent(in) :: undf_w3
+ real(kind=r_def), intent(in) :: rscalar_1
+ integer(kind=i_def), intent(in) :: iscalar_6
+ real(kind=r_def), dimension(undf_w1), intent(inout) :: field_2_w1
+ real(kind=r_def), dimension(undf_w2), intent(in) :: field_3_w2
+ integer(kind=i_def), dimension(undf_wtheta), intent(inout) :: \
+field_4_wtheta
+ integer(kind=i_def), dimension(undf_w3), intent(in) :: field_5_w3
+ integer(kind=i_def), intent(in) :: np_xy_qr_xyoz
+ integer(kind=i_def), intent(in) :: np_z_qr_xyoz
+ real(kind=r_def), dimension(3,ndf_w1,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: basis_w1_qr_xyoz
+ real(kind=r_def), dimension(3,ndf_w1,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: diff_basis_w1_qr_xyoz
+ real(kind=r_def), dimension(1,ndf_w3,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: basis_w3_qr_xyoz
+ real(kind=r_def), dimension(3,ndf_w3,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: diff_basis_w3_qr_xyoz
+ real(kind=r_def), dimension(np_xy_qr_xyoz), intent(in) :: \
+weights_xy_qr_xyoz
+ real(kind=r_def), dimension(np_z_qr_xyoz), intent(in) :: \
+weights_z_qr_xyoz
+
+
+ end subroutine testkern_field_code
+
+end module testkern_field_mod\n""" == generated_code
diff --git a/src/psyclone/tests/domain/lfric/lfric_halo_depths_test.py b/src/psyclone/tests/domain/lfric/lfric_halo_depths_test.py
index b766b8523b..ac93680c3a 100644
--- a/src/psyclone/tests/domain/lfric/lfric_halo_depths_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_halo_depths_test.py
@@ -40,7 +40,6 @@
import pytest
from psyclone.domain.lfric import LFRicHaloDepths, LFRicKern
-from psyclone.f2pygen import ModuleGen, SubroutineGen
from psyclone.psyir.nodes import BinaryOperation, Literal
from psyclone.psyir.symbols import DataSymbol, INTEGER_TYPE
from psyclone.tests.utilities import get_invoke
@@ -84,28 +83,29 @@ def test_lfric_halo_depth_invoke_declns():
_, invoke3 = get_invoke("1.4.2_multi_into_halos_invoke.f90", API,
dist_mem=False, idx=0)
hdepths3 = LFRicHaloDepths(invoke3)
- mymod = ModuleGen("test_mod")
- mysub = SubroutineGen(mymod, name="test_sub")
- hdepths3._invoke_declarations(mysub)
- assert not mysub.children
+ hdepths3.invoke_declarations()
+ args = [x.name for x in invoke3.schedule.symbol_table.argument_datasymbols]
+ assert "depth" not in args
+
# Now with distributed memory - should have two halo-depth arguments.
_, invoke4 = get_invoke("1.4.2_multi_into_halos_invoke.f90", API,
dist_mem=True, idx=0)
hdepths4 = LFRicHaloDepths(invoke4)
- hdepths4._invoke_declarations(mysub)
- assert ("integer, intent(in) :: hdepth, other_depth"
- in str(mysub.root).lower())
+ hdepths4.invoke_declarations()
+ args = [x.name for x in invoke4.schedule.symbol_table.argument_datasymbols]
+ assert "hdepth" in args
+ assert "other_depth" in args
def test_lfric_halo_depth_no_stub_gen():
'''
- Test that the _stub_declarations() method does nothing (because whether
+ Test that the stub_declarations() method does nothing (because whether
or not a kernel operates on halo cells does not affect the signature).
'''
_, invoke2 = get_invoke("1.4.2_multi_into_halos_invoke.f90", API, idx=0)
- hdepths2 = LFRicHaloDepths(invoke2)
- hdepths2._stub_declarations(None)
+ hdepths2 = LFRicHaloDepths(invoke2.schedule.kernels()[0])
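+    # This should be a no-op: the halo depth does not affect the stub
+    # argument list, so no declarations are expected to be added.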
+ hdepths2.stub_declarations()
def test_no_exprn_for_halo_depth():
diff --git a/src/psyclone/tests/domain/lfric/lfric_kern_test.py b/src/psyclone/tests/domain/lfric/lfric_kern_test.py
index 366f72e08e..b6f803c145 100644
--- a/src/psyclone/tests/domain/lfric/lfric_kern_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_kern_test.py
@@ -391,11 +391,17 @@ def test_kern_last_cell_all_colours():
# Apply a colouring transformation to the loop.
trans = Dynamo0p3ColourTrans()
trans.apply(loop)
- # We have to perform code generation as that sets-up the symbol table.
- # pylint:disable=pointless-statement
- psy.gen
- assert (loop.kernel.last_cell_all_colours_symbol.name
- == "last_halo_cell_all_colours")
+
+ symbol = loop.kernel.last_cell_all_colours_symbol
+ assert symbol.name == "last_halo_cell_all_colours"
+ assert len(symbol.datatype.shape) == 2 # It's a 2-dimensional array
+
+    # Delete the symbol and try again inside a loop without a halo
+ sched.symbol_table._symbols.pop("last_halo_cell_all_colours")
+ loop.kernel.parent.parent._upper_bound_name = "not-a-halo"
+ symbol = loop.kernel.last_cell_all_colours_symbol
+ assert symbol.name == "last_edge_cell_all_colours"
+ assert len(symbol.datatype.shape) == 1 # It's a 1-dimensional array
def test_kern_last_cell_all_colours_intergrid():
@@ -470,3 +476,16 @@ def test_undf_name():
kern = sched.walk(LFRicKern)[0]
assert kern.undf_name == "undf_w1"
+
+
+def test_argument_kinds():
+ ''' Test the LFRicKern.argument_kinds property. '''
+ _, invoke_info = parse(os.path.join(BASE_PATH, "1_single_invoke.f90"),
+ api=TEST_API)
+ psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
+ sched = psy.invokes.invoke_list[0].schedule
+ kern = sched.walk(LFRicKern)[0]
+
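+    # This kernel takes both real and integer arguments, so exactly two
+    # precision kinds (r_def and i_def) should be reported.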
+ assert len(kern.argument_kinds) == 2
+ assert "i_def" in kern.argument_kinds
+ assert "r_def" in kern.argument_kinds
diff --git a/src/psyclone/tests/domain/lfric/lfric_loop_bounds_test.py b/src/psyclone/tests/domain/lfric/lfric_loop_bounds_test.py
index d48713167f..3b5fb89de7 100644
--- a/src/psyclone/tests/domain/lfric/lfric_loop_bounds_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_loop_bounds_test.py
@@ -40,7 +40,6 @@
import os
from psyclone.domain.lfric import LFRicLoopBounds
-from psyclone.f2pygen import SubroutineGen, ModuleGen
from psyclone.parse.algorithm import parse
from psyclone.psyGen import PSyFactory
from psyclone.psyir import symbols
@@ -65,38 +64,32 @@ def test_lbounds_construction():
assert isinstance(lbounds, LFRicLoopBounds)
-def test_lbounds_initialise(monkeypatch):
+def test_lbounds_initialise(monkeypatch, fortran_writer):
''' Test the initialise method of LFRicLoopBounds. '''
_, invoke_info = parse(os.path.join(BASE_PATH,
"1.0.1_single_named_invoke.f90"),
api=TEST_API)
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
invoke = psy.invokes.invoke_list[0]
- mod = ModuleGen()
- fake_parent = SubroutineGen(mod)
lbounds = LFRicLoopBounds(invoke)
table = invoke.schedule.symbol_table
- assert "loop0_start" not in table
- assert "loop0_stop" not in table
- lbounds.initialise(fake_parent)
+ lbounds.initialise(0)
- # Check that new symbols have been added.
+ # Check that new symbols exist
start_sym = table.lookup("loop0_start")
assert start_sym.datatype.intrinsic == symbols.ScalarType.Intrinsic.INTEGER
stop_sym = table.lookup("loop0_stop")
assert stop_sym.datatype.intrinsic == symbols.ScalarType.Intrinsic.INTEGER
- assert "Set-up all of the loop bounds" in str(fake_parent.children[1].root)
+ assert "Set-up all of the loop bounds" in fortran_writer(invoke.schedule)
# Monkeypatch the schedule so that it appears to have no loops.
monkeypatch.setattr(invoke.schedule, "loops", lambda: [])
lbounds = LFRicLoopBounds(invoke)
- fake_parent = SubroutineGen(mod)
# The initialise() should not raise an error but nothing should be
- # added to the f2pygen tree.
- lbounds.initialise(fake_parent)
- assert fake_parent.children == []
+ # added to the PSyIR tree.
+ lbounds.initialise(0)
# Symbols representing loop bounds should be unaffected.
assert table.lookup("loop0_start") is start_sym
assert table.lookup("loop0_stop") is stop_sym
diff --git a/src/psyclone/tests/domain/lfric/lfric_loop_test.py b/src/psyclone/tests/domain/lfric/lfric_loop_test.py
index 0fa6f2d537..eecb4f6c76 100644
--- a/src/psyclone/tests/domain/lfric/lfric_loop_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_loop_test.py
@@ -47,19 +47,17 @@
from psyclone.configuration import Config
from psyclone.core import AccessType
from psyclone.domain.lfric import (LFRicConstants, LFRicSymbolTable,
- LFRicKern, LFRicKernMetadata, LFRicLoop)
+ LFRicKern, LFRicKernMetadata, LFRicLoop,
+ LFRicInvokeSchedule)
from psyclone.errors import GenerationError, InternalError
from psyclone.parse.algorithm import parse
from psyclone.psyGen import PSyFactory
-from psyclone.psyir.nodes import (ArrayReference, Call, Literal, Reference,
- Schedule, ScopingNode, Loop)
+from psyclone.psyir.nodes import Call, ScopingNode, Loop
from psyclone.psyir.tools import DependencyTools
from psyclone.psyir.tools.dependency_tools import Message, DTCode
from psyclone.tests.lfric_build import LFRicBuild
from psyclone.tests.utilities import get_invoke
-from psyclone.transformations import (Dynamo0p3ColourTrans,
- DynamoOMPParallelLoopTrans,
- Dynamo0p3RedundantComputationTrans)
+from psyclone.transformations import Dynamo0p3ColourTrans
BASE_PATH = os.path.join(
os.path.dirname(os.path.dirname(os.path.dirname(
@@ -68,6 +66,28 @@
TEST_API = "lfric"
+def test_constructor_loop_bound_names():
+    ''' Check that the constructor creates the appropriate loop bound
+    references (with names containing a sequentially ascending index).
+ '''
+ with pytest.raises(InternalError) as err:
+ _ = LFRicLoop(loop_type="null")
+ assert ("LFRic loops must be inside an InvokeSchedule, a parent "
+ "argument is mandatory when they are created." in str(err.value))
+
+ schedule = LFRicInvokeSchedule.create("test")
+ schedule.addchild(LFRicLoop(parent=schedule))
+ schedule.addchild(LFRicLoop(parent=schedule))
+ schedule.addchild(LFRicLoop(parent=schedule))
+ loops = schedule.loops()
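+    # The bound-symbol names are indexed by each loop's position in the
+    # schedule.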
+ assert loops[0].start_expr.name == "loop0_start"
+ assert loops[1].start_expr.name == "loop1_start"
+ assert loops[2].start_expr.name == "loop2_start"
+ assert loops[0].stop_expr.name == "loop0_stop"
+ assert loops[1].stop_expr.name == "loop1_stop"
+ assert loops[2].stop_expr.name == "loop2_stop"
+
+
def test_constructor_invalid_loop_type(monkeypatch):
''' Check that the constructor raises the expected errors when an invalid
loop type is specified.
@@ -97,7 +117,7 @@ def test_set_lower_bound_functions(monkeypatch):
# TODO #1954: Remove the protected access using a factory
monkeypatch.setattr(ScopingNode, "_symbol_table_class",
LFRicSymbolTable)
- schedule = Schedule()
+ schedule = LFRicInvokeSchedule.create("test")
my_loop = LFRicLoop(parent=schedule)
schedule.children = [my_loop]
with pytest.raises(GenerationError) as excinfo:
@@ -118,7 +138,7 @@ def test_set_upper_bound_functions(monkeypatch):
# TODO #1954: Remove the protected access using a factory
monkeypatch.setattr(ScopingNode, "_symbol_table_class",
LFRicSymbolTable)
- schedule = Schedule()
+ schedule = LFRicInvokeSchedule.create("test")
my_loop = LFRicLoop(parent=schedule)
schedule.children = [my_loop]
with pytest.raises(GenerationError) as excinfo:
@@ -138,8 +158,8 @@ def test_set_upper_bound_functions(monkeypatch):
in str(excinfo.value))
-def test_lower_bound_fortran_1():
- ''' Tests we raise an exception in the LFRicLoop:_lower_bound_fortran()
+def test_lower_bound_psyir_1():
+ ''' Tests we raise an exception in the LFRicLoop:lower_bound_psyir()
method - first GenerationError.
'''
@@ -149,13 +169,13 @@ def test_lower_bound_fortran_1():
my_loop = psy.invokes.invoke_list[0].schedule.children[0]
my_loop.set_lower_bound("inner", index=1)
with pytest.raises(GenerationError) as excinfo:
- _ = my_loop._lower_bound_fortran()
+ _ = my_loop.lower_bound_psyir()
assert ("lower bound must be 'start' if we are sequential" in
str(excinfo.value))
-def test_lower_bound_fortran_2(monkeypatch):
- ''' Tests we raise an exception in the LFRicLoop:_lower_bound_fortran()
+def test_lower_bound_psyir_2(monkeypatch):
+ ''' Tests we raise an exception in the LFRicLoop:lower_bound_psyir()
method - second GenerationError.
'''
@@ -167,7 +187,7 @@ def test_lower_bound_fortran_2(monkeypatch):
# checks for valid input
monkeypatch.setattr(my_loop, "_lower_bound_name", value="invalid")
with pytest.raises(GenerationError) as excinfo:
- _ = my_loop._lower_bound_fortran()
+ _ = my_loop.lower_bound_psyir()
assert ("Unsupported lower bound name 'invalid' found" in
str(excinfo.value))
@@ -177,8 +197,8 @@ def test_lower_bound_fortran_2(monkeypatch):
("ncells", 10, "inner_cell(1)"),
("cell_halo", 1, "ncells_cell()"),
("cell_halo", 10, "cell_halo_cell(9)")])
-def test_lower_bound_fortran_3(monkeypatch, name, index, output):
- ''' Test '_lower_bound_fortran()' with multiple valid iteration spaces.
+def test_lower_bound_psyir_3(monkeypatch, name, index, output):
+ ''' Test 'lower_bound_psyir()' with multiple valid iteration spaces.
'''
_, invoke_info = parse(os.path.join(BASE_PATH, "1_single_invoke.f90"),
@@ -189,7 +209,8 @@ def test_lower_bound_fortran_3(monkeypatch, name, index, output):
# checks for valid input
monkeypatch.setattr(my_loop, "_lower_bound_name", value=name)
monkeypatch.setattr(my_loop, "_lower_bound_index", value=index)
- assert my_loop._lower_bound_fortran() == "mesh%get_last_" + output + "+1"
+ expected = "mesh%get_last_" + output + " + 1"
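+    # lower_bound_psyir() returns a PSyIR expression, so compare its rendering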
+ assert my_loop.lower_bound_psyir().debug_string() == expected
def test_mesh_name():
@@ -237,14 +258,6 @@ def test_lower_to_language_normal_loop():
loop1 = sched.children[1]
assert loop1.start_expr.symbol.name == "loop1_start"
- # Now remove loop 0, and verify that the start variable symbol has changed
- # (which is a problem in case of driver creation, since the symbol names
- # written in the full code can then be different from the symbols used
- # in the driver). TODO #1731 might fix this, in which case this test
- # will fail (and the whole lowering of LFRicLoop can likely be removed).
- sched.children.pop(0)
- assert loop1.start_expr.symbol.name == "loop0_start"
-
# The same test with the lowered schedule should not change the
# symbol anymore:
_, invoke = get_invoke("4.8_multikernel_invokes.f90", TEST_API,
@@ -298,245 +311,6 @@ def test_lower_to_language_domain_loops_multiple_statements():
"children is not yet supported, but found:" in str(err.value))
-def test_upper_bound_fortran_1():
- ''' Tests we raise an exception in the LFRicLoop:_upper_bound_fortran()
- method when 'cell_halo', 'dof_halo' or 'inner' are used.
-
- '''
- _, invoke_info = parse(os.path.join(BASE_PATH, "1_single_invoke.f90"),
- api=TEST_API)
- psy = PSyFactory(TEST_API, distributed_memory=False).create(invoke_info)
- my_loop = psy.invokes.invoke_list[0].schedule.children[0]
- for option in ["cell_halo", "dof_halo", "inner"]:
- my_loop.set_upper_bound(option, halo_depth=1)
- with pytest.raises(GenerationError) as excinfo:
- _ = my_loop._upper_bound_fortran()
- assert (
- f"'{option}' is not a valid loop upper bound for sequential/"
- f"shared-memory code" in str(excinfo.value))
-
-
-def test_upper_bound_fortran_2(monkeypatch):
- ''' Tests we raise an exception in the LFRicLoop:_upper_bound_fortran()
- method if an invalid value is provided.
-
- '''
- _, invoke_info = parse(os.path.join(BASE_PATH, "1_single_invoke.f90"),
- api=TEST_API)
- psy = PSyFactory(TEST_API, distributed_memory=False).create(invoke_info)
- my_loop = psy.invokes.invoke_list[0].schedule.children[0]
- monkeypatch.setattr(my_loop, "_upper_bound_name", value="invalid")
- with pytest.raises(GenerationError) as excinfo:
- _ = my_loop._upper_bound_fortran()
- assert (
- "Unsupported upper bound name 'invalid' found" in str(excinfo.value))
- # Pretend the loop is over colours and does not contain a kernel
- monkeypatch.setattr(my_loop, "_upper_bound_name", value="ncolours")
- monkeypatch.setattr(my_loop, "walk", lambda x: [])
- with pytest.raises(InternalError) as excinfo:
- _ = my_loop._upper_bound_fortran()
- assert ("Failed to find a kernel within a loop over colours"
- in str(excinfo.value))
-
-
-def test_upper_bound_inner(monkeypatch):
- ''' Check that we get the correct Fortran generated if a loop's upper
- bound is 'inner'.
-
- '''
- _, invoke_info = parse(os.path.join(BASE_PATH, "1_single_invoke.f90"),
- api=TEST_API)
- psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
- my_loop = psy.invokes.invoke_list[0].schedule.children[4]
- monkeypatch.setattr(my_loop, "_upper_bound_name", value="inner")
- ubound = my_loop._upper_bound_fortran()
- assert ubound == "mesh%get_last_inner_cell(1)"
-
-
-def test_upper_bound_ncolour(dist_mem):
- ''' Check that we get the correct Fortran for the upper bound of a
- coloured loop.
-
- '''
- _, invoke_info = parse(os.path.join(BASE_PATH, "1_single_invoke.f90"),
- api=TEST_API)
- psy = PSyFactory(TEST_API, distributed_memory=dist_mem).create(invoke_info)
- sched = psy.invokes.invoke_list[0].schedule
- loops = sched.walk(LFRicLoop)
- # Apply a colouring transformation to the loop.
- trans = Dynamo0p3ColourTrans()
- trans.apply(loops[0])
- loops = sched.walk(LFRicLoop)
- if dist_mem:
- assert loops[1]._upper_bound_name == "colour_halo"
- assert (loops[1]._upper_bound_fortran() ==
- "last_halo_cell_all_colours(colour, 1)")
- # Apply redundant computation to increase the depth of the access
- # to the halo.
- rtrans = Dynamo0p3RedundantComputationTrans()
- rtrans.apply(loops[1])
- assert (loops[1]._upper_bound_fortran() ==
- "last_halo_cell_all_colours(colour, max_halo_depth_mesh)")
- else:
- assert loops[1]._upper_bound_name == "ncolour"
- assert (loops[1]._upper_bound_fortran() ==
- "last_edge_cell_all_colours(colour)")
-
-
-def test_upper_bound_ncolour_intergrid(dist_mem):
- ''' Check that we get the correct Fortran for a coloured loop's upper bound
- if it contains an inter-grid kernel.
-
- '''
- _, invoke_info = parse(os.path.join(BASE_PATH,
- "22.1_intergrid_restrict.f90"),
- api=TEST_API)
- psy = PSyFactory(TEST_API, distributed_memory=dist_mem).create(invoke_info)
- sched = psy.invokes.invoke_list[0].schedule
- loops = sched.walk(LFRicLoop)
- # Apply a colouring transformation to the loop.
- trans = Dynamo0p3ColourTrans()
- trans.apply(loops[0])
- loops = sched.walk(LFRicLoop)
- if dist_mem:
- assert loops[1]._upper_bound_name == "colour_halo"
- assert (loops[1]._upper_bound_fortran() ==
- "last_halo_cell_all_colours_field1(colour, 1)")
- # We can't apply redundant computation to increase the depth of the
- # access to the halo as it is not supported for inter-grid kernels.
- # Therefore we manually unset the upper bound halo depth to indicate
- # that we access the full depth.
- loops[1]._upper_bound_halo_depth = None
- assert (loops[1]._upper_bound_fortran() ==
- "last_halo_cell_all_colours_field1(colour, "
- "max_halo_depth_mesh_field1)")
- else:
- assert loops[1]._upper_bound_name == "ncolour"
- assert (loops[1]._upper_bound_fortran() ==
- "last_edge_cell_all_colours_field1(colour)")
-
-
-def test_loop_start_expr(dist_mem):
- ''' Test that the 'start_expr' property returns the expected reference
- to a symbol.
-
- '''
- _, invoke_info = parse(os.path.join(BASE_PATH, "1_single_invoke.f90"),
- api=TEST_API)
- psy = PSyFactory(TEST_API, distributed_memory=dist_mem).create(invoke_info)
- # TODO #1010. Replace this psy.gen with a call to lower_to_language_level()
- # pylint: disable=pointless-statement
- psy.gen
- sched = psy.invokes.invoke_list[0].schedule
- loops = sched.walk(LFRicLoop)
- lbound = loops[0].start_expr
- assert isinstance(lbound, Reference)
- assert lbound.symbol.name == "loop0_start"
-
-
-def test_loop_stop_expr(dist_mem):
- ''' Test the 'stop_expr' property of a loop with and without colouring.
-
- '''
- _, invoke_info = parse(os.path.join(BASE_PATH, "1_single_invoke.f90"),
- api=TEST_API)
- psy = PSyFactory(TEST_API, distributed_memory=dist_mem).create(invoke_info)
- # TODO #1010. Replace this psy.gen with a call to lower_to_language_level()
- # pylint: disable=pointless-statement
- psy.gen
- sched = psy.invokes.invoke_list[0].schedule
- loops = sched.walk(LFRicLoop)
- ubound = loops[0].stop_expr
- assert isinstance(ubound, Reference)
- assert ubound.symbol.name == "loop0_stop"
- # Apply a colouring transformation to the loop.
- trans = Dynamo0p3ColourTrans()
- trans.apply(loops[0])
- # TODO #1010. Replace this psy.gen with a call to lower_to_language_level()
- psy.gen
- sched = psy.invokes.invoke_list[0].schedule
- loops = sched.walk(LFRicLoop)
- ubound = loops[1].stop_expr
- assert isinstance(ubound, ArrayReference)
- assert ubound.indices[0].name == "colour"
- if dist_mem:
- assert ubound.symbol.name == "last_halo_cell_all_colours"
- assert isinstance(ubound.indices[1], Literal)
- assert ubound.indices[1].value == "1"
- # Alter the loop so that it goes to the full halo depth
- loops[1]._upper_bound_halo_depth = None
- ubound = loops[1].stop_expr
- assert isinstance(ubound.indices[1], Reference)
- assert ubound.indices[1].symbol.name == "max_halo_depth_mesh"
- else:
- assert ubound.symbol.name == "last_edge_cell_all_colours"
-
-
-def test_loop_stop_expr_intergrid(dist_mem):
- ''' Test the 'stop_expr' property for a loop containing an
- inter-grid kernel.
-
- '''
- _, invoke_info = parse(os.path.join(BASE_PATH,
- "22.1_intergrid_restrict.f90"),
- api=TEST_API)
- psy = PSyFactory(TEST_API, distributed_memory=dist_mem).create(invoke_info)
- # TODO #1010. Replace this psy.gen with a call to lower_to_language_level()
- # pylint: disable=pointless-statement
- psy.gen
- sched = psy.invokes.invoke_list[0].schedule
- loops = sched.walk(LFRicLoop)
- ubound = loops[0].stop_expr
- assert isinstance(ubound, Reference)
- assert ubound.symbol.name == "loop0_stop"
- # Apply a colouring transformation to the loop.
- trans = Dynamo0p3ColourTrans()
- trans.apply(loops[0])
- # TODO #1010. Replace this psy.gen with a call to lower_to_language_level()
- psy.gen
- sched = psy.invokes.invoke_list[0].schedule
- loops = sched.walk(LFRicLoop)
- ubound = loops[1].stop_expr
- assert isinstance(ubound, ArrayReference)
- assert ubound.indices[0].name == "colour"
- if dist_mem:
- assert ubound.symbol.name == "last_halo_cell_all_colours_field1"
- assert isinstance(ubound.indices[1], Literal)
- assert ubound.indices[1].value == "1"
- # Alter the loop so that it goes to the full halo depth
- loops[1]._upper_bound_halo_depth = None
- ubound = loops[1].stop_expr
- assert isinstance(ubound.indices[1], Reference)
- assert ubound.indices[1].symbol.name == "max_halo_depth_mesh_field1"
- else:
- assert ubound.symbol.name == "last_edge_cell_all_colours_field1"
-
-
-def test_lfricloop_gen_code_err():
- ''' Test that the 'gen_code' method raises the expected exception if the
- loop type is 'colours' and is within an OpenMP parallel region.
-
- '''
- _, invoke_info = parse(os.path.join(BASE_PATH, "1_single_invoke.f90"),
- api=TEST_API)
- psy = PSyFactory(TEST_API, distributed_memory=False).create(invoke_info)
- sched = psy.invokes.invoke_list[0].schedule
- loops = sched.walk(LFRicLoop)
- # Apply a colouring transformation to the loop.
- trans = Dynamo0p3ColourTrans()
- trans.apply(loops[0])
- loops = sched.walk(LFRicLoop)
- # Parallelise the inner loop (over cells of a given colour)
- trans = DynamoOMPParallelLoopTrans()
- trans.apply(loops[1])
- # Alter the loop type manually
- loops[1]._loop_type = "colours"
- with pytest.raises(GenerationError) as err:
- loops[1].gen_code(None)
- assert ("Cannot have a loop over colours within an OpenMP parallel region"
- in str(err.value))
-
-
def test_lfricloop_load_unexpected_func_space():
''' The load function of an instance of the LFRicLoop class raises an
error if an unexpected function space is found. This test makes
@@ -724,14 +498,13 @@ def test_itn_space_write_w2broken_w1(dist_mem, tmpdir):
if dist_mem:
assert "loop0_stop = mesh%get_last_halo_cell(1)\n" in generated_code
output = (
- " DO cell = loop0_start, loop0_stop, 1\n")
+ " do cell = loop0_start, loop0_stop, 1\n")
assert output in generated_code
else:
assert "loop0_stop = m2_proxy%vspace%get_ncell()\n" in generated_code
output = (
- " ! Call our kernels\n"
- " !\n"
- " DO cell = loop0_start, loop0_stop, 1\n")
+ " ! Call kernels\n"
+ " do cell = loop0_start, loop0_stop, 1\n")
assert output in generated_code
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -756,16 +529,14 @@ def test_itn_space_fld_and_op_writers(tmpdir):
assert ("loop0_stop = mesh%get_last_halo_cell(1)\n" in
generated_code)
output = (
- " !\n"
- " DO cell = loop0_start, loop0_stop, 1\n")
+ " do cell = loop0_start, loop0_stop, 1\n")
assert output in generated_code
else:
assert ("loop0_stop = op1_proxy%fs_from%get_ncell()\n" in
generated_code)
output = (
- " ! Call our kernels\n"
- " !\n"
- " DO cell = loop0_start, loop0_stop, 1")
+ " ! Call kernels\n"
+ " do cell = loop0_start, loop0_stop, 1")
assert output in generated_code
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -792,14 +563,13 @@ def test_itn_space_any_any_discontinuous(dist_mem, tmpdir):
if dist_mem:
assert "loop0_stop = mesh%get_last_halo_cell(1)" in generated_code
output = (
- " DO cell = loop0_start, loop0_stop, 1\n")
+ " do cell = loop0_start, loop0_stop, 1\n")
assert output in generated_code
else:
assert "loop0_stop = f1_proxy%vspace%get_ncell()" in generated_code
output = (
- " ! Call our kernels\n"
- " !\n"
- " DO cell = loop0_start, loop0_stop, 1\n")
+ " ! Call kernels\n"
+ " do cell = loop0_start, loop0_stop, 1\n")
assert output in generated_code
@@ -823,7 +593,7 @@ def test_itn_space_any_w2trace(dist_mem, tmpdir):
if dist_mem:
assert "loop0_stop = mesh%get_last_halo_cell(1)\n" in generated_code
output = (
- " DO cell = loop0_start, loop0_stop, 1\n")
+ " do cell = loop0_start, loop0_stop, 1\n")
assert output in generated_code
else:
# Loop upper bound should use f2 as that field is *definitely*
@@ -831,9 +601,8 @@ def test_itn_space_any_w2trace(dist_mem, tmpdir):
# that might be).
assert "loop0_stop = f2_proxy%vspace%get_ncell()" in generated_code
output = (
- " ! Call our kernels\n"
- " !\n"
- " DO cell = loop0_start, loop0_stop, 1\n")
+ " ! Call kernels\n"
+ " do cell = loop0_start, loop0_stop, 1\n")
assert output in generated_code
@@ -882,12 +651,12 @@ def test_halo_for_discontinuous(tmpdir, monkeypatch, annexed):
if annexed:
assert "halo_exchange" not in result
else:
- assert "IF (f1_proxy%is_dirty(depth=1)) THEN" in result
- assert "CALL f1_proxy%halo_exchange(depth=1)" in result
- assert "IF (f2_proxy%is_dirty(depth=1)) THEN" in result
- assert "CALL f2_proxy%halo_exchange(depth=1)" in result
- assert "IF (m1_proxy%is_dirty(depth=1)) THEN" in result
- assert "CALL m1_proxy%halo_exchange(depth=1)" in result
+ assert "if (f1_proxy%is_dirty(depth=1)) then" in result
+ assert "call f1_proxy%halo_exchange(depth=1)" in result
+ assert "if (f2_proxy%is_dirty(depth=1)) then" in result
+ assert "call f2_proxy%halo_exchange(depth=1)" in result
+ assert "if (m1_proxy%is_dirty(depth=1)) then" in result
+ assert "call m1_proxy%halo_exchange(depth=1)" in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -917,12 +686,12 @@ def test_halo_for_discontinuous_2(tmpdir, monkeypatch, annexed):
if annexed:
assert "halo_exchange" not in result
else:
- assert "IF (f1_proxy%is_dirty(depth=1)) THEN" not in result
- assert "CALL f1_proxy%halo_exchange(depth=1)" in result
- assert "IF (f2_proxy%is_dirty(depth=1)) THEN" not in result
- assert "CALL f2_proxy%halo_exchange(depth=1)" in result
- assert "IF (m1_proxy%is_dirty(depth=1)) THEN" in result
- assert "CALL m1_proxy%halo_exchange(depth=1)" in result
+ assert "if (f1_proxy%is_dirty(depth=1)) then" not in result
+ assert "call f1_proxy%halo_exchange(depth=1)" in result
+ assert "if (f2_proxy%is_dirty(depth=1)) then" not in result
+ assert "call f2_proxy%halo_exchange(depth=1)" in result
+ assert "if (m1_proxy%is_dirty(depth=1)) then" in result
+ assert "call m1_proxy%halo_exchange(depth=1)" in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -971,7 +740,7 @@ def test_null_loop():
check in the 'load()' method behaves as expected.
'''
- loop = LFRicLoop(loop_type="null")
+ loop = LFRicLoop(loop_type="null", parent=LFRicInvokeSchedule.create("a"))
assert loop.loop_type == "null"
assert loop.node_str(colour=False) == "Loop[type='null']"
@@ -1006,7 +775,7 @@ def test_loop_independent_iterations(monkeypatch, dist_mem):
'''Tests for the independent_iterations() method.'''
# A 'null' loop cannot be parallelised (because there's nothing to
# parallelise).
- loop = LFRicLoop(loop_type="null")
+ loop = LFRicLoop(loop_type="null", parent=LFRicInvokeSchedule.create("a"))
assert not loop.independent_iterations()
# A loop over all columns that contains a kernel that increments a field
# on a continuous function space does not have independent iterations.
@@ -1086,3 +855,59 @@ def test_dof_loop_independent_iterations(monkeypatch, dist_mem):
lambda _1: [Message("just a test",
DTCode.WARN_SCALAR_REDUCTION)])
assert loop.independent_iterations()
+
+
+def test_upper_bound_fortran_invalid_bound():
+ ''' Tests we raise an exception in the LFRicLoop:_upper_bound_fortran()
+ method when 'cell_halo', 'dof_halo' or 'inner' are used.
+
+ '''
+ _, invoke_info = parse(os.path.join(BASE_PATH, "1_single_invoke.f90"),
+ api=TEST_API)
+ psy = PSyFactory(TEST_API, distributed_memory=False).create(invoke_info)
+ my_loop = psy.invokes.invoke_list[0].schedule.children[0]
+ for option in ["cell_halo", "dof_halo", "inner"]:
+ my_loop.set_upper_bound(option, halo_depth=1)
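+ # These bounds are only meaningful for distributed-memory code, so
+ # building the upper-bound expression must fail here (dm is disabled).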
+ with pytest.raises(GenerationError) as excinfo:
+ _ = my_loop.upper_bound_psyir()
+ assert (
+ f"'{option}' is not a valid loop upper bound for sequential/"
+ f"shared-memory code" in str(excinfo.value))
+
+
+def test_upper_bound_fortran_invalid_within_colouring(monkeypatch):
+ ''' Tests we raise an exception in the LFRicLoop.upper_bound_psyir()
+ method if an invalid value is provided or if a loop over colours does
+ not contain a kernel.
+
+ '''
+ _, invoke_info = parse(os.path.join(BASE_PATH, "1_single_invoke.f90"),
+ api=TEST_API)
+ psy = PSyFactory(TEST_API, distributed_memory=False).create(invoke_info)
+ my_loop = psy.invokes.invoke_list[0].schedule.children[0]
+ monkeypatch.setattr(my_loop, "_upper_bound_name", value="invalid")
+ with pytest.raises(GenerationError) as excinfo:
+ _ = my_loop.upper_bound_psyir()
+ assert (
+ "Unsupported upper bound name 'invalid' found" in str(excinfo.value))
+ # Pretend the loop is over colours and does not contain a kernel
+ monkeypatch.setattr(my_loop, "_upper_bound_name", value="ncolours")
+ monkeypatch.setattr(my_loop, "walk", lambda x: [])
+ with pytest.raises(InternalError) as excinfo:
+ _ = my_loop.upper_bound_psyir()
+ assert ("Failed to find a kernel within a loop over colours"
+ in str(excinfo.value))
+
+
+def test_upper_bound_psyir_inner(monkeypatch):
+ ''' Check that upper_bound_psyir() generates the correct bound expression
+ when a loop's upper bound is 'inner'. There are no transformations that
+ allow this configuration, so we need to patch the value.
+
+ '''
+ _, invoke_info = parse(os.path.join(BASE_PATH, "1_single_invoke.f90"),
+ api=TEST_API)
+ psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
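+ # With distributed memory the schedule begins with four halo exchanges,
+ # so the loop over cells is the fifth child of the schedule.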
+ my_loop = psy.invokes.invoke_list[0].schedule.children[4]
+ monkeypatch.setattr(my_loop, "_upper_bound_name", value="inner")
+ ubound = my_loop.upper_bound_psyir()
+ assert "mesh%get_last_inner_cell(1)" in ubound.debug_string()
diff --git a/src/psyclone/tests/domain/lfric/lfric_mesh_props_stubgen_test.py b/src/psyclone/tests/domain/lfric/lfric_mesh_props_stubgen_test.py
index 1d23921e8d..ab0cd1662c 100644
--- a/src/psyclone/tests/domain/lfric/lfric_mesh_props_stubgen_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_mesh_props_stubgen_test.py
@@ -79,7 +79,7 @@
'''
-def test_mesh_prop_stub_gen():
+def test_mesh_prop_stub_gen(fortran_writer):
''' Check that correct kernel stub code is produced when the kernel
metadata contains a mesh property. '''
ast = fpapi.parse(os.path.join(BASE_PATH,
@@ -88,32 +88,34 @@ def test_mesh_prop_stub_gen():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- gen = str(kernel.gen_stub).lower()
-
- output = (
- " module testkern_mesh_prop_mod\n"
- " implicit none\n"
- " contains\n"
- " subroutine testkern_mesh_prop_code(nlayers, rscalar_1, "
- "field_2_w1, ndf_w1, undf_w1, map_w1, nfaces_re_h, adjacent_face)\n"
- " use constants_mod\n"
- " implicit none\n"
- " integer(kind=i_def), intent(in) :: nlayers\n"
- " integer(kind=i_def), intent(in) :: ndf_w1\n"
- " integer(kind=i_def), intent(in), dimension(ndf_w1) :: map_w1\n"
- " integer(kind=i_def), intent(in) :: undf_w1\n"
- " real(kind=r_def), intent(in) :: rscalar_1\n"
- " real(kind=r_def), intent(inout), dimension(undf_w1) :: "
- "field_2_w1\n"
- " integer(kind=i_def), intent(in) :: nfaces_re_h\n"
- " integer(kind=i_def), intent(in), dimension(nfaces_re_h) :: "
- "adjacent_face\n"
- " end subroutine testkern_mesh_prop_code\n"
- " end module testkern_mesh_prop_mod")
- assert output in gen
-
-
-def test_mesh_props_quad_stub_gen():
+ gen = fortran_writer(kernel.gen_stub)
+
+ assert """\
+module testkern_mesh_prop_mod
+ implicit none
+ public
+
+ contains
+ subroutine testkern_mesh_prop_code(nlayers, rscalar_1, field_2_w1, ndf_w1, \
+undf_w1, map_w1, nfaces_re_h, adjacent_face)
+ use constants_mod
+ integer(kind=i_def), intent(in) :: nlayers
+ integer(kind=i_def), intent(in) :: ndf_w1
+ integer(kind=i_def), dimension(ndf_w1), intent(in) :: map_w1
+ integer(kind=i_def), intent(in) :: undf_w1
+ real(kind=r_def), intent(in) :: rscalar_1
+ real(kind=r_def), dimension(undf_w1), intent(inout) :: field_2_w1
+ integer(kind=i_def), intent(in) :: nfaces_re_h
+ integer(kind=i_def), dimension(nfaces_re_h), intent(in) :: adjacent_face
+
+
+ end subroutine testkern_mesh_prop_code
+
+end module testkern_mesh_prop_mod
+""" == gen
+
+
+def test_mesh_props_quad_stub_gen(fortran_writer):
''' Check that correct stub code is produced when the kernel metadata
specifies both mesh and quadrature properties (quadrature
properties should be placed at the end of subroutine argument list). '''
@@ -121,35 +123,48 @@ def test_mesh_props_quad_stub_gen():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- gen = str(kernel.gen_stub)
-
- output1 = (
- " SUBROUTINE testkern_mesh_prop_quad_code(nlayers, field_1_w1, "
- "field_2_wtheta, ndf_w1, undf_w1, map_w1, basis_w1_qr_xyoz, "
- "ndf_wtheta, undf_wtheta, map_wtheta, basis_wtheta_qr_xyoz, "
- "nfaces_re_h, nfaces_re, normals_to_horiz_faces, "
- "out_normals_to_faces, adjacent_face, np_xy_qr_xyoz, "
- "np_z_qr_xyoz, weights_xy_qr_xyoz, weights_z_qr_xyoz)")
- assert output1 in gen
- output2 = (
- " INTEGER(KIND=i_def), intent(in) :: np_xy_qr_xyoz, "
- "np_z_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), "
- "dimension(3,ndf_w1,np_xy_qr_xyoz,np_z_qr_xyoz) :: basis_w1_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), "
- "dimension(1,ndf_wtheta,np_xy_qr_xyoz,np_z_qr_xyoz) :: "
- "basis_wtheta_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(np_xy_qr_xyoz) :: "
- "weights_xy_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(np_z_qr_xyoz) :: "
- "weights_z_qr_xyoz\n"
- " INTEGER(KIND=i_def), intent(in) :: nfaces_re_h\n"
- " INTEGER(KIND=i_def), intent(in) :: nfaces_re\n"
- " REAL(KIND=r_def), intent(in), dimension(3,nfaces_re_h) :: "
- "normals_to_horiz_faces\n"
- " REAL(KIND=r_def), intent(in), dimension(3,nfaces_re) :: "
- "out_normals_to_faces\n"
- " INTEGER(KIND=i_def), intent(in), dimension(nfaces_re_h) :: "
- "adjacent_face\n"
- )
- assert output2 in gen
+ gen = fortran_writer(kernel.gen_stub)
+
+ assert """\
+module testkern_mesh_prop_quad_mod
+ implicit none
+ public
+
+ contains
+ subroutine testkern_mesh_prop_quad_code(nlayers, field_1_w1, \
+field_2_wtheta, ndf_w1, undf_w1, map_w1, basis_w1_qr_xyoz, ndf_wtheta, \
+undf_wtheta, map_wtheta, basis_wtheta_qr_xyoz, nfaces_re_h, nfaces_re, \
+normals_to_horiz_faces, out_normals_to_faces, adjacent_face, np_xy_qr_xyoz, \
+np_z_qr_xyoz, weights_xy_qr_xyoz, weights_z_qr_xyoz)
+ use constants_mod
+ integer(kind=i_def), intent(in) :: nlayers
+ integer(kind=i_def), intent(in) :: ndf_w1
+ integer(kind=i_def), dimension(ndf_w1), intent(in) :: map_w1
+ integer(kind=i_def), intent(in) :: ndf_wtheta
+ integer(kind=i_def), dimension(ndf_wtheta), intent(in) :: map_wtheta
+ integer(kind=i_def), intent(in) :: undf_w1
+ integer(kind=i_def), intent(in) :: undf_wtheta
+ real(kind=r_def), dimension(undf_w1), intent(in) :: field_1_w1
+ real(kind=r_def), dimension(undf_wtheta), intent(inout) :: field_2_wtheta
+ integer(kind=i_def), intent(in) :: np_xy_qr_xyoz
+ integer(kind=i_def), intent(in) :: np_z_qr_xyoz
+ real(kind=r_def), dimension(3,ndf_w1,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: basis_w1_qr_xyoz
+ real(kind=r_def), dimension(1,ndf_wtheta,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: basis_wtheta_qr_xyoz
+ real(kind=r_def), dimension(np_xy_qr_xyoz), intent(in) :: \
+weights_xy_qr_xyoz
+ real(kind=r_def), dimension(np_z_qr_xyoz), intent(in) :: weights_z_qr_xyoz
+ integer(kind=i_def), intent(in) :: nfaces_re_h
+ integer(kind=i_def), intent(in) :: nfaces_re
+ real(kind=r_def), dimension(3,nfaces_re_h), intent(in) :: \
+normals_to_horiz_faces
+ real(kind=r_def), dimension(3,nfaces_re), intent(in) :: \
+out_normals_to_faces
+ integer(kind=i_def), dimension(nfaces_re_h), intent(in) :: adjacent_face
+
+
+ end subroutine testkern_mesh_prop_quad_code
+
+end module testkern_mesh_prop_quad_mod
+""" == gen
diff --git a/src/psyclone/tests/domain/lfric/lfric_mesh_props_test.py b/src/psyclone/tests/domain/lfric/lfric_mesh_props_test.py
index bc2ea9d997..8bfdc724e1 100644
--- a/src/psyclone/tests/domain/lfric/lfric_mesh_props_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_mesh_props_test.py
@@ -46,7 +46,6 @@
from psyclone.domain.lfric import LFRicKernMetadata
from psyclone.dynamo0p3 import LFRicMeshProperties, MeshProperty
from psyclone.errors import InternalError
-from psyclone.f2pygen import ModuleGen
from psyclone.parse.algorithm import parse
from psyclone.parse.utils import ParseError
from psyclone.psyGen import PSyFactory, Kern
@@ -170,7 +169,7 @@ def test_mesh_properties():
api=TEST_API)
psy = PSyFactory(TEST_API, distributed_memory=False).create(invoke_info)
invoke = psy.invokes.invoke_list[0]
- # Check that the kern_args() and _stub_declarations() methods raise the
+ # Check that the kern_args() and stub_declarations() methods raise the
# expected error if the LFRicMeshProperties class has been created for
# an Invoke.
with pytest.raises(InternalError) as err:
@@ -178,19 +177,15 @@ def test_mesh_properties():
assert ("only be called when LFRicMeshProperties has been instantiated "
"for a kernel" in str(err.value))
with pytest.raises(InternalError) as err:
- invoke.mesh_properties._stub_declarations(None)
- assert ("cannot be called because LFRicMeshProperties has been "
- "instantiated for an invoke and not a kernel" in str(err.value))
+ invoke.mesh_properties.stub_declarations()
+ assert ("stub_declarations() can only be called with a LFRicMeshProperties"
+ " instantiated for a kernel (not an invoke)." in str(err.value))
# Break the list of mesh properties
invoke.mesh_properties._properties.append("not-a-property")
with pytest.raises(InternalError) as err:
- invoke.mesh_properties._invoke_declarations(ModuleGen("test_mod"))
+ invoke.mesh_properties.invoke_declarations()
assert ("Found unsupported mesh property 'not-a-property' when "
"generating invoke declarations. Only " in str(err.value))
- with pytest.raises(InternalError) as err:
- invoke.mesh_properties.initialise(ModuleGen("test_mod"))
- assert ("Found unsupported mesh property 'not-a-property' when generating"
- " initialisation code" in str(err.value))
sched = invoke.schedule
# Get hold of the Kernel object
kernel = sched.walk(Kern)[0]
@@ -208,11 +203,12 @@ def test_mesh_properties():
"Only members of the MeshProperty Enum are"
in str(err.value))
with pytest.raises(InternalError) as err:
- mesh_props._invoke_declarations(ModuleGen("test_mod"))
- assert ("cannot be called because LFRicMeshProperties has been "
- "instantiated for a kernel and not an invoke." in str(err.value))
+ mesh_props.invoke_declarations()
+ assert ("invoke_declarations() can only be called with a LFRicMesh"
+ "Properties instantiated for an invoke (not a kernel)."
+ in str(err.value))
with pytest.raises(InternalError) as err:
- mesh_props._stub_declarations(ModuleGen("test_mod"))
+ mesh_props.stub_declarations()
assert ("Found unsupported mesh property 'not-a-property' when "
"generating declarations for kernel stub. Only " in str(err.value))
@@ -231,8 +227,8 @@ def test_mesh_gen(tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
gen = str(psy.gen).lower()
# In order to provide the mesh property we need the reference element
- assert "use reference_element_mod, only: reference_element_type" in gen
- assert "integer(kind=i_def) nfaces_re_h" in gen
+ assert "use reference_element_mod, only : reference_element_type" in gen
+ assert "integer(kind=i_def) :: nfaces_re_h" in gen
assert ("integer(kind=i_def), pointer :: adjacent_face(:,:) => null()"
in gen)
assert ("class(reference_element_type), pointer :: reference_element "
@@ -288,12 +284,12 @@ def test_mesh_prop_plus_ref_elem_gen(tmpdir):
gen = str(psy.gen).lower()
assert (
- " reference_element => mesh%get_reference_element()\n"
- " nfaces_re_h = reference_element%get_number_horizontal_faces()\n"
- " nfaces_re_v = reference_element%get_number_vertical_faces()\n"
- " call reference_element%get_normals_to_horizontal_faces("
+ " reference_element => mesh%get_reference_element()\n"
+ " nfaces_re_h = reference_element%get_number_horizontal_faces()\n"
+ " nfaces_re_v = reference_element%get_number_vertical_faces()\n"
+ " call reference_element%get_normals_to_horizontal_faces("
"normals_to_horiz_faces)\n"
- " call reference_element%get_normals_to_vertical_faces("
+ " call reference_element%get_normals_to_vertical_faces("
"normals_to_vert_faces)\n" in gen)
assert ("call testkern_mesh_ref_elem_props_code(nlayers_f1, a, "
"f1_data, ndf_w1, undf_w1, map_w1(:,cell), nfaces_re_h, "
@@ -312,24 +308,22 @@ def test_mesh_plus_face_quad_gen(tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
gen = str(psy.gen).lower()
- assert (" qr_proxy = qr%get_quadrature_proxy()\n"
- " np_xyz_qr = qr_proxy%np_xyz\n"
- " nfaces_qr = qr_proxy%nfaces\n"
- " weights_xyz_qr => qr_proxy%weights_xyz\n"
- " !\n"
- " ! allocate basis/diff-basis arrays\n"
- " !\n"
- " dim_w1 = f1_proxy%vspace%get_dim_space()\n"
- " allocate (basis_w1_qr(dim_w1, ndf_w1, np_xyz_qr, "
+ assert (" qr_proxy = qr%get_quadrature_proxy()\n"
+ " np_xyz_qr = qr_proxy%np_xyz\n"
+ " nfaces_qr = qr_proxy%nfaces\n"
+ " weights_xyz_qr => qr_proxy%weights_xyz\n"
+ "\n"
+ " ! allocate basis/diff-basis arrays\n"
+ " dim_w1 = f1_proxy%vspace%get_dim_space()\n"
+ " allocate(basis_w1_qr(dim_w1,ndf_w1,np_xyz_qr,"
"nfaces_qr))" in gen)
- assert (" reference_element => mesh%get_reference_element()\n"
- " nfaces_re_h = reference_element%"
+ assert (" reference_element => mesh%get_reference_element()\n"
+ " nfaces_re_h = reference_element%"
"get_number_horizontal_faces()\n"
- " !\n"
- " ! initialise mesh properties\n"
- " !\n"
- " adjacent_face => mesh%get_adjacent_face()" in gen)
+ "\n"
+ " ! initialise mesh properties\n"
+ " adjacent_face => mesh%get_adjacent_face()" in gen)
assert ("call testkern_mesh_prop_face_qr_code(nlayers_f1, a, f1_data, "
"ndf_w1, undf_w1, map_w1(:,cell), basis_w1_qr, "
@@ -350,22 +344,27 @@ def test_multi_kernel_mesh_props(tmpdir):
gen = str(psy.gen).lower()
# Declarations
- assert (
- " real(kind=r_def), pointer :: weights_xyz_qr(:,:) => null()\n"
- " integer(kind=i_def) np_xyz_qr, nfaces_qr\n"
- " integer(kind=i_def), pointer :: adjacent_face(:,:) => null()\n"
- " real(kind=r_def), allocatable :: normals_to_horiz_faces(:,:), "
- "normals_to_vert_faces(:,:)\n"
- " integer(kind=i_def) nfaces_re_h, nfaces_re_v\n"
- " class(reference_element_type), pointer :: reference_element => "
- "null()\n" in gen)
+ assert ("real(kind=r_def), pointer, dimension(:,:) :: weights_xyz_qr => "
+ "null()\n") in gen
+ assert "integer(kind=i_def) :: np_xyz_qr\n" in gen
+ assert "integer(kind=i_def) :: nfaces_qr\n" in gen
+ assert ("integer(kind=i_def), pointer :: adjacent_face(:,:) => null()\n"
+ in gen)
+ assert ("real(kind=r_def), allocatable, dimension(:,:) :: "
+ "normals_to_horiz_faces" in gen)
+ assert ("real(kind=r_def), allocatable, dimension(:,:) :: "
+ "normals_to_vert_faces" in gen)
+ assert "integer(kind=i_def) :: nfaces_re_h\n" in gen
+ assert "integer(kind=i_def) :: nfaces_re_v\n" in gen
+ assert ("class(reference_element_type), pointer :: reference_element => "
+ "null()\n" in gen)
# Initialisations
assert "type(mesh_type), pointer :: mesh => null()" in gen
assert "nfaces_qr = qr_proxy%nfaces" in gen
assert (
- " reference_element => mesh%get_reference_element()\n"
- " nfaces_re_h = reference_element%get_number_horizontal_faces()\n"
- " nfaces_re_v = reference_element%get_number_vertical_faces()"
+ " reference_element => mesh%get_reference_element()\n"
+ " nfaces_re_h = reference_element%get_number_horizontal_faces()\n"
+ " nfaces_re_v = reference_element%get_number_vertical_faces()"
in gen)
assert "adjacent_face => mesh%get_adjacent_face()" in gen
# Call to kernel requiring props of the reference element & adjacent faces
diff --git a/src/psyclone/tests/domain/lfric/lfric_psy_test.py b/src/psyclone/tests/domain/lfric/lfric_psy_test.py
index b6fe6ad1b4..ee403ccd49 100644
--- a/src/psyclone/tests/domain/lfric/lfric_psy_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_psy_test.py
@@ -38,13 +38,13 @@
'''This module tests the LFRicPSy class found in the LFRic domain.
'''
-from collections import OrderedDict
import os
from psyclone.configuration import Config
-from psyclone.domain.lfric import LFRicPSy, LFRicConstants, LFRicInvokes
+from psyclone.domain.lfric import LFRicPSy, LFRicInvokes
from psyclone.parse.algorithm import parse
from psyclone.psyGen import PSy
+from psyclone.tests.lfric_build import LFRicBuild
BASE_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)),
os.pardir, os.pardir, "test_files", "dynamo0p3")
@@ -71,14 +71,6 @@ def test_lfricpsy():
assert isinstance(lfric_psy, LFRicPSy)
assert issubclass(LFRicPSy, PSy)
assert isinstance(lfric_psy._invokes, LFRicInvokes)
- infrastructure_modules = lfric_psy._infrastructure_modules
- assert isinstance(infrastructure_modules, OrderedDict)
- assert list(infrastructure_modules["constants_mod"]) == ["i_def"]
- const = LFRicConstants()
- names = set(item["module"] for item in const.DATA_TYPE_MAP.values())
- assert len(names)+1 == len(infrastructure_modules)
- for module_name in names:
- assert infrastructure_modules[module_name] == set()
def test_lfricpsy_kind():
@@ -92,7 +84,7 @@ def test_lfricpsy_kind():
BASE_PATH, "15.12.3_single_pointwise_builtin.f90"), api="lfric")
lfric_psy = LFRicPSy(invoke_info)
result = str(lfric_psy.gen)
- assert "USE constants_mod, ONLY: r_def, i_def" in result
+ assert "use constants_mod\n" in result
assert "f1_data(df) = 0.0\n" in result
# 2: Literal kind value is declared (trying with two cases to check)
for kind_name in ["r_solver", "r_tran"]:
@@ -100,7 +92,7 @@ def test_lfricpsy_kind():
invoke_info.calls[0].kcalls[0].args[1]._datatype = ("real", kind_name)
lfric_psy = LFRicPSy(invoke_info)
result = str(lfric_psy.gen).lower()
- assert f"use constants_mod, only: {kind_name}, r_def, i_def" in result
+ assert "use constants_mod\n" in result
assert f"f1_data(df) = 0.0_{kind_name}" in result
@@ -116,18 +108,6 @@ def test_lfricpsy_names():
assert lfric_psy.orig_name == supplied_name
-def test_lfricpsy_inf_modules():
- '''Check that the infrastructure_modules() method of LFRicPSy (which
- is implemented as a property) behaves as expected. In this case we
- check that it returns the values set up in the initialisation of
- an instance of LFRicPSy.
-
- '''
- lfric_psy = LFRicPSy(DummyInvokeInfo())
- assert (lfric_psy.infrastructure_modules is
- lfric_psy._infrastructure_modules)
-
-
def test_lfricpsy_gen_no_invoke():
'''Check that the gen() method of LFRicPSy behaves as expected for a
minimal psy-layer when the algorithm layer does not contain any
@@ -135,17 +115,20 @@ def test_lfricpsy_gen_no_invoke():
'''
expected_result = (
- " MODULE hello_psy\n"
- " USE constants_mod, ONLY: i_def\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " END MODULE hello_psy")
+ "module hello_psy\n"
+ " use constants_mod\n"
+ " implicit none\n"
+ " public\n"
+ "\n"
+ " contains\n"
+ "\n"
+ "end module hello_psy\n")
lfric_psy = LFRicPSy(DummyInvokeInfo(name="hello"))
result = lfric_psy.gen
assert str(result) == expected_result
-def test_lfricpsy_gen(monkeypatch):
+def test_lfricpsy_gen(monkeypatch, tmpdir):
'''Check that the gen() method of LFRicPSy behaves as expected when
generating a psy-layer from an algorithm layer containing invoke
calls. Simply check that the PSy-layer code for the invoke call is
@@ -169,22 +152,20 @@ def test_lfricpsy_gen(monkeypatch):
config.distributed_memory = True
lfric_psy = LFRicPSy(invoke_info)
result = str(lfric_psy.gen)
- expected = (
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_code(nlayers_f1, ginger, f1_data, "
+ assert (
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_code(nlayers_f1, ginger, f1_data, "
"f2_data, m1_data, m2_data, ndf_w1, undf_w1, "
"map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, undf_w3, "
"map_w3(:,cell))\n"
- " END DO\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the above loop\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " !\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: setval_c (set a real-valued field to a real "
+ " enddo\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the above "
+ "loop(s)\n"
+ " call f1_proxy%set_dirty()\n"
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: setval_c (set a real-valued field to a real "
"scalar value)\n"
- " f1_data(df) = 0.0_r_def\n"
- " END DO\n")
-
- assert expected in result
+ " f1_data(df) = 0.0_r_def\n"
+ " enddo\n" in result)
+ assert LFRicBuild(tmpdir).code_compiles(lfric_psy)
diff --git a/src/psyclone/tests/domain/lfric/lfric_ref_elem_stubgen_test.py b/src/psyclone/tests/domain/lfric/lfric_ref_elem_stubgen_test.py
index 22cf138eaa..c9f7f2db88 100644
--- a/src/psyclone/tests/domain/lfric/lfric_ref_elem_stubgen_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_ref_elem_stubgen_test.py
@@ -75,7 +75,7 @@
'''
-def test_refelem_stub_gen():
+def test_refelem_stub_gen(fortran_writer):
''' Check that correct kernel stub code is produced when the kernel
metadata contain reference element properties. '''
ast = fpapi.parse(os.path.join(BASE_PATH,
@@ -84,48 +84,49 @@ def test_refelem_stub_gen():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- gen = str(kernel.gen_stub)
+ gen = fortran_writer(kernel.gen_stub)
- output = (
- " MODULE testkern_ref_elem_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE testkern_ref_elem_code(nlayers, rscalar_1, "
- "field_2_w1, field_3_w2, field_4_w2, field_5_w3, ndf_w1, undf_w1, "
- "map_w1, ndf_w2, undf_w2, map_w2, ndf_w3, undf_w3, map_w3, "
- "nfaces_re_h, nfaces_re_v, normals_to_horiz_faces, "
- "normals_to_vert_faces)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w1\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w1) :: map_w1\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2) :: map_w2\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w3\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w3) :: map_w3\n"
- " INTEGER(KIND=i_def), intent(in) :: undf_w1, undf_w2, undf_w3\n"
- " REAL(KIND=r_def), intent(in) :: rscalar_1\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w1) :: "
- "field_2_w1\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_w2) :: "
- "field_3_w2\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_w2) :: "
- "field_4_w2\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_w3) :: "
- "field_5_w3\n"
- " INTEGER(KIND=i_def), intent(in) :: nfaces_re_h\n"
- " INTEGER(KIND=i_def), intent(in) :: nfaces_re_v\n"
- " REAL(KIND=r_def), intent(in), dimension(3,nfaces_re_h) :: "
- "normals_to_horiz_faces\n"
- " REAL(KIND=r_def), intent(in), dimension(3,nfaces_re_v) :: "
- "normals_to_vert_faces\n"
- " END SUBROUTINE testkern_ref_elem_code\n"
- " END MODULE testkern_ref_elem_mod")
- assert output in gen
+ assert """\
+module testkern_ref_elem_mod
+ implicit none
+ public
+ contains
+ subroutine testkern_ref_elem_code(nlayers, rscalar_1, field_2_w1, \
+field_3_w2, field_4_w2, field_5_w3, ndf_w1, undf_w1, map_w1, ndf_w2, \
+undf_w2, map_w2, ndf_w3, undf_w3, map_w3, nfaces_re_h, nfaces_re_v, \
+normals_to_horiz_faces, normals_to_vert_faces)
+ use constants_mod
+ integer(kind=i_def), intent(in) :: nlayers
+ integer(kind=i_def), intent(in) :: ndf_w1
+ integer(kind=i_def), dimension(ndf_w1), intent(in) :: map_w1
+ integer(kind=i_def), intent(in) :: ndf_w2
+ integer(kind=i_def), dimension(ndf_w2), intent(in) :: map_w2
+ integer(kind=i_def), intent(in) :: ndf_w3
+ integer(kind=i_def), dimension(ndf_w3), intent(in) :: map_w3
+ integer(kind=i_def), intent(in) :: undf_w1
+ integer(kind=i_def), intent(in) :: undf_w2
+ integer(kind=i_def), intent(in) :: undf_w3
+ real(kind=r_def), intent(in) :: rscalar_1
+ real(kind=r_def), dimension(undf_w1), intent(inout) :: field_2_w1
+ real(kind=r_def), dimension(undf_w2), intent(in) :: field_3_w2
+ real(kind=r_def), dimension(undf_w2), intent(in) :: field_4_w2
+ real(kind=r_def), dimension(undf_w3), intent(in) :: field_5_w3
+ integer(kind=i_def), intent(in) :: nfaces_re_h
+ integer(kind=i_def), intent(in) :: nfaces_re_v
+ real(kind=r_def), dimension(3,nfaces_re_h), intent(in) \
+:: normals_to_horiz_faces
+ real(kind=r_def), dimension(3,nfaces_re_v), intent(in) \
+:: normals_to_vert_faces
-def test_refelem_quad_stub_gen():
+
+ end subroutine testkern_ref_elem_code
+
+end module testkern_ref_elem_mod
+""" == gen
+
+
+def test_refelem_quad_stub_gen(fortran_writer):
''' Check that correct stub code is produced when the kernel metadata
contain reference element and quadrature properties (quadrature
properties should be placed at the end of subroutine argument list). '''
@@ -133,30 +134,35 @@ def test_refelem_quad_stub_gen():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- gen = str(kernel.gen_stub)
+ gen = fortran_writer(kernel.gen_stub)
output1 = (
- " SUBROUTINE testkern_refelem_quad_code(nlayers, field_1_w1, "
+ " subroutine testkern_refelem_quad_code(nlayers, field_1_w1, "
"field_2_wtheta, ndf_w1, undf_w1, map_w1, basis_w1_qr_xyoz, "
"ndf_wtheta, undf_wtheta, map_wtheta, basis_wtheta_qr_xyoz, "
"nfaces_re, normals_to_faces, out_normals_to_faces, np_xy_qr_xyoz, "
"np_z_qr_xyoz, weights_xy_qr_xyoz, weights_z_qr_xyoz)")
assert output1 in gen
- output2 = (
- " INTEGER(KIND=i_def), intent(in) :: np_xy_qr_xyoz, "
- "np_z_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), "
- "dimension(3,ndf_w1,np_xy_qr_xyoz,np_z_qr_xyoz) :: basis_w1_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), "
- "dimension(1,ndf_wtheta,np_xy_qr_xyoz,np_z_qr_xyoz) :: "
- "basis_wtheta_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(np_xy_qr_xyoz) :: "
- "weights_xy_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(np_z_qr_xyoz) :: "
- "weights_z_qr_xyoz\n"
- " INTEGER(KIND=i_def), intent(in) :: nfaces_re\n"
- " REAL(KIND=r_def), intent(in), dimension(3,nfaces_re) :: "
- "normals_to_faces\n"
- " REAL(KIND=r_def), intent(in), dimension(3,nfaces_re) :: "
- "out_normals_to_faces")
- assert output2 in gen
+ assert """\
+ integer(kind=i_def), intent(in) :: nlayers
+ integer(kind=i_def), intent(in) :: ndf_w1
+ integer(kind=i_def), dimension(ndf_w1), intent(in) :: map_w1
+ integer(kind=i_def), intent(in) :: ndf_wtheta
+ integer(kind=i_def), dimension(ndf_wtheta), intent(in) :: map_wtheta
+ integer(kind=i_def), intent(in) :: undf_w1
+ integer(kind=i_def), intent(in) :: undf_wtheta
+ real(kind=r_def), dimension(undf_w1), intent(in) :: field_1_w1
+ real(kind=r_def), dimension(undf_wtheta), intent(inout) :: field_2_wtheta
+ integer(kind=i_def), intent(in) :: np_xy_qr_xyoz
+ integer(kind=i_def), intent(in) :: np_z_qr_xyoz
+ real(kind=r_def), dimension(3,ndf_w1,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: basis_w1_qr_xyoz
+ real(kind=r_def), dimension(1,ndf_wtheta,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: basis_wtheta_qr_xyoz
+ real(kind=r_def), dimension(np_xy_qr_xyoz), intent(in) \
+:: weights_xy_qr_xyoz
+ real(kind=r_def), dimension(np_z_qr_xyoz), intent(in) :: weights_z_qr_xyoz
+ integer(kind=i_def), intent(in) :: nfaces_re
+ real(kind=r_def), dimension(3,nfaces_re), intent(in) :: normals_to_faces
+ real(kind=r_def), dimension(3,nfaces_re), intent(in) \
+:: out_normals_to_faces""" in gen
diff --git a/src/psyclone/tests/domain/lfric/lfric_scalar_codegen_test.py b/src/psyclone/tests/domain/lfric/lfric_scalar_codegen_test.py
index fef442eef7..00325e7fd3 100644
--- a/src/psyclone/tests/domain/lfric/lfric_scalar_codegen_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_scalar_codegen_test.py
@@ -65,94 +65,96 @@ def test_real_scalar(tmpdir):
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
generated_code = str(psy.gen)
- assert LFRicBuild(tmpdir).code_compiles(psy)
-
expected = (
- " SUBROUTINE invoke_0_testkern_type(a, f1, f2, m1, m2)\n"
- " USE testkern_mod, ONLY: testkern_code\n"
- " USE mesh_mod, ONLY: mesh_type\n"
- " REAL(KIND=r_def), intent(in) :: a\n"
- " TYPE(field_type), intent(in) :: f1, f2, m1, m2\n"
- " INTEGER(KIND=i_def) cell\n"
- " INTEGER(KIND=i_def) loop0_start, loop0_stop\n"
- " INTEGER(KIND=i_def) nlayers_f1\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m1_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f1_data => null()\n"
- " TYPE(field_proxy_type) f1_proxy, f2_proxy, m1_proxy, m2_proxy\n"
- " INTEGER(KIND=i_def), pointer :: map_w1(:,:) => null(), "
- "map_w2(:,:) => null(), map_w3(:,:) => null()\n"
- " INTEGER(KIND=i_def) ndf_w1, undf_w1, ndf_w2, undf_w2, ndf_w3, "
- "undf_w3\n"
- " INTEGER(KIND=i_def) max_halo_depth_mesh\n"
- " TYPE(mesh_type), pointer :: mesh => null()\n"
- " !\n"
- " ! Initialise field and/or operator proxies\n"
- " !\n"
- " f1_proxy = f1%get_proxy()\n"
- " f1_data => f1_proxy%data\n"
- " f2_proxy = f2%get_proxy()\n"
- " f2_data => f2_proxy%data\n"
- " m1_proxy = m1%get_proxy()\n"
- " m1_data => m1_proxy%data\n"
- " m2_proxy = m2%get_proxy()\n"
- " m2_data => m2_proxy%data\n"
- " !\n"
- " ! Initialise number of layers\n"
- " !\n"
- " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
- " !\n"
- " ! Create a mesh object\n"
- " !\n"
- " mesh => f1_proxy%vspace%get_mesh()\n"
- " max_halo_depth_mesh = mesh%get_halo_depth()\n"
- " !\n"
- " ! Look-up dofmaps for each function space\n"
- " !\n"
- " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
- " map_w2 => f2_proxy%vspace%get_whole_dofmap()\n"
- " map_w3 => m2_proxy%vspace%get_whole_dofmap()\n"
- " !\n"
- " ! Initialise number of DoFs for w1\n"
- " !\n"
- " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
- " undf_w1 = f1_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2\n"
- " !\n"
- " ndf_w2 = f2_proxy%vspace%get_ndf()\n"
- " undf_w2 = f2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w3\n"
- " !\n"
- " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
- " undf_w3 = m2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Set-up all of the loop bounds\n"
- " !\n"
- " loop0_start = 1\n"
- " loop0_stop = mesh%get_last_halo_cell(1)\n"
- " !\n"
- " ! Call kernels and communication routines\n"
- " !\n"
- " IF (f1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_code(nlayers_f1, a, f1_data, f2_data,"
+ " subroutine invoke_0_testkern_type(a, f1, f2, m1, m2)\n"
+ " use mesh_mod, only : mesh_type\n"
+ " use testkern_mod, only : testkern_code\n"
+ " real(kind=r_def), intent(in) :: a\n"
+ " type(field_type), intent(in) :: f1\n"
+ " type(field_type), intent(in) :: f2\n"
+ " type(field_type), intent(in) :: m1\n"
+ " type(field_type), intent(in) :: m2\n"
+ " integer(kind=i_def) :: cell\n"
+ " integer(kind=i_def) :: loop0_start\n"
+ " integer(kind=i_def) :: loop0_stop\n"
+ " type(mesh_type), pointer :: mesh => null()\n"
+ " integer(kind=i_def) :: max_halo_depth_mesh\n"
+ " real(kind=r_def), pointer, dimension(:) :: f1_data => null()\n"
+ " real(kind=r_def), pointer, dimension(:) :: f2_data => null()\n"
+ " real(kind=r_def), pointer, dimension(:) :: m1_data => null()\n"
+ " real(kind=r_def), pointer, dimension(:) :: m2_data => null()\n"
+ " integer(kind=i_def) :: nlayers_f1\n"
+ " integer(kind=i_def) :: ndf_w1\n"
+ " integer(kind=i_def) :: undf_w1\n"
+ " integer(kind=i_def) :: ndf_w2\n"
+ " integer(kind=i_def) :: undf_w2\n"
+ " integer(kind=i_def) :: ndf_w3\n"
+ " integer(kind=i_def) :: undf_w3\n"
+ " integer(kind=i_def), pointer :: map_w1(:,:) => null()\n"
+ " integer(kind=i_def), pointer :: map_w2(:,:) => null()\n"
+ " integer(kind=i_def), pointer :: map_w3(:,:) => null()\n"
+ " type(field_proxy_type) :: f1_proxy\n"
+ " type(field_proxy_type) :: f2_proxy\n"
+ " type(field_proxy_type) :: m1_proxy\n"
+ " type(field_proxy_type) :: m2_proxy\n"
+ "\n"
+ " ! Initialise field and/or operator proxies\n"
+ " f1_proxy = f1%get_proxy()\n"
+ " f1_data => f1_proxy%data\n"
+ " f2_proxy = f2%get_proxy()\n"
+ " f2_data => f2_proxy%data\n"
+ " m1_proxy = m1%get_proxy()\n"
+ " m1_data => m1_proxy%data\n"
+ " m2_proxy = m2%get_proxy()\n"
+ " m2_data => m2_proxy%data\n"
+ "\n"
+ " ! Initialise number of layers\n"
+ " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
+ "\n"
+ " ! Create a mesh object\n"
+ " mesh => f1_proxy%vspace%get_mesh()\n"
+ " max_halo_depth_mesh = mesh%get_halo_depth()\n"
+ "\n"
+ " ! Look-up dofmaps for each function space\n"
+ " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2 => f2_proxy%vspace%get_whole_dofmap()\n"
+ " map_w3 => m2_proxy%vspace%get_whole_dofmap()\n"
+ "\n"
+ " ! Initialise number of DoFs for w1\n"
+ " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
+ " undf_w1 = f1_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2\n"
+ " ndf_w2 = f2_proxy%vspace%get_ndf()\n"
+ " undf_w2 = f2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w3\n"
+ " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
+ " undf_w3 = m2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Set-up all of the loop bounds\n"
+ " loop0_start = 1\n"
+ " loop0_stop = mesh%get_last_halo_cell(1)\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " if (f1_proxy%is_dirty(depth=1)) then\n"
+ " call f1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f2_proxy%is_dirty(depth=1)) then\n"
+ " call f2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m1_proxy%is_dirty(depth=1)) then\n"
+ " call m1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m2_proxy%is_dirty(depth=1)) then\n"
+ " call m2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_code(nlayers_f1, a, f1_data, f2_data,"
" m1_data, m2_data, ndf_w1, undf_w1, map_w1(:,cell), "
"ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, undf_w3, map_w3(:,cell))\n")
assert expected in generated_code
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_int_scalar(tmpdir):
@@ -167,97 +169,99 @@ def test_int_scalar(tmpdir):
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
generated_code = str(psy.gen)
- assert LFRicBuild(tmpdir).code_compiles(psy)
-
expected = (
- " SUBROUTINE invoke_0_testkern_one_int_scalar_type"
+ " subroutine invoke_0_testkern_one_int_scalar_type"
"(f1, iflag, f2, m1, m2)\n"
- " USE testkern_one_int_scalar_mod, ONLY: "
+ " use mesh_mod, only : mesh_type\n"
+ " use testkern_one_int_scalar_mod, only : "
"testkern_one_int_scalar_code\n"
- " USE mesh_mod, ONLY: mesh_type\n"
- " INTEGER(KIND=i_def), intent(in) :: iflag\n"
- " TYPE(field_type), intent(in) :: f1, f2, m1, m2\n"
- " INTEGER(KIND=i_def) cell\n"
- " INTEGER(KIND=i_def) loop0_start, loop0_stop\n"
- " INTEGER(KIND=i_def) nlayers_f1\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m1_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f1_data => null()\n"
- " TYPE(field_proxy_type) f1_proxy, f2_proxy, m1_proxy, m2_proxy\n"
- " INTEGER(KIND=i_def), pointer :: map_w1(:,:) => null(), "
- "map_w2(:,:) => null(), map_w3(:,:) => null()\n"
- " INTEGER(KIND=i_def) ndf_w1, undf_w1, ndf_w2, undf_w2, ndf_w3, "
- "undf_w3\n"
- " INTEGER(KIND=i_def) max_halo_depth_mesh\n"
- " TYPE(mesh_type), pointer :: mesh => null()\n"
- " !\n"
- " ! Initialise field and/or operator proxies\n"
- " !\n"
- " f1_proxy = f1%get_proxy()\n"
- " f1_data => f1_proxy%data\n"
- " f2_proxy = f2%get_proxy()\n"
- " f2_data => f2_proxy%data\n"
- " m1_proxy = m1%get_proxy()\n"
- " m1_data => m1_proxy%data\n"
- " m2_proxy = m2%get_proxy()\n"
- " m2_data => m2_proxy%data\n"
- " !\n"
- " ! Initialise number of layers\n"
- " !\n"
- " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
- " !\n"
- " ! Create a mesh object\n"
- " !\n"
- " mesh => f1_proxy%vspace%get_mesh()\n"
- " max_halo_depth_mesh = mesh%get_halo_depth()\n"
- " !\n"
- " ! Look-up dofmaps for each function space\n"
- " !\n"
- " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
- " map_w2 => f2_proxy%vspace%get_whole_dofmap()\n"
- " map_w3 => m2_proxy%vspace%get_whole_dofmap()\n"
- " !\n"
- " ! Initialise number of DoFs for w1\n"
- " !\n"
- " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
- " undf_w1 = f1_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2\n"
- " !\n"
- " ndf_w2 = f2_proxy%vspace%get_ndf()\n"
- " undf_w2 = f2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w3\n"
- " !\n"
- " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
- " undf_w3 = m2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Set-up all of the loop bounds\n"
- " !\n"
- " loop0_start = 1\n"
- " loop0_stop = mesh%get_last_halo_cell(1)\n"
- " !\n"
- " ! Call kernels and communication routines\n"
- " !\n"
- " IF (f1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_one_int_scalar_code(nlayers_f1, f1_data, "
+ " type(field_type), intent(in) :: f1\n"
+ " integer(kind=i_def), intent(in) :: iflag\n"
+ " type(field_type), intent(in) :: f2\n"
+ " type(field_type), intent(in) :: m1\n"
+ " type(field_type), intent(in) :: m2\n"
+ " integer(kind=i_def) :: cell\n"
+ " integer(kind=i_def) :: loop0_start\n"
+ " integer(kind=i_def) :: loop0_stop\n"
+ " type(mesh_type), pointer :: mesh => null()\n"
+ " integer(kind=i_def) :: max_halo_depth_mesh\n"
+ " real(kind=r_def), pointer, dimension(:) :: f1_data => null()\n"
+ " real(kind=r_def), pointer, dimension(:) :: f2_data => null()\n"
+ " real(kind=r_def), pointer, dimension(:) :: m1_data => null()\n"
+ " real(kind=r_def), pointer, dimension(:) :: m2_data => null()\n"
+ " integer(kind=i_def) :: nlayers_f1\n"
+ " integer(kind=i_def) :: ndf_w1\n"
+ " integer(kind=i_def) :: undf_w1\n"
+ " integer(kind=i_def) :: ndf_w2\n"
+ " integer(kind=i_def) :: undf_w2\n"
+ " integer(kind=i_def) :: ndf_w3\n"
+ " integer(kind=i_def) :: undf_w3\n"
+ " integer(kind=i_def), pointer :: map_w1(:,:) => null()\n"
+ " integer(kind=i_def), pointer :: map_w2(:,:) => null()\n"
+ " integer(kind=i_def), pointer :: map_w3(:,:) => null()\n"
+ " type(field_proxy_type) :: f1_proxy\n"
+ " type(field_proxy_type) :: f2_proxy\n"
+ " type(field_proxy_type) :: m1_proxy\n"
+ " type(field_proxy_type) :: m2_proxy\n"
+ "\n"
+ " ! Initialise field and/or operator proxies\n"
+ " f1_proxy = f1%get_proxy()\n"
+ " f1_data => f1_proxy%data\n"
+ " f2_proxy = f2%get_proxy()\n"
+ " f2_data => f2_proxy%data\n"
+ " m1_proxy = m1%get_proxy()\n"
+ " m1_data => m1_proxy%data\n"
+ " m2_proxy = m2%get_proxy()\n"
+ " m2_data => m2_proxy%data\n"
+ "\n"
+ " ! Initialise number of layers\n"
+ " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
+ "\n"
+ " ! Create a mesh object\n"
+ " mesh => f1_proxy%vspace%get_mesh()\n"
+ " max_halo_depth_mesh = mesh%get_halo_depth()\n"
+ "\n"
+ " ! Look-up dofmaps for each function space\n"
+ " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2 => f2_proxy%vspace%get_whole_dofmap()\n"
+ " map_w3 => m2_proxy%vspace%get_whole_dofmap()\n"
+ "\n"
+ " ! Initialise number of DoFs for w1\n"
+ " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
+ " undf_w1 = f1_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2\n"
+ " ndf_w2 = f2_proxy%vspace%get_ndf()\n"
+ " undf_w2 = f2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w3\n"
+ " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
+ " undf_w3 = m2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Set-up all of the loop bounds\n"
+ " loop0_start = 1\n"
+ " loop0_stop = mesh%get_last_halo_cell(1)\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " if (f1_proxy%is_dirty(depth=1)) then\n"
+ " call f1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f2_proxy%is_dirty(depth=1)) then\n"
+ " call f2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m1_proxy%is_dirty(depth=1)) then\n"
+ " call m1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m2_proxy%is_dirty(depth=1)) then\n"
+ " call m2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_one_int_scalar_code(nlayers_f1, f1_data, "
"iflag, f2_data, m1_data, m2_data, ndf_w1, undf_w1, "
"map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, undf_w3, "
"map_w3(:,cell))\n")
assert expected in generated_code
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_two_real_scalars(tmpdir):
@@ -272,97 +276,100 @@ def test_two_real_scalars(tmpdir):
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
generated_code = str(psy.gen)
- assert LFRicBuild(tmpdir).code_compiles(psy)
-
expected = (
- " SUBROUTINE invoke_0_testkern_two_real_scalars_type(a, f1, f2, "
+ " subroutine invoke_0_testkern_two_real_scalars_type(a, f1, f2, "
"m1, m2, b)\n"
- " USE testkern_two_real_scalars_mod, ONLY: "
+ " use mesh_mod, only : mesh_type\n"
+ " use testkern_two_real_scalars_mod, only : "
"testkern_two_real_scalars_code\n"
- " USE mesh_mod, ONLY: mesh_type\n"
- " REAL(KIND=r_def), intent(in) :: a, b\n"
- " TYPE(field_type), intent(in) :: f1, f2, m1, m2\n"
- " INTEGER(KIND=i_def) cell\n"
- " INTEGER(KIND=i_def) loop0_start, loop0_stop\n"
- " INTEGER(KIND=i_def) nlayers_f1\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m1_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f1_data => null()\n"
- " TYPE(field_proxy_type) f1_proxy, f2_proxy, m1_proxy, m2_proxy\n"
- " INTEGER(KIND=i_def), pointer :: map_w1(:,:) => null(), "
- "map_w2(:,:) => null(), map_w3(:,:) => null()\n"
- " INTEGER(KIND=i_def) ndf_w1, undf_w1, ndf_w2, undf_w2, ndf_w3, "
- "undf_w3\n"
- " INTEGER(KIND=i_def) max_halo_depth_mesh\n"
- " TYPE(mesh_type), pointer :: mesh => null()\n"
- " !\n"
- " ! Initialise field and/or operator proxies\n"
- " !\n"
- " f1_proxy = f1%get_proxy()\n"
- " f1_data => f1_proxy%data\n"
- " f2_proxy = f2%get_proxy()\n"
- " f2_data => f2_proxy%data\n"
- " m1_proxy = m1%get_proxy()\n"
- " m1_data => m1_proxy%data\n"
- " m2_proxy = m2%get_proxy()\n"
- " m2_data => m2_proxy%data\n"
- " !\n"
- " ! Initialise number of layers\n"
- " !\n"
- " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
- " !\n"
- " ! Create a mesh object\n"
- " !\n"
- " mesh => f1_proxy%vspace%get_mesh()\n"
- " max_halo_depth_mesh = mesh%get_halo_depth()\n"
- " !\n"
- " ! Look-up dofmaps for each function space\n"
- " !\n"
- " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
- " map_w2 => f2_proxy%vspace%get_whole_dofmap()\n"
- " map_w3 => m2_proxy%vspace%get_whole_dofmap()\n"
- " !\n"
- " ! Initialise number of DoFs for w1\n"
- " !\n"
- " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
- " undf_w1 = f1_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2\n"
- " !\n"
- " ndf_w2 = f2_proxy%vspace%get_ndf()\n"
- " undf_w2 = f2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w3\n"
- " !\n"
- " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
- " undf_w3 = m2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Set-up all of the loop bounds\n"
- " !\n"
- " loop0_start = 1\n"
- " loop0_stop = mesh%get_last_halo_cell(1)\n"
- " !\n"
- " ! Call kernels and communication routines\n"
- " !\n"
- " IF (f1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_two_real_scalars_code(nlayers_f1, a, "
+ " real(kind=r_def), intent(in) :: a\n"
+ " type(field_type), intent(in) :: f1\n"
+ " type(field_type), intent(in) :: f2\n"
+ " type(field_type), intent(in) :: m1\n"
+ " type(field_type), intent(in) :: m2\n"
+ " real(kind=r_def), intent(in) :: b\n"
+ " integer(kind=i_def) :: cell\n"
+ " integer(kind=i_def) :: loop0_start\n"
+ " integer(kind=i_def) :: loop0_stop\n"
+ " type(mesh_type), pointer :: mesh => null()\n"
+ " integer(kind=i_def) :: max_halo_depth_mesh\n"
+ " real(kind=r_def), pointer, dimension(:) :: f1_data => null()\n"
+ " real(kind=r_def), pointer, dimension(:) :: f2_data => null()\n"
+ " real(kind=r_def), pointer, dimension(:) :: m1_data => null()\n"
+ " real(kind=r_def), pointer, dimension(:) :: m2_data => null()\n"
+ " integer(kind=i_def) :: nlayers_f1\n"
+ " integer(kind=i_def) :: ndf_w1\n"
+ " integer(kind=i_def) :: undf_w1\n"
+ " integer(kind=i_def) :: ndf_w2\n"
+ " integer(kind=i_def) :: undf_w2\n"
+ " integer(kind=i_def) :: ndf_w3\n"
+ " integer(kind=i_def) :: undf_w3\n"
+ " integer(kind=i_def), pointer :: map_w1(:,:) => null()\n"
+ " integer(kind=i_def), pointer :: map_w2(:,:) => null()\n"
+ " integer(kind=i_def), pointer :: map_w3(:,:) => null()\n"
+ " type(field_proxy_type) :: f1_proxy\n"
+ " type(field_proxy_type) :: f2_proxy\n"
+ " type(field_proxy_type) :: m1_proxy\n"
+ " type(field_proxy_type) :: m2_proxy\n"
+ "\n"
+ " ! Initialise field and/or operator proxies\n"
+ " f1_proxy = f1%get_proxy()\n"
+ " f1_data => f1_proxy%data\n"
+ " f2_proxy = f2%get_proxy()\n"
+ " f2_data => f2_proxy%data\n"
+ " m1_proxy = m1%get_proxy()\n"
+ " m1_data => m1_proxy%data\n"
+ " m2_proxy = m2%get_proxy()\n"
+ " m2_data => m2_proxy%data\n"
+ "\n"
+ " ! Initialise number of layers\n"
+ " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
+ "\n"
+ " ! Create a mesh object\n"
+ " mesh => f1_proxy%vspace%get_mesh()\n"
+ " max_halo_depth_mesh = mesh%get_halo_depth()\n"
+ "\n"
+ " ! Look-up dofmaps for each function space\n"
+ " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2 => f2_proxy%vspace%get_whole_dofmap()\n"
+ " map_w3 => m2_proxy%vspace%get_whole_dofmap()\n"
+ "\n"
+ " ! Initialise number of DoFs for w1\n"
+ " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
+ " undf_w1 = f1_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2\n"
+ " ndf_w2 = f2_proxy%vspace%get_ndf()\n"
+ " undf_w2 = f2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w3\n"
+ " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
+ " undf_w3 = m2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Set-up all of the loop bounds\n"
+ " loop0_start = 1\n"
+ " loop0_stop = mesh%get_last_halo_cell(1)\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " if (f1_proxy%is_dirty(depth=1)) then\n"
+ " call f1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f2_proxy%is_dirty(depth=1)) then\n"
+ " call f2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m1_proxy%is_dirty(depth=1)) then\n"
+ " call m1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m2_proxy%is_dirty(depth=1)) then\n"
+ " call m2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_two_real_scalars_code(nlayers_f1, a, "
"f1_data, f2_data, m1_data, m2_data, "
"b, ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, "
"map_w2(:,cell), ndf_w3, undf_w3, map_w3(:,cell))\n")
assert expected in generated_code
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_two_int_scalars(tmpdir):
@@ -376,107 +383,111 @@ def test_two_int_scalars(tmpdir):
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
generated_code = str(psy.gen)
- assert LFRicBuild(tmpdir).code_compiles(psy)
-
expected = (
- " SUBROUTINE invoke_0(iflag, f1, f2, m1, m2, istep)\n"
- " USE testkern_two_int_scalars_mod, ONLY: "
+ " subroutine invoke_0(iflag, f1, f2, m1, m2, istep)\n"
+ " use mesh_mod, only : mesh_type\n"
+ " use testkern_two_int_scalars_mod, only : "
"testkern_two_int_scalars_code\n"
- " USE mesh_mod, ONLY: mesh_type\n"
- " INTEGER(KIND=i_def), intent(in) :: iflag, istep\n"
- " TYPE(field_type), intent(in) :: f1, f2, m1, m2\n"
- " INTEGER(KIND=i_def) cell\n"
- " INTEGER(KIND=i_def) loop1_start, loop1_stop\n"
- " INTEGER(KIND=i_def) loop0_start, loop0_stop\n"
- " INTEGER(KIND=i_def) nlayers_f1\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m1_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f1_data => null()\n"
- " TYPE(field_proxy_type) f1_proxy, f2_proxy, m1_proxy, m2_proxy\n"
- " INTEGER(KIND=i_def), pointer :: map_w1(:,:) => null(), "
- "map_w2(:,:) => null(), map_w3(:,:) => null()\n"
- " INTEGER(KIND=i_def) ndf_w1, undf_w1, ndf_w2, undf_w2, ndf_w3, "
- "undf_w3\n"
- " INTEGER(KIND=i_def) max_halo_depth_mesh\n"
- " TYPE(mesh_type), pointer :: mesh => null()\n"
- " !\n"
- " ! Initialise field and/or operator proxies\n"
- " !\n"
- " f1_proxy = f1%get_proxy()\n"
- " f1_data => f1_proxy%data\n"
- " f2_proxy = f2%get_proxy()\n"
- " f2_data => f2_proxy%data\n"
- " m1_proxy = m1%get_proxy()\n"
- " m1_data => m1_proxy%data\n"
- " m2_proxy = m2%get_proxy()\n"
- " m2_data => m2_proxy%data\n"
- " !\n"
- " ! Initialise number of layers\n"
- " !\n"
- " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
- " !\n"
- " ! Create a mesh object\n"
- " !\n"
- " mesh => f1_proxy%vspace%get_mesh()\n"
- " max_halo_depth_mesh = mesh%get_halo_depth()\n"
- " !\n"
- " ! Look-up dofmaps for each function space\n"
- " !\n"
- " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
- " map_w2 => f2_proxy%vspace%get_whole_dofmap()\n"
- " map_w3 => m2_proxy%vspace%get_whole_dofmap()\n"
- " !\n"
- " ! Initialise number of DoFs for w1\n"
- " !\n"
- " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
- " undf_w1 = f1_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2\n"
- " !\n"
- " ndf_w2 = f2_proxy%vspace%get_ndf()\n"
- " undf_w2 = f2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w3\n"
- " !\n"
- " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
- " undf_w3 = m2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Set-up all of the loop bounds\n"
- " !\n"
- " loop0_start = 1\n"
- " loop0_stop = mesh%get_last_halo_cell(1)\n"
- " loop1_start = 1\n"
- " loop1_stop = mesh%get_last_halo_cell(1)\n"
- " !\n"
- " ! Call kernels and communication routines\n"
- " !\n"
- " IF (f1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_two_int_scalars_code(nlayers_f1, iflag, "
+ " integer(kind=i_def), intent(in) :: iflag\n"
+ " type(field_type), intent(in) :: f1\n"
+ " type(field_type), intent(in) :: f2\n"
+ " type(field_type), intent(in) :: m1\n"
+ " type(field_type), intent(in) :: m2\n"
+ " integer(kind=i_def), intent(in) :: istep\n"
+ " integer(kind=i_def) :: cell\n"
+ " integer(kind=i_def) :: loop0_start\n"
+ " integer(kind=i_def) :: loop0_stop\n"
+ " integer(kind=i_def) :: loop1_start\n"
+ " integer(kind=i_def) :: loop1_stop\n"
+ " type(mesh_type), pointer :: mesh => null()\n"
+ " integer(kind=i_def) :: max_halo_depth_mesh\n"
+ " real(kind=r_def), pointer, dimension(:) :: f1_data => null()\n"
+ " real(kind=r_def), pointer, dimension(:) :: f2_data => null()\n"
+ " real(kind=r_def), pointer, dimension(:) :: m1_data => null()\n"
+ " real(kind=r_def), pointer, dimension(:) :: m2_data => null()\n"
+ " integer(kind=i_def) :: nlayers_f1\n"
+ " integer(kind=i_def) :: ndf_w1\n"
+ " integer(kind=i_def) :: undf_w1\n"
+ " integer(kind=i_def) :: ndf_w2\n"
+ " integer(kind=i_def) :: undf_w2\n"
+ " integer(kind=i_def) :: ndf_w3\n"
+ " integer(kind=i_def) :: undf_w3\n"
+ " integer(kind=i_def), pointer :: map_w1(:,:) => null()\n"
+ " integer(kind=i_def), pointer :: map_w2(:,:) => null()\n"
+ " integer(kind=i_def), pointer :: map_w3(:,:) => null()\n"
+ " type(field_proxy_type) :: f1_proxy\n"
+ " type(field_proxy_type) :: f2_proxy\n"
+ " type(field_proxy_type) :: m1_proxy\n"
+ " type(field_proxy_type) :: m2_proxy\n"
+ "\n"
+ " ! Initialise field and/or operator proxies\n"
+ " f1_proxy = f1%get_proxy()\n"
+ " f1_data => f1_proxy%data\n"
+ " f2_proxy = f2%get_proxy()\n"
+ " f2_data => f2_proxy%data\n"
+ " m1_proxy = m1%get_proxy()\n"
+ " m1_data => m1_proxy%data\n"
+ " m2_proxy = m2%get_proxy()\n"
+ " m2_data => m2_proxy%data\n"
+ "\n"
+ " ! Initialise number of layers\n"
+ " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
+ "\n"
+ " ! Create a mesh object\n"
+ " mesh => f1_proxy%vspace%get_mesh()\n"
+ " max_halo_depth_mesh = mesh%get_halo_depth()\n"
+ "\n"
+ " ! Look-up dofmaps for each function space\n"
+ " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2 => f2_proxy%vspace%get_whole_dofmap()\n"
+ " map_w3 => m2_proxy%vspace%get_whole_dofmap()\n"
+ "\n"
+ " ! Initialise number of DoFs for w1\n"
+ " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
+ " undf_w1 = f1_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2\n"
+ " ndf_w2 = f2_proxy%vspace%get_ndf()\n"
+ " undf_w2 = f2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w3\n"
+ " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
+ " undf_w3 = m2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Set-up all of the loop bounds\n"
+ " loop0_start = 1\n"
+ " loop0_stop = mesh%get_last_halo_cell(1)\n"
+ " loop1_start = 1\n"
+ " loop1_stop = mesh%get_last_halo_cell(1)\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " if (f1_proxy%is_dirty(depth=1)) then\n"
+ " call f1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f2_proxy%is_dirty(depth=1)) then\n"
+ " call f2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m1_proxy%is_dirty(depth=1)) then\n"
+ " call m1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m2_proxy%is_dirty(depth=1)) then\n"
+ " call m2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_two_int_scalars_code(nlayers_f1, iflag, "
"f1_data, f2_data, m1_data, m2_data, istep, "
"ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), "
"ndf_w3, undf_w3, map_w3(:,cell))\n")
assert expected in generated_code
# Check that we pass iflag by value in the second kernel call
expected = (
- " DO cell = loop1_start, loop1_stop, 1\n"
- " CALL testkern_two_int_scalars_code(nlayers_f1, 1, "
+ " do cell = loop1_start, loop1_stop, 1\n"
+ " call testkern_two_int_scalars_code(nlayers_f1, 1, "
"f1_data, f2_data, m1_data, m2_data, iflag, "
"ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), "
"ndf_w3, undf_w3, map_w3(:,cell))\n")
assert expected in generated_code
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_three_scalars(tmpdir):
@@ -489,102 +500,106 @@ def test_three_scalars(tmpdir):
api=TEST_API)
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
- assert LFRicBuild(tmpdir).code_compiles(psy)
-
generated_code = str(psy.gen)
expected = (
- " MODULE single_invoke_psy\n"
- " USE constants_mod, ONLY: r_def, l_def, i_def\n"
- " USE field_mod, ONLY: field_type, field_proxy_type\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_testkern_three_scalars_type(a, f1, f2, "
- "m1, m2, lswitch, istep)\n"
- " USE testkern_three_scalars_mod, ONLY: "
+ "module single_invoke_psy\n"
+ " use constants_mod\n"
+ " use field_mod, only : field_proxy_type, field_type\n"
+ " implicit none\n"
+ " public\n"
+ "\n"
+ " contains\n"
+ " subroutine invoke_0_testkern_three_scalars_type(a, f1, f2, m1, "
+ "m2, lswitch, istep)\n"
+ " use mesh_mod, only : mesh_type\n"
+ " use testkern_three_scalars_mod, only : "
"testkern_three_scalars_code\n"
- " USE mesh_mod, ONLY: mesh_type\n"
- " REAL(KIND=r_def), intent(in) :: a\n"
- " INTEGER(KIND=i_def), intent(in) :: istep\n"
- " LOGICAL(KIND=l_def), intent(in) :: lswitch\n"
- " TYPE(field_type), intent(in) :: f1, f2, m1, m2\n"
- " INTEGER(KIND=i_def) cell\n"
- " INTEGER(KIND=i_def) loop0_start, loop0_stop\n"
- " INTEGER(KIND=i_def) nlayers_f1\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m1_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f1_data => null()\n"
- " TYPE(field_proxy_type) f1_proxy, f2_proxy, m1_proxy, m2_proxy\n"
- " INTEGER(KIND=i_def), pointer :: map_w1(:,:) => null(), "
- "map_w2(:,:) => null(), map_w3(:,:) => null()\n"
- " INTEGER(KIND=i_def) ndf_w1, undf_w1, ndf_w2, undf_w2, ndf_w3, "
- "undf_w3\n"
- " INTEGER(KIND=i_def) max_halo_depth_mesh\n"
- " TYPE(mesh_type), pointer :: mesh => null()\n"
- " !\n"
- " ! Initialise field and/or operator proxies\n"
- " !\n"
- " f1_proxy = f1%get_proxy()\n"
- " f1_data => f1_proxy%data\n"
- " f2_proxy = f2%get_proxy()\n"
- " f2_data => f2_proxy%data\n"
- " m1_proxy = m1%get_proxy()\n"
- " m1_data => m1_proxy%data\n"
- " m2_proxy = m2%get_proxy()\n"
- " m2_data => m2_proxy%data\n"
- " !\n"
- " ! Initialise number of layers\n"
- " !\n"
- " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
- " !\n"
- " ! Create a mesh object\n"
- " !\n"
- " mesh => f1_proxy%vspace%get_mesh()\n"
- " max_halo_depth_mesh = mesh%get_halo_depth()\n"
- " !\n"
- " ! Look-up dofmaps for each function space\n"
- " !\n"
- " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
- " map_w2 => f2_proxy%vspace%get_whole_dofmap()\n"
- " map_w3 => m2_proxy%vspace%get_whole_dofmap()\n"
- " !\n"
- " ! Initialise number of DoFs for w1\n"
- " !\n"
- " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
- " undf_w1 = f1_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2\n"
- " !\n"
- " ndf_w2 = f2_proxy%vspace%get_ndf()\n"
- " undf_w2 = f2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w3\n"
- " !\n"
- " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
- " undf_w3 = m2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Set-up all of the loop bounds\n"
- " !\n"
- " loop0_start = 1\n"
- " loop0_stop = mesh%get_last_halo_cell(1)\n"
- " !\n"
- " ! Call kernels and communication routines\n"
- " !\n"
- " IF (f1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_three_scalars_code(nlayers_f1, a, f1_data, "
- "f2_data, m1_data, m2_data, lswitch, istep, ndf_w1, "
- "undf_w1, map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, "
- "undf_w3, map_w3(:,cell))\n")
+ " real(kind=r_def), intent(in) :: a\n"
+ " type(field_type), intent(in) :: f1\n"
+ " type(field_type), intent(in) :: f2\n"
+ " type(field_type), intent(in) :: m1\n"
+ " type(field_type), intent(in) :: m2\n"
+ " logical(kind=l_def), intent(in) :: lswitch\n"
+ " integer(kind=i_def), intent(in) :: istep\n"
+ " integer(kind=i_def) :: cell\n"
+ " integer(kind=i_def) :: loop0_start\n"
+ " integer(kind=i_def) :: loop0_stop\n"
+ " type(mesh_type), pointer :: mesh => null()\n"
+ " integer(kind=i_def) :: max_halo_depth_mesh\n"
+ " real(kind=r_def), pointer, dimension(:) :: f1_data => null()\n"
+ " real(kind=r_def), pointer, dimension(:) :: f2_data => null()\n"
+ " real(kind=r_def), pointer, dimension(:) :: m1_data => null()\n"
+ " real(kind=r_def), pointer, dimension(:) :: m2_data => null()\n"
+ " integer(kind=i_def) :: nlayers_f1\n"
+ " integer(kind=i_def) :: ndf_w1\n"
+ " integer(kind=i_def) :: undf_w1\n"
+ " integer(kind=i_def) :: ndf_w2\n"
+ " integer(kind=i_def) :: undf_w2\n"
+ " integer(kind=i_def) :: ndf_w3\n"
+ " integer(kind=i_def) :: undf_w3\n"
+ " integer(kind=i_def), pointer :: map_w1(:,:) => null()\n"
+ " integer(kind=i_def), pointer :: map_w2(:,:) => null()\n"
+ " integer(kind=i_def), pointer :: map_w3(:,:) => null()\n"
+ " type(field_proxy_type) :: f1_proxy\n"
+ " type(field_proxy_type) :: f2_proxy\n"
+ " type(field_proxy_type) :: m1_proxy\n"
+ " type(field_proxy_type) :: m2_proxy\n"
+ "\n"
+ " ! Initialise field and/or operator proxies\n"
+ " f1_proxy = f1%get_proxy()\n"
+ " f1_data => f1_proxy%data\n"
+ " f2_proxy = f2%get_proxy()\n"
+ " f2_data => f2_proxy%data\n"
+ " m1_proxy = m1%get_proxy()\n"
+ " m1_data => m1_proxy%data\n"
+ " m2_proxy = m2%get_proxy()\n"
+ " m2_data => m2_proxy%data\n"
+ "\n"
+ " ! Initialise number of layers\n"
+ " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
+ "\n"
+ " ! Create a mesh object\n"
+ " mesh => f1_proxy%vspace%get_mesh()\n"
+ " max_halo_depth_mesh = mesh%get_halo_depth()\n"
+ "\n"
+ " ! Look-up dofmaps for each function space\n"
+ " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2 => f2_proxy%vspace%get_whole_dofmap()\n"
+ " map_w3 => m2_proxy%vspace%get_whole_dofmap()\n"
+ "\n"
+ " ! Initialise number of DoFs for w1\n"
+ " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
+ " undf_w1 = f1_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2\n"
+ " ndf_w2 = f2_proxy%vspace%get_ndf()\n"
+ " undf_w2 = f2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w3\n"
+ " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
+ " undf_w3 = m2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Set-up all of the loop bounds\n"
+ " loop0_start = 1\n"
+ " loop0_stop = mesh%get_last_halo_cell(1)\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " if (f1_proxy%is_dirty(depth=1)) then\n"
+ " call f1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f2_proxy%is_dirty(depth=1)) then\n"
+ " call f2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m1_proxy%is_dirty(depth=1)) then\n"
+ " call m1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m2_proxy%is_dirty(depth=1)) then\n"
+ " call m2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_three_scalars_code(nlayers_f1, a, f1_data, "
+ "f2_data, m1_data, m2_data, lswitch, istep, ndf_w1, undf_w1, "
+ "map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, undf_w3,"
+ " map_w3(:,cell))\n")
assert expected in generated_code
+ assert LFRicBuild(tmpdir).code_compiles(psy)
diff --git a/src/psyclone/tests/domain/lfric/lfric_scalar_mdata_test.py b/src/psyclone/tests/domain/lfric/lfric_scalar_mdata_test.py
index 81faad0fbf..df0c2bdbd0 100644
--- a/src/psyclone/tests/domain/lfric/lfric_scalar_mdata_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_scalar_mdata_test.py
@@ -50,10 +50,9 @@
LFRicKern, LFRicKernMetadata,
LFRicScalarArgs)
from psyclone.errors import InternalError, GenerationError
-from psyclone.f2pygen import ModuleGen
from psyclone.parse.algorithm import parse
from psyclone.parse.utils import ParseError
-from psyclone.psyGen import FORTRAN_INTENT_NAMES, PSyFactory
+from psyclone.psyGen import PSyFactory
# Constants
BASE_PATH = os.path.join(
@@ -325,61 +324,13 @@ def test_lfricscalars_call_err1():
scalar_arg = kernel.arguments.args[0]
scalar_arg._intrinsic_type = "double-type"
with pytest.raises(InternalError) as err:
- LFRicScalarArgs(invoke)._invoke_declarations(ModuleGen(name="my_mod"))
+ LFRicScalarArgs(invoke).invoke_declarations()
assert ("Found unsupported intrinsic types for the scalar arguments "
"['a'] to Invoke 'invoke_0_testkern_three_scalars_type'. "
"Supported types are ['real', 'integer', 'logical']."
in str(err.value))
-def test_lfricscalars_call_err2():
- '''Check that LFRicScalarArgs _create_declarations method raises the
- expected internal errors for real, integer and logical scalars if
- neither invoke nor kernel is set.
-
- '''
- _, invoke_info = parse(
- os.path.join(BASE_PATH,
- "1.7_single_invoke_3scalar.f90"),
- api=TEST_API)
- psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
- invoke = psy.invokes.invoke_list[0]
- scalar_args = LFRicScalarArgs(invoke)
- node = ModuleGen("prog")
- # Set up information that _create_declarations requires. Note,
- # this method also calls _create_declarations.
- scalar_args._invoke_declarations(node)
-
- # Sabotage code so that a call to _create declarations raises the
- # required exceptions.
- scalar_args._invoke = None
-
- # The first exception comes from real scalars.
- with pytest.raises(InternalError) as error:
- scalar_args._create_declarations(node)
- assert ("Expected the declaration of real scalar kernel arguments to be "
- "for either an invoke or a kernel stub, but it is neither."
- in str(error.value))
-
- # Remove real scalars so we get the exception for integer scalars.
- for intent in FORTRAN_INTENT_NAMES:
- scalar_args._real_scalars[intent] = None
- with pytest.raises(InternalError) as error:
- scalar_args._create_declarations(node)
- assert ("Expected the declaration of integer scalar kernel arguments to "
- "be for either an invoke or a kernel stub, but it is neither."
- in str(error.value))
-
- # Remove integer scalars so we get the exception for logical scalars.
- for intent in FORTRAN_INTENT_NAMES:
- scalar_args._integer_scalars[intent] = None
- with pytest.raises(InternalError) as error:
- scalar_args._create_declarations(node)
- assert ("Expected the declaration of logical scalar kernel arguments to "
- "be for either an invoke or a kernel stub, but it is neither."
- in str(error.value))
-
-
def test_lfricscalarargs_mp():
'''Check that the precision of a new scalar integer datatype is
declared in the psy-layer.
@@ -391,7 +342,7 @@ def test_lfricscalarargs_mp():
api=TEST_API)
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
code = str(psy.gen).lower()
- assert "use constants_mod, only: roo_def, r_def, i_def" in code
+ assert "use constants_mod\n" in code
def test_lfricinvoke_uniq_declns_intent_scalar():
diff --git a/src/psyclone/tests/domain/lfric/lfric_scalar_stubgen_test.py b/src/psyclone/tests/domain/lfric/lfric_scalar_stubgen_test.py
index a97f979d32..95e6471afa 100644
--- a/src/psyclone/tests/domain/lfric/lfric_scalar_stubgen_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_scalar_stubgen_test.py
@@ -41,14 +41,12 @@
LFRic scalar arguments.
'''
-from __future__ import absolute_import, print_function
import os
import pytest
from fparser import api as fpapi
from psyclone.domain.lfric import (LFRicConstants, LFRicKern,
LFRicKernMetadata, LFRicScalarArgs)
-from psyclone.f2pygen import ModuleGen
from psyclone.errors import InternalError
from psyclone.gen_kernel_stub import generate
from psyclone.parse.utils import ParseError
@@ -62,7 +60,7 @@
def test_lfricscalars_stub_err():
- ''' Check that LFRicScalarArgs._stub_declarations() raises the
+ ''' Check that LFRicScalarArgs.stub_declarations() raises the
expected internal error if it encounters an unrecognised data
type of a scalar argument when generating a kernel stub.
@@ -77,7 +75,7 @@ def test_lfricscalars_stub_err():
arg = kernel.arguments.args[1]
arg.descriptor._data_type = "gh_invalid_scalar"
with pytest.raises(InternalError) as err:
- LFRicScalarArgs(kernel)._stub_declarations(ModuleGen(name="my_mod"))
+ LFRicScalarArgs(kernel).stub_declarations()
const = LFRicConstants()
assert (f"Found an unsupported data type 'gh_invalid_scalar' for the "
f"scalar argument 'iscalar_2'. Supported types are "
@@ -91,39 +89,40 @@ def test_stub_generate_with_scalars():
os.path.join(BASE_PATH, "testkern_three_scalars_mod.f90"),
api=TEST_API)
- expected = (
- " MODULE testkern_three_scalars_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE testkern_three_scalars_code(nlayers, rscalar_1, "
- "field_2_w1, field_3_w2, field_4_w2, field_5_w3, lscalar_6, "
- "iscalar_7, ndf_w1, undf_w1, map_w1, ndf_w2, undf_w2, map_w2, "
- "ndf_w3, undf_w3, map_w3)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w1\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w1) :: map_w1\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2) :: map_w2\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w3\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w3) :: map_w3\n"
- " INTEGER(KIND=i_def), intent(in) :: undf_w1, undf_w2, undf_w3\n"
- " REAL(KIND=r_def), intent(in) :: rscalar_1\n"
- " INTEGER(KIND=i_def), intent(in) :: iscalar_7\n"
- " LOGICAL(KIND=l_def), intent(in) :: lscalar_6\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w1) :: "
- "field_2_w1\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_w2) :: "
- "field_3_w2\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_w2) :: "
- "field_4_w2\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_w3) :: "
- "field_5_w3\n"
- " END SUBROUTINE testkern_three_scalars_code\n"
- " END MODULE testkern_three_scalars_mod")
-
- assert expected in str(result)
+ expected = """\
+module testkern_three_scalars_mod
+ implicit none
+ public
+
+ contains
+ subroutine testkern_three_scalars_code(nlayers, rscalar_1, field_2_w1, \
+field_3_w2, field_4_w2, field_5_w3, lscalar_6, iscalar_7, ndf_w1, undf_w1, \
+map_w1, ndf_w2, undf_w2, map_w2, ndf_w3, undf_w3, map_w3)
+ use constants_mod
+ integer(kind=i_def), intent(in) :: nlayers
+ integer(kind=i_def), intent(in) :: ndf_w1
+ integer(kind=i_def), dimension(ndf_w1), intent(in) :: map_w1
+ integer(kind=i_def), intent(in) :: ndf_w2
+ integer(kind=i_def), dimension(ndf_w2), intent(in) :: map_w2
+ integer(kind=i_def), intent(in) :: ndf_w3
+ integer(kind=i_def), dimension(ndf_w3), intent(in) :: map_w3
+ integer(kind=i_def), intent(in) :: undf_w1
+ integer(kind=i_def), intent(in) :: undf_w2
+ integer(kind=i_def), intent(in) :: undf_w3
+ real(kind=r_def), intent(in) :: rscalar_1
+ integer(kind=i_def), intent(in) :: iscalar_7
+ logical(kind=l_def), intent(in) :: lscalar_6
+ real(kind=r_def), dimension(undf_w1), intent(inout) :: field_2_w1
+ real(kind=r_def), dimension(undf_w2), intent(in) :: field_3_w2
+ real(kind=r_def), dimension(undf_w2), intent(in) :: field_4_w2
+ real(kind=r_def), dimension(undf_w3), intent(in) :: field_5_w3
+
+
+ end subroutine testkern_three_scalars_code
+
+end module testkern_three_scalars_mod
+"""
+ assert expected == result
def test_stub_generate_with_scalar_sums_err():
diff --git a/src/psyclone/tests/domain/lfric/lfric_stencil_stubgen_test.py b/src/psyclone/tests/domain/lfric/lfric_stencil_stubgen_test.py
index 2c5291f097..450c5aac7c 100644
--- a/src/psyclone/tests/domain/lfric/lfric_stencil_stubgen_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_stencil_stubgen_test.py
@@ -50,7 +50,7 @@
TEST_API = "lfric"
-def test_stub_stencil_extent():
+def test_stub_stencil_extent(fortran_writer):
'''
Check that correct stub code is produced when there is a stencil
access
@@ -60,22 +60,22 @@ def test_stub_stencil_extent():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- generated_code = str(kernel.gen_stub)
+ generated_code = fortran_writer(kernel.gen_stub)
result1 = (
- "SUBROUTINE testkern_stencil_code(nlayers, field_1_w1, "
+ "subroutine testkern_stencil_code(nlayers, field_1_w1, "
"field_2_w2, field_2_stencil_size, field_2_stencil_dofmap, "
"field_3_w2, field_4_w3, ndf_w1, undf_w1, map_w1, ndf_w2, "
"undf_w2, map_w2, ndf_w3, undf_w3, map_w3)")
assert result1 in generated_code
- result2 = "INTEGER(KIND=i_def), intent(in) :: field_2_stencil_size"
+ result2 = "integer(kind=i_def), intent(in) :: field_2_stencil_size"
assert result2 in generated_code
assert (
- "INTEGER(KIND=i_def), intent(in), "
- "dimension(ndf_w2,field_2_stencil_size) :: field_2_stencil_dofmap"
+ "integer(kind=i_def), dimension(ndf_w2,field_2_stencil_size), "
+ "intent(in) :: field_2_stencil_dofmap"
in generated_code)
-def test_stub_cross2d_stencil():
+def test_stub_cross2d_stencil(fortran_writer):
'''
Check that the correct stub code is generated when using a CROSS2D
stencil
@@ -87,25 +87,24 @@ def test_stub_cross2d_stencil():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- generated_code = str(kernel.gen_stub)
- print(generated_code)
+ generated_code = fortran_writer(kernel.gen_stub)
result1 = (
- " SUBROUTINE testkern_stencil_cross2d_code(nlayers, field_1_w1, "
+ " subroutine testkern_stencil_cross2d_code(nlayers, field_1_w1, "
"field_2_w2, field_2_stencil_size, field_2_max_branch_length, "
"field_2_stencil_dofmap, field_3_w2, field_4_w3, ndf_w1, undf_w1, "
"map_w1, ndf_w2, undf_w2, map_w2, ndf_w3, undf_w3, map_w3)"
)
assert result1 in generated_code
- result2 = (
- " INTEGER(KIND=i_def), intent(in), dimension(4) :: "
- "field_2_stencil_size\n"
- " INTEGER(KIND=i_def), intent(in) :: field_2_max_branch_length\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2,"
- "field_2_max_branch_length,4) :: field_2_stencil_dofmap")
- assert result2 in generated_code
+ assert ("integer(kind=i_def), dimension(4), intent(in) :: "
+ "field_2_stencil_size\n" in generated_code)
+ assert ("integer(kind=i_def), intent(in) :: field_2_max_branch_length\n"
+ in generated_code)
+ assert ("integer(kind=i_def), dimension(ndf_w2,field_2_max_branch_length,"
+ "4), intent(in) :: field_2_stencil_dofmap"
+ in generated_code)
-def test_stub_stencil_direction():
+def test_stub_stencil_direction(fortran_writer):
'''
Check that correct stub code is produced when there is a stencil
access which requires a direction argument
@@ -116,22 +115,19 @@ def test_stub_stencil_direction():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- generated_code = str(kernel.gen_stub)
- result1 = (
- " SUBROUTINE testkern_stencil_xory1d_code(nlayers, field_1_w1, "
+ code = fortran_writer(kernel.gen_stub)
+ assert (
+ " subroutine testkern_stencil_xory1d_code(nlayers, field_1_w1, "
"field_2_w2, field_2_stencil_size, field_2_direction, "
"field_2_stencil_dofmap, field_3_w2, field_4_w3, ndf_w1, undf_w1, "
- "map_w1, ndf_w2, undf_w2, map_w2, ndf_w3, undf_w3, map_w3)")
- assert result1 in generated_code
- result2 = (
- " INTEGER(KIND=i_def), intent(in) :: field_2_stencil_size\n"
- " INTEGER(KIND=i_def), intent(in) :: field_2_direction\n"
- " INTEGER(KIND=i_def), intent(in), "
- "dimension(ndf_w2,field_2_stencil_size) :: field_2_stencil_dofmap")
- assert result2 in generated_code
+ "map_w1, ndf_w2, undf_w2, map_w2, ndf_w3, undf_w3, map_w3)" in code)
+ assert "integer(kind=i_def), intent(in) :: field_2_stencil_size\n" in code
+ assert "integer(kind=i_def), intent(in) :: field_2_direction\n" in code
+ assert ("integer(kind=i_def), dimension(ndf_w2,field_2_stencil_size), "
+ "intent(in) :: field_2_stencil_dofmap" in code)
-def test_stub_stencil_vector():
+def test_stub_stencil_vector(fortran_writer):
'''
Check that correct stub code is produced when there is a stencil
access which is a vector
@@ -142,22 +138,19 @@ def test_stub_stencil_vector():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- generated_code = str(kernel.gen_stub)
- result1 = (
- " SUBROUTINE testkern_stencil_vector_code(nlayers, field_1_w0_v1, "
+ code = fortran_writer(kernel.gen_stub)
+ assert (
+ " subroutine testkern_stencil_vector_code(nlayers, field_1_w0_v1, "
"field_1_w0_v2, field_1_w0_v3, field_2_w3_v1, field_2_w3_v2, "
"field_2_w3_v3, field_2_w3_v4, field_2_stencil_size, "
"field_2_stencil_dofmap, ndf_w0, undf_w0, map_w0, ndf_w3, undf_w3, "
- "map_w3)")
- assert result1 in generated_code
- result2 = (
- " INTEGER(KIND=i_def), intent(in) :: field_2_stencil_size\n"
- " INTEGER(KIND=i_def), intent(in), "
- "dimension(ndf_w3,field_2_stencil_size) :: field_2_stencil_dofmap")
- assert result2 in generated_code
+ "map_w3)" in code)
+ assert "integer(kind=i_def), intent(in) :: field_2_stencil_size\n" in code
+ assert ("integer(kind=i_def), dimension(ndf_w3,field_2_stencil_size), "
+ "intent(in) :: field_2_stencil_dofmap" in code)
-def test_stub_stencil_multi():
+def test_stub_stencil_multi(fortran_writer):
'''
Check that correct stub code is produced when there are multiple
stencils
@@ -168,27 +161,25 @@ def test_stub_stencil_multi():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- generated_code = str(kernel.gen_stub)
- result1 = (
- " SUBROUTINE testkern_stencil_multi_code(nlayers, field_1_w1, "
+ code = fortran_writer(kernel.gen_stub)
+ assert (
+ " subroutine testkern_stencil_multi_code(nlayers, field_1_w1, "
"field_2_w2, field_2_stencil_size, field_2_stencil_dofmap, field_3_w2,"
" field_3_stencil_size, field_3_direction, field_3_stencil_dofmap, "
"field_4_w3, field_4_stencil_size, field_4_stencil_dofmap, ndf_w1, "
- "undf_w1, map_w1, ndf_w2, undf_w2, map_w2, ndf_w3, undf_w3, map_w3)")
- assert result1 in generated_code
- result2 = (
- " REAL(KIND=r_def), intent(in), dimension(undf_w2) :: "
- "field_3_w2\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_w3) :: "
- "field_4_w3\n"
- " INTEGER(KIND=i_def), intent(in) :: field_2_stencil_size, "
- "field_3_stencil_size, field_4_stencil_size\n"
- " INTEGER(KIND=i_def), intent(in) :: field_3_direction\n"
- " INTEGER(KIND=i_def), intent(in), "
- "dimension(ndf_w2,field_2_stencil_size) :: field_2_stencil_dofmap\n"
- " INTEGER(KIND=i_def), intent(in), "
- "dimension(ndf_w2,field_3_stencil_size) :: field_3_stencil_dofmap\n"
- " INTEGER(KIND=i_def), intent(in), "
- "dimension(ndf_w3,field_4_stencil_size) :: field_4_stencil_dofmap")
-
- assert result2 in generated_code
+ "undf_w1, map_w1, ndf_w2, undf_w2, map_w2, ndf_w3, undf_w3, map_w3)"
+ in code)
+ assert ("real(kind=r_def), dimension(undf_w2), intent(in) "
+ ":: field_3_w2\n" in code)
+ assert ("real(kind=r_def), dimension(undf_w3), intent(in) :: "
+ "field_4_w3\n" in code)
+ assert "integer(kind=i_def), intent(in) :: field_2_stencil_size" in code
+ assert "integer(kind=i_def), intent(in) :: field_3_stencil_size" in code
+ assert "integer(kind=i_def), intent(in) :: field_4_stencil_size" in code
+ assert "integer(kind=i_def), intent(in) :: field_3_direction\n" in code
+ assert ("integer(kind=i_def), dimension(ndf_w2,field_2_stencil_size), "
+ "intent(in) :: field_2_stencil_dofmap\n" in code)
+ assert ("integer(kind=i_def), dimension(ndf_w2,field_3_stencil_size), "
+ "intent(in) :: field_3_stencil_dofmap\n" in code)
+ assert ("integer(kind=i_def), dimension(ndf_w3,field_4_stencil_size), "
+ "intent(in) :: field_4_stencil_dofmap" in code)
diff --git a/src/psyclone/tests/domain/lfric/lfric_stencil_test.py b/src/psyclone/tests/domain/lfric/lfric_stencil_test.py
index 26d8e459b5..246d47cae6 100644
--- a/src/psyclone/tests/domain/lfric/lfric_stencil_test.py
+++ b/src/psyclone/tests/domain/lfric/lfric_stencil_test.py
@@ -48,7 +48,6 @@
LFRicKernMetadata, LFRicStencils)
from psyclone.dynamo0p3 import DynKernelArguments
from psyclone.errors import GenerationError, InternalError
-from psyclone.f2pygen import ModuleGen
from psyclone.parse.algorithm import parse
from psyclone.parse.utils import ParseError
from psyclone.psyGen import PSyFactory
@@ -225,7 +224,7 @@ def test_single_kernel_any_dscnt_space_stencil(dist_mem, tmpdir):
# Use the same stencil dofmap
output1 = (
- " CALL testkern_same_any_dscnt_space_stencil_code("
+ " call testkern_same_any_dscnt_space_stencil_code("
"nlayers_f0, f0_data, f1_data, f1_stencil_size(cell), "
"f1_stencil_dofmap(:,:,cell), f2_data, f1_stencil_size(cell), "
"f1_stencil_dofmap(:,:,cell), ndf_wtheta, undf_wtheta, "
@@ -234,7 +233,7 @@ def test_single_kernel_any_dscnt_space_stencil(dist_mem, tmpdir):
assert output1 in result
# Use a different stencil dofmap
output2 = (
- " CALL testkern_different_any_dscnt_space_stencil_code("
+ " call testkern_different_any_dscnt_space_stencil_code("
"nlayers_f3, f3_data, f4_data, f4_stencil_size(cell), "
"f4_stencil_dofmap(:,:,cell), f5_data, f5_stencil_size(cell), "
"f5_stencil_dofmap(:,:,cell), ndf_wtheta, undf_wtheta, "
@@ -251,8 +250,8 @@ def test_single_kernel_any_dscnt_space_stencil(dist_mem, tmpdir):
assert "halo_exchange(depth=extent)" not in result
assert "loop0_stop = f0_proxy%vspace%get_ncell()" in result
assert "loop1_stop = f3_proxy%vspace%get_ncell()" in result
- assert "DO cell = loop0_start, loop0_stop" in result
- assert "DO cell = loop1_start, loop1_stop" in result
+ assert "do cell = loop0_start, loop0_stop" in result
+ assert "do cell = loop1_start, loop1_stop" in result
def test_stencil_args_unique_1(dist_mem, tmpdir):
@@ -269,43 +268,37 @@ def test_stencil_args_unique_1(dist_mem, tmpdir):
distributed_memory=dist_mem).create(invoke_info)
result = str(psy.gen)
- assert LFRicBuild(tmpdir).code_compiles(psy)
-
# we use f2_stencil_size for extent and nlayers_f1 for direction
# as arguments
- output1 = (" SUBROUTINE invoke_0_testkern_stencil_xory1d_type(f1, "
- "f2, f3, f4, f2_stencil_size, nlayers_f1)")
- assert output1 in result
- output2 = (" INTEGER(KIND=i_def), intent(in) :: f2_stencil_size\n"
- " INTEGER(KIND=i_def), intent(in) :: nlayers_f1")
- assert output2 in result
- output3 = (" INTEGER(KIND=i_def), pointer :: f2_stencil_size_1(:)"
- " => null()")
- assert output3 in result
+ assert (" subroutine invoke_0_testkern_stencil_xory1d_type(f1, "
+ "f2, f3, f4, f2_stencil_size, nlayers_f1)" in result)
+ assert "integer(kind=i_def), intent(in) :: f2_stencil_size\n" in result
+ assert "integer(kind=i_def), intent(in) :: nlayers_f1\n" in result
+ assert ("integer(kind=i_def), pointer, dimension(:) :: "
+ "f2_stencil_size_1 => null()" in result)
    # therefore the local variable is now declared as nlayers_f1_1
- output4 = " INTEGER(KIND=i_def) nlayers_f1_1"
- assert output4 in result
- output5 = " nlayers_f1_1 = f1_proxy%vspace%get_nlayers()"
- assert output5 in result
+ assert "integer(kind=i_def) :: nlayers_f1_1" in result
+ assert "nlayers_f1_1 = f1_proxy%vspace%get_nlayers()" in result
output6 = (
- " IF (nlayers_f1 .eq. x_direction) THEN\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DX,f2_stencil_size)\n"
- " END IF\n"
- " IF (nlayers_f1 .eq. y_direction) THEN\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DY,f2_stencil_size)\n"
- " END IF\n"
- " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
- " f2_stencil_size_1 => f2_stencil_map%get_stencil_sizes()")
+ " if (nlayers_f1 == x_direction) then\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DX, f2_stencil_size)\n"
+ " end if\n"
+ " if (nlayers_f1 == y_direction) then\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DY, f2_stencil_size)\n"
+ " end if\n"
+ " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
+ " f2_stencil_size_1 => f2_stencil_map%get_stencil_sizes()")
assert output6 in result
output7 = (
- " CALL testkern_stencil_xory1d_code(nlayers_f1_1, "
+ " call testkern_stencil_xory1d_code(nlayers_f1_1, "
"f1_data, f2_data, f2_stencil_size_1(cell), nlayers_f1, "
"f2_stencil_dofmap(:,:,cell), f3_data, f4_data, "
"ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, "
"map_w2(:,cell), ndf_w3, undf_w3, map_w3(:,cell))")
assert output7 in result
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_stencil_args_unique_2(dist_mem, tmpdir):
@@ -323,42 +316,41 @@ def test_stencil_args_unique_2(dist_mem, tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
- output1 = (" SUBROUTINE invoke_0(f1, f2, f3, f4, f2_info, "
- "f2_info_2, f2_info_1, f2_info_3)")
- assert output1 in result
- output2 = (
- " INTEGER(KIND=i_def), intent(in) :: f2_info, f2_info_2\n"
- " INTEGER(KIND=i_def), intent(in) :: f2_info_1, f2_info_3")
- assert output2 in result
+ assert (" subroutine invoke_0(f1, f2, f3, f4, f2_info, "
+ "f2_info_2, f2_info_1, f2_info_3)" in result)
+ assert "integer(kind=i_def), intent(in) :: f2_info\n" in result
+ assert "integer(kind=i_def), intent(in) :: f2_info_2\n" in result
+ assert "integer(kind=i_def), intent(in) :: f2_info_1\n" in result
+ assert "integer(kind=i_def), intent(in) :: f2_info_3\n" in result
output3 = (
- " IF (f2_info_1 .eq. x_direction) THEN\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DX,f2_info)\n"
- " END IF\n"
- " IF (f2_info_1 .eq. y_direction) THEN\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DY,f2_info)\n"
- " END IF\n"
- " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
- " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
- " IF (f2_info_3 .eq. x_direction) THEN\n"
- " f2_stencil_map_1 => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DX,f2_info_2)\n"
- " END IF\n"
- " IF (f2_info_3 .eq. y_direction) THEN\n"
- " f2_stencil_map_1 => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DY,f2_info_2)\n"
- " END IF")
+ " if (f2_info_1 == x_direction) then\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DX, f2_info)\n"
+ " end if\n"
+ " if (f2_info_1 == y_direction) then\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DY, f2_info)\n"
+ " end if\n"
+ " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
+ " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
+ " if (f2_info_3 == x_direction) then\n"
+ " f2_stencil_map_1 => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DX, f2_info_2)\n"
+ " end if\n"
+ " if (f2_info_3 == y_direction) then\n"
+ " f2_stencil_map_1 => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DY, f2_info_2)\n"
+ " end if")
assert output3 in result
output4 = (
- " CALL testkern_stencil_xory1d_code(nlayers_f1, "
+ " call testkern_stencil_xory1d_code(nlayers_f1, "
"f1_data, f2_data, f2_stencil_size(cell), f2_info_1, "
"f2_stencil_dofmap(:,:,cell), f3_data, f4_data, "
"ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, "
"map_w2(:,cell), ndf_w3, undf_w3, map_w3(:,cell))")
assert output4 in result
output5 = (
- " CALL testkern_stencil_xory1d_code(nlayers_f1, "
+ " call testkern_stencil_xory1d_code(nlayers_f1, "
"f1_data, f2_data, f2_stencil_size_1(cell), f2_info_3, "
"f2_stencil_dofmap_1(:,:,cell), f3_data, f4_data, "
"ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, "
@@ -366,15 +358,15 @@ def test_stencil_args_unique_2(dist_mem, tmpdir):
assert output5 in result
if dist_mem:
assert (
- "IF (f2_proxy%is_dirty(depth=MAX(f2_info + 1, "
- "f2_info_2 + 1))) THEN" in result)
+ "if (f2_proxy%is_dirty(depth=MAX(f2_info + 1, "
+ "f2_info_2 + 1))) then" in result)
assert (
- "CALL f2_proxy%halo_exchange(depth=MAX(f2_info + 1, "
+ "call f2_proxy%halo_exchange(depth=MAX(f2_info + 1, "
"f2_info_2 + 1))" in result)
- assert "IF (f3_proxy%is_dirty(depth=1)) THEN" in result
- assert "CALL f3_proxy%halo_exchange(depth=1)" in result
- assert "IF (f4_proxy%is_dirty(depth=1)) THEN" in result
- assert "CALL f4_proxy%halo_exchange(depth=1)" in result
+ assert "if (f3_proxy%is_dirty(depth=1)) then" in result
+ assert "call f3_proxy%halo_exchange(depth=1)" in result
+ assert "if (f4_proxy%is_dirty(depth=1)) then" in result
+ assert "call f4_proxy%halo_exchange(depth=1)" in result
def test_stencil_args_unique_3(dist_mem, tmpdir):
@@ -393,26 +385,22 @@ def test_stencil_args_unique_3(dist_mem, tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
+ assert "integer(kind=i_def), intent(in) :: my_info_f2_info\n" in result
+ assert "integer(kind=i_def), intent(in) :: my_info_f2_info_1\n" in result
assert (
- " INTEGER(KIND=i_def), intent(in) :: my_info_f2_info, "
- "my_info_f2_info_2\n"
- " INTEGER(KIND=i_def), intent(in) :: my_info_f2_info_1, "
- "my_info_f2_info_3\n"
- in result)
- assert (
- "f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap(STENCIL_1DX,"
+ "f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap(STENCIL_1DX, "
"my_info_f2_info)" in result)
if dist_mem:
assert (
- "IF (f2_proxy%is_dirty(depth=MAX(my_info_f2_info + 1, "
- "my_info_f2_info_2 + 1))) THEN" in result)
+ "if (f2_proxy%is_dirty(depth=MAX(my_info_f2_info + 1, "
+ "my_info_f2_info_2 + 1))) then" in result)
assert (
- "CALL f2_proxy%halo_exchange(depth=MAX(my_info_f2_info + 1, "
+ "call f2_proxy%halo_exchange(depth=MAX(my_info_f2_info + 1, "
"my_info_f2_info_2 + 1))" in result)
- assert "IF (f3_proxy%is_dirty(depth=1)) THEN" in result
- assert "CALL f3_proxy%halo_exchange(depth=1)" in result
- assert "IF (f4_proxy%is_dirty(depth=1)) THEN" in result
- assert "CALL f4_proxy%halo_exchange(depth=1)" in result
+ assert "if (f3_proxy%is_dirty(depth=1)) then" in result
+ assert "call f3_proxy%halo_exchange(depth=1)" in result
+ assert "if (f4_proxy%is_dirty(depth=1)) then" in result
+ assert "call f4_proxy%halo_exchange(depth=1)" in result
def test_stencil_vector(dist_mem, tmpdir):
@@ -429,23 +417,19 @@ def test_stencil_vector(dist_mem, tmpdir):
psy = PSyFactory(TEST_API,
distributed_memory=dist_mem).create(invoke_info)
result = str(psy.gen)
+ assert ("use stencil_dofmap_mod, only : STENCIL_CROSS, "
+ "stencil_dofmap_type\n" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:) :: "
+ "f2_stencil_size => null()\n" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:,:,:) :: "
+ "f2_stencil_dofmap => null()\n")
+ assert ("type(stencil_dofmap_type), pointer :: f2_stencil_map => "
+ "null()\n" in result)
assert (
- " USE stencil_dofmap_mod, ONLY: STENCIL_CROSS\n"
- " USE stencil_dofmap_mod, ONLY: stencil_dofmap_type\n") \
- in str(result)
- assert (
- " INTEGER(KIND=i_def), pointer :: f2_stencil_size(:) => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_dofmap(:,:,:)"
- " => null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f2_stencil_map => "
- "null()\n") \
- in str(result)
- assert (
- " f2_stencil_map => f2_proxy(1)%vspace%get_stencil_dofmap"
- "(STENCIL_CROSS,f2_extent)\n"
- " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
- " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
+ " f2_stencil_map => f2_proxy(1)%vspace%get_stencil_dofmap"
+ "(STENCIL_CROSS, f2_extent)\n"
+ " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
+ " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
) in str(result)
assert (
"f2_1_data, f2_2_data, f2_3_data, "
@@ -467,36 +451,33 @@ def test_stencil_xory_vector(dist_mem, tmpdir):
psy = PSyFactory(TEST_API,
distributed_memory=dist_mem).create(invoke_info)
result = str(psy.gen)
+ assert (" use stencil_dofmap_mod, only : STENCIL_1DX, STENCIL_1DY, "
+ "stencil_dofmap_type\n" in result)
+ assert (" use flux_direction_mod, only : x_direction, y_direction\n"
+ in result)
assert (
- " USE stencil_dofmap_mod, ONLY: STENCIL_1DX, STENCIL_1DY\n"
- " USE flux_direction_mod, ONLY: x_direction, y_direction\n"
- " USE stencil_dofmap_mod, ONLY: stencil_dofmap_type\n") \
- in result
- assert (
- " INTEGER(KIND=i_def), intent(in) :: f2_extent\n"
- " INTEGER(KIND=i_def), intent(in) :: f2_direction\n") \
- in result
- assert (
- " INTEGER(KIND=i_def), pointer :: f2_stencil_size(:) => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_dofmap(:,:,:)"
- " => null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f2_stencil_map => "
- "null()\n") \
+ " integer(kind=i_def), intent(in) :: f2_extent\n"
+ " integer(kind=i_def), intent(in) :: f2_direction\n") \
in result
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f2_stencil_size => "
+ "null()\n" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:,:,:) :: "
+ "f2_stencil_dofmap => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f2_stencil_map => "
+ "null()\n" in result)
assert (
- " IF (f2_direction .eq. x_direction) THEN\n"
- " f2_stencil_map => "
+ " if (f2_direction == x_direction) then\n"
+ " f2_stencil_map => "
"f2_proxy(1)%vspace%get_stencil_dofmap"
- "(STENCIL_1DX,f2_extent)\n"
- " END IF\n"
- " IF (f2_direction .eq. y_direction) THEN\n"
- " f2_stencil_map => "
+ "(STENCIL_1DX, f2_extent)\n"
+ " end if\n"
+ " if (f2_direction == y_direction) then\n"
+ " f2_stencil_map => "
"f2_proxy(1)%vspace%get_stencil_dofmap"
- "(STENCIL_1DY,f2_extent)\n"
- " END IF\n"
- " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
- " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
+ "(STENCIL_1DY, f2_extent)\n"
+ " end if\n"
+ " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
+ " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
) in result
assert (
"f2_1_data, f2_2_data, f2_3_data, "
@@ -657,32 +638,29 @@ def test_single_stencil_extent(dist_mem, tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
output1 = (
- "SUBROUTINE invoke_0_testkern_stencil_type(f1, f2, f3, f4, "
+ "subroutine invoke_0_testkern_stencil_type(f1, f2, f3, f4, "
"f2_extent)")
assert output1 in result
- assert "USE stencil_dofmap_mod, ONLY: stencil_dofmap_type\n" in result
- assert "USE stencil_dofmap_mod, ONLY: STENCIL_CROSS\n" in result
- output3 = (" INTEGER(KIND=i_def), intent(in) :: f2_extent\n")
- assert output3 in result
- output4 = (
- " INTEGER(KIND=i_def), pointer :: f2_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_dofmap(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f2_stencil_map => "
- "null()\n")
- assert output4 in result
+ assert ("use stencil_dofmap_mod, only : STENCIL_CROSS, "
+ "stencil_dofmap_type\n" in result)
+ assert "integer(kind=i_def), intent(in) :: f2_extent\n" in result
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f2_stencil_size"
+ " => null()\n" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:,:,:) :: "
+ "f2_stencil_dofmap => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f2_stencil_map => "
+ "null()\n" in result)
output5 = (
- " !\n"
- " ! Initialise stencil dofmaps\n"
- " !\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_CROSS,f2_extent)\n"
- " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
- " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
- " !\n")
+ "\n"
+ " ! Initialise stencil dofmaps\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_CROSS, f2_extent)\n"
+ " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
+ " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
+ "\n")
assert output5 in result
output6 = (
- " CALL testkern_stencil_code(nlayers_f1, f1_data,"
+ " call testkern_stencil_code(nlayers_f1, f1_data,"
" f2_data, f2_stencil_size(cell), f2_stencil_dofmap(:,:,cell),"
" f3_data, f4_data, ndf_w1, undf_w1, map_w1(:,cell), "
"ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, undf_w3, "
@@ -706,43 +684,38 @@ def test_single_stencil_xory1d(dist_mem, tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
output1 = (
- " SUBROUTINE invoke_0_testkern_stencil_xory1d_type(f1, f2, f3, "
+ " subroutine invoke_0_testkern_stencil_xory1d_type(f1, f2, f3, "
"f4, f2_extent, f2_direction)")
assert output1 in result
- output2 = (
- " USE stencil_dofmap_mod, ONLY: STENCIL_1DX, STENCIL_1DY\n"
- " USE flux_direction_mod, ONLY: x_direction, y_direction\n"
- " USE stencil_dofmap_mod, ONLY: stencil_dofmap_type\n")
- assert output2 in result
+ assert ("use stencil_dofmap_mod, only : STENCIL_1DX, STENCIL_1DY, "
+ "stencil_dofmap_type\n" in result)
+ assert ("use flux_direction_mod, only : x_direction, y_direction\n"
+ in result)
output3 = (
- " INTEGER(KIND=i_def), intent(in) :: f2_extent\n"
- " INTEGER(KIND=i_def), intent(in) :: f2_direction\n")
+ " integer(kind=i_def), intent(in) :: f2_extent\n"
+ " integer(kind=i_def), intent(in) :: f2_direction\n")
assert output3 in result
- output4 = (
- " INTEGER(KIND=i_def), pointer :: f2_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_dofmap(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f2_stencil_map => "
- "null()\n")
- assert output4 in result
+ assert ("integer(kind=i_def), pointer, dimension(:) :: "
+ "f2_stencil_size => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f2_stencil_map => "
+ "null()\n" in result)
output5 = (
- " !\n"
- " ! Initialise stencil dofmaps\n"
- " !\n"
- " IF (f2_direction .eq. x_direction) THEN\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DX,f2_extent)\n"
- " END IF\n"
- " IF (f2_direction .eq. y_direction) THEN\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DY,f2_extent)\n"
- " END IF\n"
- " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
- " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
- " !\n")
+ "\n"
+ " ! Initialise stencil dofmaps\n"
+ " if (f2_direction == x_direction) then\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DX, f2_extent)\n"
+ " end if\n"
+ " if (f2_direction == y_direction) then\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DY, f2_extent)\n"
+ " end if\n"
+ " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
+ " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
+ "\n")
assert output5 in result
output6 = (
- " CALL testkern_stencil_xory1d_code(nlayers_f1, "
+ " call testkern_stencil_xory1d_code(nlayers_f1, "
"f1_data, f2_data, f2_stencil_size(cell), f2_direction, "
"f2_stencil_dofmap(:,:,cell), f3_data, f4_data, "
"ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, "
@@ -762,38 +735,32 @@ def test_single_stencil_literal(dist_mem, tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
- output1 = (" SUBROUTINE invoke_0_testkern_stencil_type(f1, f2, "
+ output1 = (" subroutine invoke_0_testkern_stencil_type(f1, f2, "
"f3, f4)")
assert output1 in result
- output2 = (
- " USE stencil_dofmap_mod, ONLY: STENCIL_CROSS\n"
- " USE stencil_dofmap_mod, ONLY: stencil_dofmap_type\n")
- assert output2 in result
- output3 = (
- " INTEGER(KIND=i_def), pointer :: f2_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_dofmap(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f2_stencil_map => "
- "null()\n")
- assert output3 in result
+ assert ("use stencil_dofmap_mod, only : STENCIL_CROSS, "
+ "stencil_dofmap_type\n" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f2_stencil_size"
+ " => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f2_stencil_map => "
+ "null()\n" in result)
output4 = (
- " !\n"
- " ! Initialise stencil dofmaps\n"
- " !\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_CROSS,1)\n"
- " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
- " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
- " !\n")
+ "\n"
+ " ! Initialise stencil dofmaps\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_CROSS, 1)\n"
+ " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
+ " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
+ "\n")
assert output4 in result
if dist_mem:
output5 = (
- " IF (f2_proxy%is_dirty(depth=2)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=2)\n"
- " END IF\n")
+ " if (f2_proxy%is_dirty(depth=2)) then\n"
+ " call f2_proxy%halo_exchange(depth=2)\n"
+ " end if\n")
assert output5 in result
output6 = (
- " CALL testkern_stencil_code(nlayers_f1, f1_data, "
+ " call testkern_stencil_code(nlayers_f1, f1_data, "
"f2_data, f2_stencil_size(cell), f2_stencil_dofmap(:,:,cell), "
"f3_data, f4_data, ndf_w1, undf_w1, map_w1(:,cell), "
"ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, undf_w3, "
@@ -813,37 +780,32 @@ def test_stencil_region(dist_mem, tmpdir):
result = str(psy.gen)
assert LFRicBuild(tmpdir).code_compiles(psy)
- output1 = (" SUBROUTINE invoke_0_testkern_stencil_region_type(f1, f2, "
+ output1 = (" subroutine invoke_0_testkern_stencil_region_type(f1, f2, "
"f3, f4, f2_extent)")
assert output1 in result
- output2 = (" USE stencil_dofmap_mod, ONLY: STENCIL_REGION\n"
- " USE stencil_dofmap_mod, ONLY: stencil_dofmap_type\n")
- assert output2 in result
- output3 = (
- " INTEGER(KIND=i_def), pointer :: f2_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_dofmap(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f2_stencil_map => "
- "null()\n")
- assert output3 in result
+ assert ("use stencil_dofmap_mod, only : STENCIL_REGION, "
+ "stencil_dofmap_type\n" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f2_stencil_size "
+ "=> null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f2_stencil_map => "
+ "null()\n" in result)
output4 = (
- " !\n"
- " ! Initialise stencil dofmaps\n"
- " !\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_REGION,f2_extent)\n"
- " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
- " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
- " !\n")
+ "\n"
+ " ! Initialise stencil dofmaps\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_REGION, f2_extent)\n"
+ " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
+ " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
+ "\n")
assert output4 in result
if dist_mem:
output5 = (
- " IF (f2_proxy%is_dirty(depth=f2_extent + 1)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=f2_extent + 1)\n"
- " END IF\n")
+ " if (f2_proxy%is_dirty(depth=f2_extent + 1)) then\n"
+ " call f2_proxy%halo_exchange(depth=f2_extent + 1)\n"
+ " end if\n")
assert output5 in result
output6 = (
- " CALL testkern_stencil_region_code(nlayers_f1, f1_data, "
+ " call testkern_stencil_region_code(nlayers_f1, f1_data, "
"f2_data, f2_stencil_size(cell), f2_stencil_dofmap(:,:,cell), "
"f3_data, f4_data, ndf_w1, undf_w1, map_w1(:,cell), "
"ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, undf_w3, "
@@ -866,37 +828,29 @@ def test_single_stencil_cross2d(dist_mem, tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
- output1 = (
- "SUBROUTINE invoke_0_testkern_stencil_cross2d_type(f1, f2, f3, f4, "
- "f2_extent)")
- assert output1 in result
- output2 = ("USE stencil_2D_dofmap_mod, ONLY: stencil_2D_dofmap_type, "
- "STENCIL_2D_CROSS\n")
- assert output2 in result
- output3 = (" INTEGER(KIND=i_def), intent(in) :: f2_extent\n")
- assert output3 in result
- output4 = (
- " INTEGER(KIND=i_def) f2_max_branch_length\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_size(:,:) => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_dofmap(:,:,:,:) => "
- "null()\n"
- " TYPE(stencil_2D_dofmap_type), pointer :: f2_stencil_map => "
- "null()\n")
- assert output4 in result
+ assert (
+ "subroutine invoke_0_testkern_stencil_cross2d_type(f1, f2, f3, f4, "
+ "f2_extent)" in result)
+ assert ("use stencil_2D_dofmap_mod, only : STENCIL_2D_CROSS, "
+ "stencil_2D_dofmap_type\n" in result)
+ assert "integer(kind=i_def), intent(in) :: f2_extent\n" in result
+ assert "integer(kind=i_def) :: f2_max_branch_length\n" in result
+ assert ("integer(kind=i_def), pointer, dimension(:,:) :: "
+ "f2_stencil_size => null()\n" in result)
+ assert ("type(stencil_2D_dofmap_type), pointer :: f2_stencil_map => "
+ "null()\n" in result)
output5 = (
- " !\n"
- " ! Initialise stencil dofmaps\n"
- " !\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_2D_dofmap("
- "STENCIL_2D_CROSS,f2_extent)\n"
- " f2_max_branch_length = f2_extent + 1_i_def\n"
- " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
- " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
- " !\n")
+ "\n"
+ " ! Initialise stencil dofmaps\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_2D_dofmap("
+ "STENCIL_2D_CROSS, f2_extent)\n"
+ " f2_max_branch_length = f2_extent + 1\n"
+ " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
+ " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
+ "\n")
assert output5 in result
output6 = (
- " CALL testkern_stencil_cross2d_code(nlayers_f1, f1_data,"
+ " call testkern_stencil_cross2d_code(nlayers_f1, f1_data,"
" f2_data, f2_stencil_size(:,cell), f2_max_branch_length,"
" f2_stencil_dofmap(:,:,:,cell), f3_data, f4_data,"
" ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell),"
@@ -918,44 +872,39 @@ def test_single_stencil_xory1d_literal(dist_mem, tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
- output1 = (" SUBROUTINE invoke_0_testkern_stencil_xory1d_type("
+ output1 = (" subroutine invoke_0_testkern_stencil_xory1d_type("
"f1, f2, f3, f4)")
assert output1 in result
- output2 = (
- " USE stencil_dofmap_mod, ONLY: STENCIL_1DX, STENCIL_1DY\n"
- " USE flux_direction_mod, ONLY: x_direction, y_direction\n"
- " USE stencil_dofmap_mod, ONLY: stencil_dofmap_type\n")
- assert output2 in result
- output3 = (
- " INTEGER(KIND=i_def), pointer :: f2_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_dofmap(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f2_stencil_map => "
- "null()\n")
- assert output3 in result
+ assert ("use stencil_dofmap_mod, only : STENCIL_1DX, STENCIL_1DY, "
+ "stencil_dofmap_type\n" in result)
+ assert ("use flux_direction_mod, only : x_direction, y_direction\n"
+ in result)
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f2_stencil_size "
+ "=> null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f2_stencil_map => "
+ "null()\n" in result)
output4 = (
- " ! Initialise stencil dofmaps\n"
- " !\n"
- " IF (x_direction .eq. x_direction) THEN\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DX,2)\n"
- " END IF\n"
- " IF (x_direction .eq. y_direction) THEN\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DY,2)\n"
- " END IF\n"
- " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
- " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
- " !\n")
+ " ! Initialise stencil dofmaps\n"
+ " if (x_direction == x_direction) then\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DX, 2)\n"
+ " end if\n"
+ " if (x_direction == y_direction) then\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DY, 2)\n"
+ " end if\n"
+ " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
+ " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
+ "\n")
assert output4 in result
if dist_mem:
output5 = (
- " IF (f2_proxy%is_dirty(depth=3)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=3)\n"
- " END IF\n")
+ " if (f2_proxy%is_dirty(depth=3)) then\n"
+ " call f2_proxy%halo_exchange(depth=3)\n"
+ " end if\n")
assert output5 in result
output6 = (
- " CALL testkern_stencil_xory1d_code(nlayers_f1, "
+ " call testkern_stencil_xory1d_code(nlayers_f1, "
"f1_data, f2_data, f2_stencil_size(cell), x_direction, "
"f2_stencil_dofmap(:,:,cell), f3_data, f4_data, "
"ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, "
@@ -979,44 +928,39 @@ def test_single_stencil_xory1d_literal_mixed(dist_mem, tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
- output1 = (" SUBROUTINE invoke_0_testkern_stencil_xory1d_type("
+ output1 = (" subroutine invoke_0_testkern_stencil_xory1d_type("
"f1, f2, f3, f4)")
assert output1 in result
- output2 = (
- " USE stencil_dofmap_mod, ONLY: STENCIL_1DX, STENCIL_1DY\n"
- " USE flux_direction_mod, ONLY: x_direction, y_direction\n"
- " USE stencil_dofmap_mod, ONLY: stencil_dofmap_type\n")
- assert output2 in result
- output3 = (
- " INTEGER(KIND=i_def), pointer :: f2_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_dofmap(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f2_stencil_map => "
- "null()\n")
- assert output3 in result
+ assert ("use stencil_dofmap_mod, only : STENCIL_1DX, STENCIL_1DY, "
+ "stencil_dofmap_type\n" in result)
+ assert ("use flux_direction_mod, only : x_direction, y_direction\n"
+ in result)
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f2_stencil_size "
+ "=> null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f2_stencil_map => "
+ "null()\n" in result)
output4 = (
- " ! Initialise stencil dofmaps\n"
- " !\n"
- " IF (x_direction .eq. x_direction) THEN\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DX,2)\n"
- " END IF\n"
- " IF (x_direction .eq. y_direction) THEN\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DY,2)\n"
- " END IF\n"
- " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
- " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
- " !\n")
+ " ! Initialise stencil dofmaps\n"
+ " if (x_direction == x_direction) then\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DX, 2)\n"
+ " end if\n"
+ " if (x_direction == y_direction) then\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DY, 2)\n"
+ " end if\n"
+ " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
+ " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
+ "\n")
assert output4 in result
if dist_mem:
output5 = (
- " IF (f2_proxy%is_dirty(depth=3)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=3)\n"
- " END IF\n")
+ " if (f2_proxy%is_dirty(depth=3)) then\n"
+ " call f2_proxy%halo_exchange(depth=3)\n"
+ " end if\n")
assert output5 in result
output6 = (
- " CALL testkern_stencil_xory1d_code(nlayers_f1, "
+ " call testkern_stencil_xory1d_code(nlayers_f1, "
"f1_data, f2_data, f2_stencil_size(cell), x_direction, "
"f2_stencil_dofmap(:,:,cell), f3_data, f4_data, "
"ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, "
@@ -1037,70 +981,63 @@ def test_multiple_stencils(dist_mem, tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
output1 = (
- " SUBROUTINE invoke_0_testkern_stencil_multi_type(f1, f2, f3, "
+ " subroutine invoke_0_testkern_stencil_multi_type(f1, f2, f3, "
"f4, f2_extent, f3_extent, f3_direction)")
assert output1 in result
- output2 = (
- " USE stencil_dofmap_mod, ONLY: STENCIL_1DX, STENCIL_1DY\n"
- " USE flux_direction_mod, ONLY: x_direction, y_direction\n"
- " USE stencil_dofmap_mod, ONLY: STENCIL_CROSS\n"
- " USE stencil_dofmap_mod, ONLY: stencil_dofmap_type\n")
- assert output2 in result
+ assert ("use stencil_dofmap_mod, only : STENCIL_1DX, STENCIL_1DY, "
+ "STENCIL_CROSS, stencil_dofmap_type\n" in result)
+ assert ("use flux_direction_mod, only : x_direction, y_direction\n"
+ in result)
output3 = (
- " INTEGER(KIND=i_def), intent(in) :: f2_extent, f3_extent\n"
- " INTEGER(KIND=i_def), intent(in) :: f3_direction\n")
+ " integer(kind=i_def), intent(in) :: f2_extent\n"
+ " integer(kind=i_def), intent(in) :: f3_extent\n"
+ " integer(kind=i_def), intent(in) :: f3_direction\n")
assert output3 in result
- output4 = (
- " INTEGER(KIND=i_def), pointer :: f4_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f4_stencil_dofmap(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f4_stencil_map => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer :: f3_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f3_stencil_dofmap(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f3_stencil_map => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_dofmap(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f2_stencil_map => "
- "null()\n")
- assert output4 in result
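+ # Check the stencil size and map declarations for f2, f3 and f4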
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f4_stencil_size"
+ " => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f4_stencil_map => "
+ "null()\n" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f3_stencil_size"
+ " => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f3_stencil_map => "
+ "null()\n" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f2_stencil_size "
+ "=> null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f2_stencil_map => "
+ "null()\n" in result)
output5 = (
- " ! Initialise stencil dofmaps\n"
- " !\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_CROSS,f2_extent)\n"
- " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
- " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
- " IF (f3_direction .eq. x_direction) THEN\n"
- " f3_stencil_map => f3_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DX,f3_extent)\n"
- " END IF\n"
- " IF (f3_direction .eq. y_direction) THEN\n"
- " f3_stencil_map => f3_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DY,f3_extent)\n"
- " END IF\n"
- " f3_stencil_dofmap => f3_stencil_map%get_whole_dofmap()\n"
- " f3_stencil_size => f3_stencil_map%get_stencil_sizes()\n"
- " f4_stencil_map => f4_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DX,1)\n"
- " f4_stencil_dofmap => f4_stencil_map%get_whole_dofmap()\n"
- " f4_stencil_size => f4_stencil_map%get_stencil_sizes()\n"
- " !\n")
+ " ! Initialise stencil dofmaps\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_CROSS, f2_extent)\n"
+ " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
+ " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
+ " if (f3_direction == x_direction) then\n"
+ " f3_stencil_map => f3_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DX, f3_extent)\n"
+ " end if\n"
+ " if (f3_direction == y_direction) then\n"
+ " f3_stencil_map => f3_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DY, f3_extent)\n"
+ " end if\n"
+ " f3_stencil_dofmap => f3_stencil_map%get_whole_dofmap()\n"
+ " f3_stencil_size => f3_stencil_map%get_stencil_sizes()\n"
+ " f4_stencil_map => f4_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DX, 1)\n"
+ " f4_stencil_dofmap => f4_stencil_map%get_whole_dofmap()\n"
+ " f4_stencil_size => f4_stencil_map%get_stencil_sizes()\n"
+ "\n")
assert output5 in result
if dist_mem:
output6 = (
- " IF (f2_proxy%is_dirty(depth=f2_extent + 1)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=f2_extent + 1)\n"
- " END IF\n"
- " IF (f3_proxy%is_dirty(depth=f3_extent + 1)) THEN\n"
- " CALL f3_proxy%halo_exchange(depth=f3_extent + 1)\n"
- " END IF\n")
+ " if (f2_proxy%is_dirty(depth=f2_extent + 1)) then\n"
+ " call f2_proxy%halo_exchange(depth=f2_extent + 1)\n"
+ " end if\n"
+ " if (f3_proxy%is_dirty(depth=f3_extent + 1)) then\n"
+ " call f3_proxy%halo_exchange(depth=f3_extent + 1)\n"
+ " end if\n")
assert output6 in result
output7 = (
- " CALL testkern_stencil_multi_code(nlayers_f1, f1_data, "
+ " call testkern_stencil_multi_code(nlayers_f1, f1_data, "
"f2_data, f2_stencil_size(cell), f2_stencil_dofmap(:,:,cell), "
"f3_data, f3_stencil_size(cell), f3_direction, "
"f3_stencil_dofmap(:,:,cell), f4_data, f4_stencil_size(cell), "
@@ -1122,78 +1059,70 @@ def test_multiple_stencils_int_field(dist_mem, tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
- output1 = (
- " USE integer_field_mod, ONLY: integer_field_type, "
- "integer_field_proxy_type\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_testkern_stencil_multi_int_field_type(f1, "
- "f2, f3, f4, f2_extent, f3_extent, f3_direction)")
- assert output1 in result
- output2 = (
- " USE stencil_dofmap_mod, ONLY: STENCIL_1DX, STENCIL_1DY\n"
- " USE flux_direction_mod, ONLY: x_direction, y_direction\n"
- " USE stencil_dofmap_mod, ONLY: STENCIL_CROSS\n"
- " USE stencil_dofmap_mod, ONLY: stencil_dofmap_type\n"
- " TYPE(integer_field_type), intent(in) :: f1, f2, f3, f4\n"
- " INTEGER(KIND=i_def), intent(in) :: f2_extent, f3_extent\n"
- " INTEGER(KIND=i_def), intent(in) :: f3_direction\n")
- assert output2 in result
- output3 = (
- " TYPE(integer_field_proxy_type) f1_proxy, f2_proxy, "
- "f3_proxy, f4_proxy\n")
- assert output3 in result
- output4 = (
- " INTEGER(KIND=i_def), pointer :: f4_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f4_stencil_dofmap(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f4_stencil_map => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer :: f3_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f3_stencil_dofmap(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f3_stencil_map => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_dofmap(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f2_stencil_map => "
- "null()\n")
- assert output4 in result
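+ # Check the use statements, the subroutine signature and the declarations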
+ assert ("use integer_field_mod, only : integer_field_proxy_type, "
+ "integer_field_type\n" in result)
+ assert (" subroutine invoke_0_testkern_stencil_multi_int_field_type(f1, "
+ "f2, f3, f4, f2_extent, f3_extent, f3_direction)" in result)
+ assert ("use stencil_dofmap_mod, only : STENCIL_1DX, STENCIL_1DY, "
+ "STENCIL_CROSS, stencil_dofmap_type\n" in result)
+ assert ("use flux_direction_mod, only : x_direction, y_direction\n"
+ in result)
+ assert "type(integer_field_type), intent(in) :: f1\n" in result
+ assert "type(integer_field_type), intent(in) :: f2\n" in result
+ assert "type(integer_field_type), intent(in) :: f3\n" in result
+ assert "type(integer_field_type), intent(in) :: f4\n" in result
+ assert "integer(kind=i_def), intent(in) :: f2_extent\n" in result
+ assert "integer(kind=i_def), intent(in) :: f3_extent\n" in result
+ assert "integer(kind=i_def), intent(in) :: f3_direction\n" in result
+ assert "type(integer_field_proxy_type) :: f1_proxy\n" in result
+ assert "type(integer_field_proxy_type) :: f2_proxy\n" in result
+ assert "type(integer_field_proxy_type) :: f3_proxy\n" in result
+ assert "type(integer_field_proxy_type) :: f4_proxy\n" in result
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f4_stencil_size"
+ " => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f4_stencil_map => "
+ "null()\n" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f3_stencil_size"
+ " => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f3_stencil_map => "
+ "null()\n" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f2_stencil_size"
+ " => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f2_stencil_map => "
+ "null()\n" in result)
output5 = (
- " ! Initialise stencil dofmaps\n"
- " !\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_CROSS,f2_extent)\n"
- " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
- " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
- " IF (f3_direction .eq. x_direction) THEN\n"
- " f3_stencil_map => f3_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DX,f3_extent)\n"
- " END IF\n"
- " IF (f3_direction .eq. y_direction) THEN\n"
- " f3_stencil_map => f3_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DY,f3_extent)\n"
- " END IF\n"
- " f3_stencil_dofmap => f3_stencil_map%get_whole_dofmap()\n"
- " f3_stencil_size => f3_stencil_map%get_stencil_sizes()\n"
- " f4_stencil_map => f4_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DX,2)\n"
- " f4_stencil_dofmap => f4_stencil_map%get_whole_dofmap()\n"
- " f4_stencil_size => f4_stencil_map%get_stencil_sizes()\n"
- " !\n")
+ " ! Initialise stencil dofmaps\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_CROSS, f2_extent)\n"
+ " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
+ " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
+ " if (f3_direction == x_direction) then\n"
+ " f3_stencil_map => f3_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DX, f3_extent)\n"
+ " end if\n"
+ " if (f3_direction == y_direction) then\n"
+ " f3_stencil_map => f3_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DY, f3_extent)\n"
+ " end if\n"
+ " f3_stencil_dofmap => f3_stencil_map%get_whole_dofmap()\n"
+ " f3_stencil_size => f3_stencil_map%get_stencil_sizes()\n"
+ " f4_stencil_map => f4_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DX, 2)\n"
+ " f4_stencil_dofmap => f4_stencil_map%get_whole_dofmap()\n"
+ " f4_stencil_size => f4_stencil_map%get_stencil_sizes()\n"
+ "\n")
assert output5 in result
if dist_mem:
output6 = (
- " IF (f3_proxy%is_dirty(depth=f3_extent)) THEN\n"
- " CALL f3_proxy%halo_exchange(depth=f3_extent)\n"
- " END IF\n"
- " IF (f4_proxy%is_dirty(depth=2)) THEN\n"
- " CALL f4_proxy%halo_exchange(depth=2)\n"
- " END IF\n")
+ " if (f3_proxy%is_dirty(depth=f3_extent)) then\n"
+ " call f3_proxy%halo_exchange(depth=f3_extent)\n"
+ " end if\n"
+ " if (f4_proxy%is_dirty(depth=2)) then\n"
+ " call f4_proxy%halo_exchange(depth=2)\n"
+ " end if\n")
assert output6 in result
output7 = (
- " CALL testkern_stencil_multi_int_field_code(nlayers_f1, "
+ " call testkern_stencil_multi_int_field_code(nlayers_f1, "
"f1_data, f2_data, f2_stencil_size(cell), "
"f2_stencil_dofmap(:,:,cell), f3_data, f3_stencil_size(cell), "
"f3_direction, f3_stencil_dofmap(:,:,cell), f4_data, "
@@ -1217,55 +1146,49 @@ def test_multiple_stencil_same_name(dist_mem, tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
output1 = (
- " SUBROUTINE invoke_0_testkern_stencil_multi_type(f1, f2, f3, "
+ " subroutine invoke_0_testkern_stencil_multi_type(f1, f2, f3, "
"f4, extent, f3_direction)")
assert output1 in result
output2 = (
- " INTEGER(KIND=i_def), intent(in) :: extent\n"
- " INTEGER(KIND=i_def), intent(in) :: f3_direction\n")
+ " integer(kind=i_def), intent(in) :: extent\n"
+ " integer(kind=i_def), intent(in) :: f3_direction\n")
assert output2 in result
- output3 = (
- " INTEGER(KIND=i_def), pointer :: f4_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f4_stencil_dofmap(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f4_stencil_map => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer :: f3_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f3_stencil_dofmap(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f3_stencil_map => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_dofmap(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f2_stencil_map => "
- "null()\n")
- assert output3 in result
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f4_stencil_size"
+ " => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f4_stencil_map => "
+ "null()\n" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f3_stencil_size"
+ " => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f3_stencil_map => "
+ "null()\n" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f2_stencil_size"
+ " => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f2_stencil_map => "
+ "null()\n" in result)
output4 = (
- " ! Initialise stencil dofmaps\n"
- " !\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_CROSS,extent)\n"
- " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
- " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
- " IF (f3_direction .eq. x_direction) THEN\n"
- " f3_stencil_map => f3_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DX,extent)\n"
- " END IF\n"
- " IF (f3_direction .eq. y_direction) THEN\n"
- " f3_stencil_map => f3_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DY,extent)\n"
- " END IF\n"
- " f3_stencil_dofmap => f3_stencil_map%get_whole_dofmap()\n"
- " f3_stencil_size => f3_stencil_map%get_stencil_sizes()\n"
- " f4_stencil_map => f4_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DX,extent)\n"
- " f4_stencil_dofmap => f4_stencil_map%get_whole_dofmap()\n"
- " f4_stencil_size => f4_stencil_map%get_stencil_sizes()\n"
- " !\n")
+ " ! Initialise stencil dofmaps\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_CROSS, extent)\n"
+ " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
+ " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
+ " if (f3_direction == x_direction) then\n"
+ " f3_stencil_map => f3_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DX, extent)\n"
+ " end if\n"
+ " if (f3_direction == y_direction) then\n"
+ " f3_stencil_map => f3_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DY, extent)\n"
+ " end if\n"
+ " f3_stencil_dofmap => f3_stencil_map%get_whole_dofmap()\n"
+ " f3_stencil_size => f3_stencil_map%get_stencil_sizes()\n"
+ " f4_stencil_map => f4_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DX, extent)\n"
+ " f4_stencil_dofmap => f4_stencil_map%get_whole_dofmap()\n"
+ " f4_stencil_size => f4_stencil_map%get_stencil_sizes()\n"
+ "\n")
assert output4 in result
output5 = (
- " CALL testkern_stencil_multi_code(nlayers_f1, f1_data, "
+ " call testkern_stencil_multi_code(nlayers_f1, f1_data, "
"f2_data, f2_stencil_size(cell), f2_stencil_dofmap(:,:,cell), "
"f3_data, f3_stencil_size(cell), f3_direction, "
"f3_stencil_dofmap(:,:,cell), f4_data, f4_stencil_size(cell), "
@@ -1288,67 +1211,61 @@ def test_multi_stencil_same_name_direction(dist_mem, tmpdir):
result = str(psy.gen)
output1 = (
- "SUBROUTINE invoke_0_testkern_stencil_multi_2_type(f1, f2, f3, "
+ "subroutine invoke_0_testkern_stencil_multi_2_type(f1, f2, f3, "
"f4, extent, direction)")
assert output1 in result
output2 = (
- " INTEGER(KIND=i_def), intent(in) :: extent\n"
- " INTEGER(KIND=i_def), intent(in) :: direction\n")
+ " integer(kind=i_def), intent(in) :: extent\n"
+ " integer(kind=i_def), intent(in) :: direction\n")
assert output2 in result
- output3 = (
- " INTEGER(KIND=i_def), pointer :: f4_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f4_stencil_dofmap(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f4_stencil_map => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer :: f3_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f3_stencil_dofmap(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f3_stencil_map => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_dofmap(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f2_stencil_map => "
- "null()\n")
- assert output3 in result
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f4_stencil_size"
+ " => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f4_stencil_map => "
+ "null()\n" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f3_stencil_size"
+ " => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f3_stencil_map => "
+ "null()\n" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f2_stencil_size"
+ " => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f2_stencil_map => "
+ "null()\n" in result)
output4 = (
- " ! Initialise stencil dofmaps\n"
- " !\n"
- " IF (direction .eq. x_direction) THEN\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DX,extent)\n"
- " END IF\n"
- " IF (direction .eq. y_direction) THEN\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DY,extent)\n"
- " END IF\n"
- " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
- " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
- " IF (direction .eq. x_direction) THEN\n"
- " f3_stencil_map => f3_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DX,extent)\n"
- " END IF\n"
- " IF (direction .eq. y_direction) THEN\n"
- " f3_stencil_map => f3_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DY,extent)\n"
- " END IF\n"
- " f3_stencil_dofmap => f3_stencil_map%get_whole_dofmap()\n"
- " f3_stencil_size => f3_stencil_map%get_stencil_sizes()\n"
- " IF (direction .eq. x_direction) THEN\n"
- " f4_stencil_map => f4_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DX,extent)\n"
- " END IF\n"
- " IF (direction .eq. y_direction) THEN\n"
- " f4_stencil_map => f4_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DY,extent)\n"
- " END IF\n"
- " f4_stencil_dofmap => f4_stencil_map%get_whole_dofmap()\n"
- " f4_stencil_size => f4_stencil_map%get_stencil_sizes()\n"
- " !\n")
+ " ! Initialise stencil dofmaps\n"
+ " if (direction == x_direction) then\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DX, extent)\n"
+ " end if\n"
+ " if (direction == y_direction) then\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DY, extent)\n"
+ " end if\n"
+ " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
+ " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
+ " if (direction == x_direction) then\n"
+ " f3_stencil_map => f3_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DX, extent)\n"
+ " end if\n"
+ " if (direction == y_direction) then\n"
+ " f3_stencil_map => f3_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DY, extent)\n"
+ " end if\n"
+ " f3_stencil_dofmap => f3_stencil_map%get_whole_dofmap()\n"
+ " f3_stencil_size => f3_stencil_map%get_stencil_sizes()\n"
+ " if (direction == x_direction) then\n"
+ " f4_stencil_map => f4_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DX, extent)\n"
+ " end if\n"
+ " if (direction == y_direction) then\n"
+ " f4_stencil_map => f4_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DY, extent)\n"
+ " end if\n"
+ " f4_stencil_dofmap => f4_stencil_map%get_whole_dofmap()\n"
+ " f4_stencil_size => f4_stencil_map%get_stencil_sizes()\n"
+ "\n")
assert output4 in result
output5 = (
- " CALL testkern_stencil_multi_2_code(nlayers_f1, f1_data, "
+ " call testkern_stencil_multi_2_code(nlayers_f1, f1_data, "
"f2_data, f2_stencil_size(cell), direction, "
"f2_stencil_dofmap(:,:,cell), "
"f3_data, f3_stencil_size(cell), direction, "
@@ -1380,59 +1297,49 @@ def test_multi_kerns_stencils_diff_fields(dist_mem, tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
output1 = (
- " SUBROUTINE invoke_0(f1, f2a, f3, f4, f2b, f2c, f2a_extent, "
+ " subroutine invoke_0(f1, f2a, f3, f4, f2b, f2c, f2a_extent, "
"extent)")
assert output1 in result
- assert "USE testkern_stencil_mod, ONLY: testkern_stencil_code\n" in result
- output2 = (
- " USE stencil_dofmap_mod, ONLY: STENCIL_CROSS\n"
- " USE stencil_dofmap_mod, ONLY: stencil_dofmap_type\n")
- assert output2 in result
- output3 = (
- " INTEGER(KIND=i_def), intent(in) :: f2a_extent, extent\n")
- assert output3 in result
- output4 = (
- " INTEGER(KIND=i_def), pointer :: f2b_stencil_size(:) => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer :: f2b_stencil_dofmap(:,:,:) "
- "=> null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f2b_stencil_map "
- "=> null()\n"
- " INTEGER(KIND=i_def), pointer :: f2a_stencil_size(:) => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer :: f2a_stencil_dofmap(:,:,:) "
- "=> null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f2a_stencil_map "
- "=> null()\n")
- assert output4 in result
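+ # Check the use statements and the extent and stencil declarations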
+ assert "use testkern_stencil_mod, only : testkern_stencil_code\n" in result
+ assert ("use stencil_dofmap_mod, only : STENCIL_CROSS, "
+ "stencil_dofmap_type\n" in result)
+ assert "integer(kind=i_def), intent(in) :: f2a_extent\n" in result
+ assert "integer(kind=i_def), intent(in) :: extent\n" in result
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f2b_stencil_size"
+ " => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f2b_stencil_map "
+ "=> null()\n" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f2a_stencil_size"
+ " => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f2a_stencil_map "
+ "=> null()\n" in result)
output5 = (
- " !\n"
- " f2a_stencil_map => f2a_proxy%vspace%get_stencil_dofmap("
- "STENCIL_CROSS,f2a_extent)\n"
- " f2a_stencil_dofmap => f2a_stencil_map%get_whole_dofmap()\n"
- " f2a_stencil_size => f2a_stencil_map%get_stencil_sizes()\n"
- " f2b_stencil_map => f2b_proxy%vspace%get_stencil_dofmap("
- "STENCIL_CROSS,extent)\n"
- " f2b_stencil_dofmap => f2b_stencil_map%get_whole_dofmap()\n"
- " f2b_stencil_size => f2b_stencil_map%get_stencil_sizes()\n"
- " !\n")
+ " f2a_stencil_map => f2a_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_CROSS, f2a_extent)\n"
+ " f2a_stencil_dofmap => f2a_stencil_map%get_whole_dofmap()\n"
+ " f2a_stencil_size => f2a_stencil_map%get_stencil_sizes()\n"
+ " f2b_stencil_map => f2b_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_CROSS, extent)\n"
+ " f2b_stencil_dofmap => f2b_stencil_map%get_whole_dofmap()\n"
+ " f2b_stencil_size => f2b_stencil_map%get_stencil_sizes()\n"
+ "\n")
assert output5 in result
output6 = (
- " CALL testkern_stencil_code(nlayers_f1, f1_data, "
+ " call testkern_stencil_code(nlayers_f1, f1_data, "
"f2a_data, f2a_stencil_size(cell), "
"f2a_stencil_dofmap(:,:,cell), f3_data, f4_data, ndf_w1, "
"undf_w1, map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, "
"undf_w3, map_w3(:,cell))")
assert output6 in result
output7 = (
- " CALL testkern_stencil_code(nlayers_f1, f1_data, "
+ " call testkern_stencil_code(nlayers_f1, f1_data, "
"f2b_data, f2b_stencil_size(cell), "
"f2b_stencil_dofmap(:,:,cell), f3_data, f4_data, ndf_w1, "
"undf_w1, map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, "
"undf_w3, map_w3(:,cell))")
assert output7 in result
output8 = (
- " CALL testkern_stencil_code(nlayers_f1, f1_data, "
+ " call testkern_stencil_code(nlayers_f1, f1_data, "
"f2c_data, f2b_stencil_size(cell), "
"f2b_stencil_dofmap(:,:,cell), f3_data, f4_data, ndf_w1, "
"undf_w1, map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, "
@@ -1457,70 +1364,68 @@ def test_extent_name_clash(dist_mem, tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
output1 = (
- " SUBROUTINE invoke_0(f2_stencil_map, f2, f2_stencil_dofmap, "
- "stencil_cross_1, f3_stencil_map, f3, f3_stencil_dofmap, "
+ " subroutine invoke_0(f2_stencil_map, f2, f2_stencil_dofmap, "
+ "stencil_cross_arg, f3_stencil_map, f3, f3_stencil_dofmap, "
"f2_extent, f3_stencil_size)")
assert output1 in result
- output2 = (
- " USE stencil_dofmap_mod, ONLY: STENCIL_CROSS\n"
- " USE stencil_dofmap_mod, ONLY: stencil_dofmap_type")
- assert output2 in result
- assert ("INTEGER(KIND=i_def), intent(in) :: f2_extent, f3_stencil_size\n"
- in result)
- output3 = (
- " TYPE(field_type), intent(in) :: f2_stencil_map, f2, "
- "f2_stencil_dofmap, stencil_cross_1, f3_stencil_map, f3, "
- "f3_stencil_dofmap\n")
- assert output3 in result
- output4 = (
- " INTEGER(KIND=i_def), pointer :: f3_stencil_size_1(:) => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer :: f3_stencil_dofmap_1(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f3_stencil_map_1 => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_dofmap_1(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f2_stencil_map_1 => "
- "null()\n")
- assert output4 in result
- output5 = (
- " TYPE(field_proxy_type) f2_stencil_map_proxy, f2_proxy, "
- "f2_stencil_dofmap_proxy, stencil_cross_1_proxy, "
- "f3_stencil_map_proxy, f3_proxy, f3_stencil_dofmap_proxy\n")
- assert output5 in result
- output6 = (
- " stencil_cross_1_proxy = stencil_cross_1%get_proxy()")
- assert output6 in result
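+ # Check the declarations (renamed to avoid clashes) and the 'stencil_cross_arg' proxy assignment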
+ assert ("use stencil_dofmap_mod, only : STENCIL_CROSS, "
+ "stencil_dofmap_type\n" in result)
+ assert "integer(kind=i_def), intent(in) :: f2_extent\n" in result
+ assert "integer(kind=i_def), intent(in) :: f3_stencil_size\n" in result
+ assert "type(field_type), intent(in) :: f2_stencil_map" in result
+ assert "type(field_type), intent(in) :: f2" in result
+ assert "type(field_type), intent(in) :: f2_stencil_dofmap" in result
+ assert "type(field_type), intent(in) :: stencil_cross_arg" in result
+ assert "type(field_type), intent(in) :: f3_stencil_map" in result
+ assert "type(field_type), intent(in) :: f3" in result
+ assert "type(field_type), intent(in) :: f3_stencil_dofmap" in result
+ assert ("integer(kind=i_def), pointer, dimension(:) :: "
+ "f3_stencil_size_1 => null()\n" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:,:,:) :: "
+ "f3_stencil_dofmap_1 => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f3_stencil_map_1 => "
+ "null()\n" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:) :: "
+ "f2_stencil_size => null()\n" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:,:,:) :: "
+ "f2_stencil_dofmap_1 => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f2_stencil_map_1 => "
+ "null()\n" in result)
+ assert "type(field_proxy_type) :: f2_stencil_map_proxy" in result
+ assert "type(field_proxy_type) :: f2_proxy" in result
+ assert "type(field_proxy_type) :: f2_stencil_dofmap_proxy" in result
+ assert "type(field_proxy_type) :: stencil_cross_arg_proxy" in result
+ assert "type(field_proxy_type) :: f3_stencil_map_proxy" in result
+ assert "type(field_proxy_type) :: f3_proxy" in result
+ assert "type(field_proxy_type) :: f3_stencil_dofmap_proxy" in result
+ assert "stencil_cross_arg_proxy = stencil_cross_arg%get_proxy()" in result
output7 = (
- " ! Initialise stencil dofmaps\n"
- " !\n"
- " f2_stencil_map_1 => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_CROSS,f2_extent)\n"
- " f2_stencil_dofmap_1 => "
+ " ! Initialise stencil dofmaps\n"
+ " f2_stencil_map_1 => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_CROSS, f2_extent)\n"
+ " f2_stencil_dofmap_1 => "
"f2_stencil_map_1%get_whole_dofmap()\n"
- " f2_stencil_size => f2_stencil_map_1%get_stencil_sizes()\n"
- " f3_stencil_map_1 => f3_proxy%vspace%get_stencil_dofmap("
- "STENCIL_CROSS,f3_stencil_size)\n"
- " f3_stencil_dofmap_1 => "
+ " f2_stencil_size => f2_stencil_map_1%get_stencil_sizes()\n"
+ " f3_stencil_map_1 => f3_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_CROSS, f3_stencil_size)\n"
+ " f3_stencil_dofmap_1 => "
"f3_stencil_map_1%get_whole_dofmap()\n"
- " f3_stencil_size_1 => f3_stencil_map_1%get_stencil_sizes()\n"
- " !\n")
+ " f3_stencil_size_1 => f3_stencil_map_1%get_stencil_sizes()\n"
+ )
assert output7 in result
output8 = (
- " CALL testkern_stencil_code(nlayers_f2_stencil_map, "
+ " call testkern_stencil_code(nlayers_f2_stencil_map, "
"f2_stencil_map_data, f2_data, f2_stencil_size(cell), "
"f2_stencil_dofmap_1(:,:,cell), f2_stencil_dofmap_data, "
- "stencil_cross_1_data, ndf_w1, undf_w1, map_w1(:,cell), "
+ "stencil_cross_arg_data, ndf_w1, undf_w1, map_w1(:,cell), "
"ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, undf_w3, "
"map_w3(:,cell))")
assert output8 in result
output9 = (
- " CALL testkern_stencil_code(nlayers_f3_stencil_map, "
+ " call testkern_stencil_code(nlayers_f3_stencil_map, "
"f3_stencil_map_data, f3_data, f3_stencil_size_1(cell), "
"f3_stencil_dofmap_1(:,:,cell), f3_stencil_dofmap_data, "
- "stencil_cross_1_data, ndf_w1, undf_w1, map_w1(:,cell), "
+ "stencil_cross_arg_data, ndf_w1, undf_w1, map_w1(:,cell), "
"ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, undf_w3, "
"map_w3(:,cell))")
assert output9 in result
@@ -1540,42 +1445,34 @@ def test_two_stencils_same_field(tmpdir, dist_mem):
result = str(psy.gen)
output1 = (
- " SUBROUTINE invoke_0(f1_w1, f2_w2, f3_w2, f4_w3, f1_w3, "
+ " subroutine invoke_0(f1_w1, f2_w2, f3_w2, f4_w3, f1_w3, "
"f2_extent, extent)")
assert output1 in result
- output2 = (
- " INTEGER(KIND=i_def), pointer :: f2_w2_stencil_size_1(:) => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_w2_stencil_dofmap_1(:,:,:) "
- "=> null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f2_w2_stencil_map_1 "
- "=> null()")
- assert output2 in result
- output3 = (
- " INTEGER(KIND=i_def), pointer :: f2_w2_stencil_size(:) => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_w2_stencil_dofmap(:,:,:) "
- "=> null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f2_w2_stencil_map "
- "=> null()")
- assert output3 in result
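+ # Check the declarations of the two stencil dofmaps for f2_w2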
+ assert ("integer(kind=i_def), pointer, dimension(:) :: "
+ "f2_w2_stencil_size_1 => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f2_w2_stencil_map_1 "
+ "=> null()" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:) :: "
+ "f2_w2_stencil_size => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f2_w2_stencil_map "
+ "=> null()" in result)
output4 = (
- " f2_w2_stencil_map => f2_w2_proxy%vspace%get_stencil_dofmap"
- "(STENCIL_CROSS,f2_extent)\n"
- " f2_w2_stencil_dofmap => "
+ " f2_w2_stencil_map => f2_w2_proxy%vspace%get_stencil_dofmap"
+ "(STENCIL_CROSS, f2_extent)\n"
+ " f2_w2_stencil_dofmap => "
"f2_w2_stencil_map%get_whole_dofmap()\n"
- " f2_w2_stencil_size => f2_w2_stencil_map%get_stencil_sizes()\n")
+ " f2_w2_stencil_size => f2_w2_stencil_map%get_stencil_sizes()\n")
assert output4 in result
output5 = (
- " f2_w2_stencil_map_1 => "
- "f2_w2_proxy%vspace%get_stencil_dofmap(STENCIL_CROSS,extent)\n"
- " f2_w2_stencil_dofmap_1 => "
+ " f2_w2_stencil_map_1 => "
+ "f2_w2_proxy%vspace%get_stencil_dofmap(STENCIL_CROSS, extent)\n"
+ " f2_w2_stencil_dofmap_1 => "
"f2_w2_stencil_map_1%get_whole_dofmap()\n"
- " f2_w2_stencil_size_1 => "
+ " f2_w2_stencil_size_1 => "
"f2_w2_stencil_map_1%get_stencil_sizes()\n")
assert output5 in result
output6 = (
- " CALL testkern_stencil_code(nlayers_f1_w1, f1_w1_data, "
+ " call testkern_stencil_code(nlayers_f1_w1, f1_w1_data, "
"f2_w2_data, f2_w2_stencil_size(cell), "
"f2_w2_stencil_dofmap(:,:,cell), "
"f3_w2_data, f4_w3_data, ndf_w1, undf_w1, "
@@ -1583,7 +1480,7 @@ def test_two_stencils_same_field(tmpdir, dist_mem):
"undf_w3, map_w3(:,cell))")
assert output6 in result
output7 = (
- " CALL testkern_stencil_depth_code(nlayers_f1_w3, "
+ " call testkern_stencil_depth_code(nlayers_f1_w3, "
"f1_w3_data, f1_w1_data, f1_w1_stencil_size(cell), "
"f1_w1_stencil_dofmap(:,:,cell), f2_w2_data, "
"f2_w2_stencil_size_1(cell), "
@@ -1614,42 +1511,35 @@ def test_stencils_same_field_literal_extent(dist_mem, tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
- output1 = (
- " INTEGER(KIND=i_def), pointer :: f2_stencil_size_1(:) => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_dofmap_1(:,:,:) "
- "=> null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f2_stencil_map_1 "
- "=> null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_size(:) => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_dofmap(:,:,:) "
- "=> null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f2_stencil_map "
- "=> null()")
- assert output1 in result
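+ # Check that two distinct stencil dofmaps are declared for f2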
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f2_stencil_size_1 "
+ "=> null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f2_stencil_map_1 "
+ "=> null()\n" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f2_stencil_size => "
+ "null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f2_stencil_map "
+ "=> null()" in result)
output2 = (
- " !\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_CROSS,1)\n"
- " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
- " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
- " f2_stencil_map_1 => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_CROSS,2)\n"
- " f2_stencil_dofmap_1 => "
+ "\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_CROSS, 1)\n"
+ " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
+ " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
+ " f2_stencil_map_1 => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_CROSS, 2)\n"
+ " f2_stencil_dofmap_1 => "
"f2_stencil_map_1%get_whole_dofmap()\n"
- " f2_stencil_size_1 => f2_stencil_map_1%get_stencil_sizes()\n"
- " !")
+ " f2_stencil_size_1 => f2_stencil_map_1%get_stencil_sizes()\n")
assert output2 in result
output3 = (
- " CALL testkern_stencil_code(nlayers_f1, f1_data, "
+ " call testkern_stencil_code(nlayers_f1, f1_data, "
"f2_data, f2_stencil_size(cell), f2_stencil_dofmap(:,:,cell), "
"f3_data, f4_data, ndf_w1, undf_w1, map_w1(:,cell), "
"ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, undf_w3, "
"map_w3(:,cell))")
assert result.count(output3) == 2
output4 = (
- " CALL testkern_stencil_code(nlayers_f1, f1_data, "
+ " call testkern_stencil_code(nlayers_f1, f1_data, "
"f2_data, f2_stencil_size_1(cell), "
"f2_stencil_dofmap_1(:,:,cell), f3_data, f4_data, "
"ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), "
@@ -1657,12 +1547,12 @@ def test_stencils_same_field_literal_extent(dist_mem, tmpdir):
assert result.count(output4) == 1
if dist_mem:
- assert "IF (f2_proxy%is_dirty(depth=3)) THEN" in result
- assert "CALL f2_proxy%halo_exchange(depth=3)" in result
- assert "IF (f3_proxy%is_dirty(depth=1)) THEN" in result
- assert "CALL f3_proxy%halo_exchange(depth=1)" in result
- assert "IF (f4_proxy%is_dirty(depth=1)) THEN" in result
- assert "CALL f4_proxy%halo_exchange(depth=1)" in result
+ assert "if (f2_proxy%is_dirty(depth=3)) then" in result
+ assert "call f2_proxy%halo_exchange(depth=3)" in result
+ assert "if (f3_proxy%is_dirty(depth=1)) then" in result
+ assert "call f3_proxy%halo_exchange(depth=1)" in result
+ assert "if (f4_proxy%is_dirty(depth=1)) then" in result
+ assert "call f4_proxy%halo_exchange(depth=1)" in result
def test_stencils_same_field_literal_direct(dist_mem, tmpdir):
@@ -1682,53 +1572,48 @@ def test_stencils_same_field_literal_direct(dist_mem, tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
- output1 = (
- " INTEGER(KIND=i_def), pointer :: f2_stencil_size_1(:) => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_dofmap_1(:,:,:) "
- "=> null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f2_stencil_map_1 "
- "=> null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f2_stencil_dofmap(:,:,:) "
- "=> null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f2_stencil_map "
- "=> null()")
- assert output1 in result
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f2_stencil_size_1 "
+ "=> null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f2_stencil_map_1 "
+ "=> null()\n" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f2_stencil_size"
+ " => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f2_stencil_map "
+ "=> null()" in result)
output2 = (
- " !\n"
- " IF (x_direction .eq. x_direction) THEN\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DX,2)\n"
- " END IF\n"
- " IF (x_direction .eq. y_direction) THEN\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DY,2)\n"
- " END IF\n"
- " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
- " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
- " IF (y_direction .eq. x_direction) THEN\n"
- " f2_stencil_map_1 => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DX,2)\n"
- " END IF\n"
- " IF (y_direction .eq. y_direction) THEN\n"
- " f2_stencil_map_1 => f2_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DY,2)\n"
- " END IF\n"
- " f2_stencil_dofmap_1 => "
+ "\n"
+ " if (x_direction == x_direction) then\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DX, 2)\n"
+ " end if\n"
+ " if (x_direction == y_direction) then\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DY, 2)\n"
+ " end if\n"
+ " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
+ " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
+ " if (y_direction == x_direction) then\n"
+ " f2_stencil_map_1 => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DX, 2)\n"
+ " end if\n"
+ " if (y_direction == y_direction) then\n"
+ " f2_stencil_map_1 => f2_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DY, 2)\n"
+ " end if\n"
+ " f2_stencil_dofmap_1 => "
"f2_stencil_map_1%get_whole_dofmap()\n"
- " f2_stencil_size_1 => f2_stencil_map_1%get_stencil_sizes()\n"
- " !")
+ " f2_stencil_size_1 => f2_stencil_map_1%get_stencil_sizes()\n"
+ )
assert output2 in result
output3 = (
- " CALL testkern_stencil_xory1d_code(nlayers_f1, "
+ " call testkern_stencil_xory1d_code(nlayers_f1, "
"f1_data, f2_data, f2_stencil_size(cell), x_direction, "
"f2_stencil_dofmap(:,:,cell), f3_data, f4_data, "
"ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, "
"map_w2(:,cell), ndf_w3, undf_w3, map_w3(:,cell))")
assert result.count(output3) == 2
output4 = (
- " CALL testkern_stencil_xory1d_code(nlayers_f1, "
+ " call testkern_stencil_xory1d_code(nlayers_f1, "
"f1_data, f2_data, f2_stencil_size_1(cell), y_direction, "
"f2_stencil_dofmap_1(:,:,cell), f3_data, f4_data, "
"ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, "
@@ -1736,12 +1621,12 @@ def test_stencils_same_field_literal_direct(dist_mem, tmpdir):
assert result.count(output4) == 1
if dist_mem:
- assert "IF (f2_proxy%is_dirty(depth=3)) THEN" in result
- assert "CALL f2_proxy%halo_exchange(depth=3)" in result
- assert "IF (f3_proxy%is_dirty(depth=1)) THEN" in result
- assert "CALL f3_proxy%halo_exchange(depth=1)" in result
- assert "IF (f4_proxy%is_dirty(depth=1)) THEN" in result
- assert "CALL f4_proxy%halo_exchange(depth=1)" in result
+ assert "if (f2_proxy%is_dirty(depth=3)) then" in result
+ assert "call f2_proxy%halo_exchange(depth=3)" in result
+ assert "if (f3_proxy%is_dirty(depth=1)) then" in result
+ assert "call f3_proxy%halo_exchange(depth=1)" in result
+ assert "if (f4_proxy%is_dirty(depth=1)) then" in result
+ assert "call f4_proxy%halo_exchange(depth=1)" in result
def test_stencil_extent_specified():
@@ -1783,46 +1668,41 @@ def test_one_kern_multi_field_same_stencil(tmpdir, dist_mem):
assert LFRicBuild(tmpdir).code_compiles(psy)
output1 = (
- " SUBROUTINE invoke_0_testkern_multi_field_same_stencil_type("
+ " subroutine invoke_0_testkern_multi_field_same_stencil_type("
"f0, f1, f2, f3, f4, extent, direction)")
assert output1 in result
output2 = (
- " INTEGER(KIND=i_def), intent(in) :: extent\n"
- " INTEGER(KIND=i_def), intent(in) :: direction\n")
+ " integer(kind=i_def), intent(in) :: extent\n"
+ " integer(kind=i_def), intent(in) :: direction\n")
assert output2 in result
- output3 = (
- " INTEGER(KIND=i_def), pointer :: f3_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f3_stencil_dofmap(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f3_stencil_map => "
- "null()\n"
- " INTEGER(KIND=i_def), pointer :: f1_stencil_size(:) => null()\n"
- " INTEGER(KIND=i_def), pointer :: f1_stencil_dofmap(:,:,:) => "
- "null()\n"
- " TYPE(stencil_dofmap_type), pointer :: f1_stencil_map => "
- "null()\n")
- assert output3 in result
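+ # Check the f1 and f3 stencil declarations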
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f3_stencil_size "
+ "=> null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f3_stencil_map => "
+ "null()\n" in result)
+ assert ("integer(kind=i_def), pointer, dimension(:) :: f1_stencil_size"
+ " => null()\n" in result)
+ assert ("type(stencil_dofmap_type), pointer :: f1_stencil_map => "
+ "null()\n" in result)
output4 = (
- " ! Initialise stencil dofmaps\n"
- " !\n"
- " f1_stencil_map => f1_proxy%vspace%get_stencil_dofmap("
- "STENCIL_CROSS,extent)\n"
- " f1_stencil_dofmap => f1_stencil_map%get_whole_dofmap()\n"
- " f1_stencil_size => f1_stencil_map%get_stencil_sizes()\n"
- " IF (direction .eq. x_direction) THEN\n"
- " f3_stencil_map => f3_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DX,extent)\n"
- " END IF\n"
- " IF (direction .eq. y_direction) THEN\n"
- " f3_stencil_map => f3_proxy%vspace%get_stencil_dofmap("
- "STENCIL_1DY,extent)\n"
- " END IF\n"
- " f3_stencil_dofmap => f3_stencil_map%get_whole_dofmap()\n"
- " f3_stencil_size => f3_stencil_map%get_stencil_sizes()\n"
- " !\n")
+ " ! Initialise stencil dofmaps\n"
+ " f1_stencil_map => f1_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_CROSS, extent)\n"
+ " f1_stencil_dofmap => f1_stencil_map%get_whole_dofmap()\n"
+ " f1_stencil_size => f1_stencil_map%get_stencil_sizes()\n"
+ " if (direction == x_direction) then\n"
+ " f3_stencil_map => f3_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DX, extent)\n"
+ " end if\n"
+ " if (direction == y_direction) then\n"
+ " f3_stencil_map => f3_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_1DY, extent)\n"
+ " end if\n"
+ " f3_stencil_dofmap => f3_stencil_map%get_whole_dofmap()\n"
+ " f3_stencil_size => f3_stencil_map%get_stencil_sizes()\n"
+ "\n")
assert output4 in result
output5 = (
- " CALL testkern_multi_field_same_stencil_code(nlayers_f0, "
+ " call testkern_multi_field_same_stencil_code(nlayers_f0, "
"f0_data, f1_data, f1_stencil_size(cell), "
"f1_stencil_dofmap(:,:,cell), f2_data, f1_stencil_size(cell), "
"f1_stencil_dofmap(:,:,cell), f3_data, f3_stencil_size(cell), "
@@ -1854,30 +1734,30 @@ def test_single_kernel_any_space_stencil(dist_mem, tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
output1 = (
- " f1_stencil_map => f1_proxy%vspace%get_stencil_dofmap("
- "STENCIL_CROSS,extent)\n"
- " f1_stencil_dofmap => f1_stencil_map%get_whole_dofmap()\n"
- " f1_stencil_size => f1_stencil_map%get_stencil_sizes()\n"
- " f4_stencil_map => f4_proxy%vspace%get_stencil_dofmap("
- "STENCIL_CROSS,extent)\n"
- " f4_stencil_dofmap => f4_stencil_map%get_whole_dofmap()\n"
- " f4_stencil_size => f4_stencil_map%get_stencil_sizes()\n"
- " f5_stencil_map => f5_proxy%vspace%get_stencil_dofmap("
- "STENCIL_CROSS,extent)\n"
- " f5_stencil_dofmap => f5_stencil_map%get_whole_dofmap()\n"
- " f5_stencil_size => f5_stencil_map%get_stencil_sizes()\n"
- " !\n")
+ " f1_stencil_map => f1_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_CROSS, extent)\n"
+ " f1_stencil_dofmap => f1_stencil_map%get_whole_dofmap()\n"
+ " f1_stencil_size => f1_stencil_map%get_stencil_sizes()\n"
+ " f4_stencil_map => f4_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_CROSS, extent)\n"
+ " f4_stencil_dofmap => f4_stencil_map%get_whole_dofmap()\n"
+ " f4_stencil_size => f4_stencil_map%get_stencil_sizes()\n"
+ " f5_stencil_map => f5_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_CROSS, extent)\n"
+ " f5_stencil_dofmap => f5_stencil_map%get_whole_dofmap()\n"
+ " f5_stencil_size => f5_stencil_map%get_stencil_sizes()\n"
+ "\n")
assert output1 in result
# Check that both fields use the same stencil dofmap
output2 = (
- " CALL testkern_same_anyspace_stencil_code(nlayers_f0, "
+ " call testkern_same_anyspace_stencil_code(nlayers_f0, "
"f0_data, f1_data, f1_stencil_size(cell), "
"f1_stencil_dofmap(:,:,cell), f2_data, f1_stencil_size(cell), "
"f1_stencil_dofmap(:,:,cell), ndf_w1, undf_w1, map_w1(:,cell), "
"ndf_aspc1_f1, undf_aspc1_f1, map_aspc1_f1(:,cell))")
assert output2 in result
output3 = (
- " CALL testkern_different_anyspace_stencil_code(nlayers_f3, "
+ " call testkern_different_anyspace_stencil_code(nlayers_f3, "
"f3_data, f4_data, f4_stencil_size(cell), "
"f4_stencil_dofmap(:,:,cell), f5_data, f5_stencil_size(cell), "
"f5_stencil_dofmap(:,:,cell), ndf_w1, undf_w1, map_w1(:,cell), "
@@ -1908,21 +1788,21 @@ def test_multi_kernel_any_space_stencil_1(dist_mem):
result = str(psy.gen)
output1 = (
- " f1_stencil_map => f1_proxy%vspace%get_stencil_dofmap("
- "STENCIL_CROSS,extent)\n"
- " f1_stencil_dofmap => f1_stencil_map%get_whole_dofmap()\n"
- " f1_stencil_size => f1_stencil_map%get_stencil_sizes()\n"
- " !\n")
+ " f1_stencil_map => f1_proxy%vspace%get_stencil_dofmap("
+ "STENCIL_CROSS, extent)\n"
+ " f1_stencil_dofmap => f1_stencil_map%get_whole_dofmap()\n"
+ " f1_stencil_size => f1_stencil_map%get_stencil_sizes()\n"
+ "\n")
assert output1 in result
output2 = (
- " CALL testkern_same_anyspace_stencil_code(nlayers, "
+ " call testkern_same_anyspace_stencil_code(nlayers, "
"f0_data, f1_data, f1_stencil_size(cell), "
"f1_stencil_dofmap(:,:,cell), f2_data, f1_stencil_size(cell), "
"f1_stencil_dofmap(:,:,cell), ndf_w1, undf_w1, map_w1(:,cell), "
"ndf_aspc1_f1, undf_aspc1_f1, map_aspc1_f1)")
assert output2 in result
output3 = (
- " CALL testkern_different_anyspace_stencil_code(nlayers, "
+ " call testkern_different_anyspace_stencil_code(nlayers, "
"f3_data, f1_data, f1_stencil_size(cell), "
"f1_stencil_dofmap(:,:,cell), f2_data, f1_stencil_size(cell), "
"f1_stencil_dofmap(:,:,cell), ndf_w1, undf_w1, map_w1(:,cell), "
@@ -1984,9 +1864,9 @@ def test_lfricstencils_err():
# Break internal state
stencils._kern_args[0].descriptor.stencil['type'] = "not-a-type"
with pytest.raises(GenerationError) as err:
- stencils.initialise(ModuleGen(name="testmodule"))
- assert "Unsupported stencil type 'not-a-type' supplied." in str(err.value)
- with pytest.raises(GenerationError) as err:
- stencils._declare_maps_invoke(ModuleGen(name="testmodule"))
+ stencils._declare_maps_invoke()
assert "Unsupported stencil type 'not-a-type' supplied. Supported " \
"mappings are" in str(err.value)
+ with pytest.raises(GenerationError) as err:
+ stencils.initialise(0)
+ assert "Unsupported stencil type 'not-a-type' supplied." in str(err.value)
diff --git a/src/psyclone/tests/domain/lfric/transformations/dynamo0p3_transformations_test.py b/src/psyclone/tests/domain/lfric/transformations/dynamo0p3_transformations_test.py
index d7eff3a004..0025bc5531 100644
--- a/src/psyclone/tests/domain/lfric/transformations/dynamo0p3_transformations_test.py
+++ b/src/psyclone/tests/domain/lfric/transformations/dynamo0p3_transformations_test.py
@@ -53,8 +53,10 @@
LFRicHaloExchange)
from psyclone.errors import GenerationError, InternalError
from psyclone.psyGen import InvokeSchedule, GlobalSum, BuiltIn
-from psyclone.psyir.nodes import (colored, Loop, Schedule, Literal, Directive,
- OMPDoDirective, ACCEnterDataDirective)
+from psyclone.psyir.backend.visitor import VisitorError
+from psyclone.psyir.nodes import (
+ colored, Loop, Schedule, Literal, Directive, OMPDoDirective,
+ ACCEnterDataDirective, Assignment, Reference)
from psyclone.psyir.symbols import (AutomaticInterface, ScalarType, ArrayType,
REAL_TYPE, INTEGER_TYPE)
from psyclone.psyir.transformations import (
@@ -165,8 +167,8 @@ def test_colour_trans_declarations(tmpdir, dist_mem):
# Check that we've declared the loop-related variables
# and colour-map pointers
assert "integer(kind=i_def), pointer :: cmap(:,:)" in gen
- assert "integer(kind=i_def) ncolour" in gen
- assert "integer(kind=i_def) colour" in gen
+ assert "integer(kind=i_def) :: ncolour" in gen
+ assert "integer(kind=i_def) :: colour" in gen
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -193,16 +195,16 @@ def test_colour_trans(tmpdir, dist_mem):
gen = str(psy.gen).lower()
if dist_mem:
- assert ("integer(kind=i_def), allocatable :: "
- "last_halo_cell_all_colours(:,:)" in gen)
+ assert ("integer(kind=i_def), allocatable, dimension(:,:) :: "
+ "last_halo_cell_all_colours" in gen)
else:
- assert ("integer(kind=i_def), allocatable :: "
- "last_edge_cell_all_colours(:)" in gen)
+ assert ("integer(kind=i_def), allocatable, dimension(:) :: "
+ "last_edge_cell_all_colours" in gen)
# Check that we're calling the API to get the no. of colours
# and the generated loop bounds are correct
- output = (" ncolour = mesh%get_ncolours()\n"
- " cmap => mesh%get_colour_map()\n")
+ output = (" ncolour = mesh%get_ncolours()\n"
+ " cmap => mesh%get_colour_map()\n")
assert output in gen
assert "loop0_start = 1" in gen
@@ -213,15 +215,15 @@ def test_colour_trans(tmpdir, dist_mem):
assert ("last_halo_cell_all_colours = mesh%get_last_halo_cell_all_"
"colours()" in gen)
output = (
- " do colour = loop0_start, loop0_stop, 1\n"
- " do cell = loop1_start, last_halo_cell_all_colours(colour,"
+ " do colour = loop0_start, loop0_stop, 1\n"
+ " do cell = loop1_start, last_halo_cell_all_colours(colour,"
"1), 1\n")
else: # not dist_mem
assert ("last_edge_cell_all_colours = mesh%get_last_edge_cell_all_"
"colours()" in gen)
output = (
- " do colour = loop0_start, loop0_stop, 1\n"
- " do cell = loop1_start, "
+ " do colour = loop0_start, loop0_stop, 1\n"
+ " do cell = loop1_start, "
"last_edge_cell_all_colours(colour), 1\n")
assert output in gen
@@ -237,12 +239,11 @@ def test_colour_trans(tmpdir, dist_mem):
# Check that we get the right number of set_dirty halo calls in
# the correct location
dirty_str = (
- " end do\n"
- " !\n"
- " ! set halos dirty/clean for fields modified in the "
- "above loop\n"
- " !\n"
- " call f1_proxy%set_dirty()\n")
+ " enddo\n"
+ "\n"
+ " ! set halos dirty/clean for fields modified in the "
+ "above loop(s)\n"
+ " call f1_proxy%set_dirty()\n")
assert dirty_str in gen
assert gen.count("set_dirty()") == 1
@@ -273,7 +274,7 @@ def test_colour_trans_operator(tmpdir, dist_mem):
gen = str(psy.gen)
# Check that the first argument is a colour-map lookup
- assert "CALL testkern_operator_code(cmap(colour,cell), nlayers" in gen
+ assert "call testkern_operator_code(cmap(colour,cell), nlayers" in gen
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -306,13 +307,13 @@ def test_colour_trans_cma_operator(tmpdir, dist_mem):
lookup = "last_edge_cell_all_colours(colour)"
assert (
- f" DO colour = loop0_start, loop0_stop, 1\n"
- f" DO cell = loop1_start, {lookup}, 1\n"
- f" CALL columnwise_op_asm_field_kernel_code("
+ f" do colour = loop0_start, loop0_stop, 1\n"
+ f" do cell = loop1_start, {lookup}, 1\n"
+ f" call columnwise_op_asm_field_kernel_code("
f"cmap(colour,") in gen
assert (
- " CALL columnwise_op_asm_field_kernel_code(cmap(colour,"
+ " call columnwise_op_asm_field_kernel_code(cmap(colour,"
"cell), nlayers_afield, ncell_2d, afield_data, "
"lma_op1_proxy%ncell_3d, lma_op1_local_stencil, "
"cma_op1_cma_matrix(:,:,:), cma_op1_nrow, "
@@ -321,8 +322,8 @@ def test_colour_trans_cma_operator(tmpdir, dist_mem):
"ndf_aspc1_afield, undf_aspc1_afield, "
"map_aspc1_afield(:,cmap(colour,cell)), cbanded_map_aspc1_afield, "
"ndf_aspc2_lma_op1, cbanded_map_aspc2_lma_op1)\n"
- " END DO\n"
- " END DO\n") in gen
+ " enddo\n"
+ " enddo\n") in gen
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -347,7 +348,7 @@ def test_colour_trans_stencil(dist_mem, tmpdir):
# Check that we index the stencil dofmap appropriately
assert (
- " CALL testkern_stencil_code(nlayers_f1, f1_data, "
+ " call testkern_stencil_code(nlayers_f1, f1_data, "
"f2_data, f2_stencil_size(cmap(colour,cell)), "
"f2_stencil_dofmap(:,:,cmap(colour,cell)), f3_data, "
"f4_data, ndf_w1, undf_w1, map_w1(:,cmap(colour,cell)), "
@@ -383,7 +384,7 @@ def test_colour_trans_adjacent_face(dist_mem, tmpdir):
# Check that we index the adjacent face dofmap appropriately
assert (
- "CALL testkern_mesh_prop_code(nlayers_f1, a, f1_data, ndf_w1, "
+ "call testkern_mesh_prop_code(nlayers_f1, a, f1_data, ndf_w1, "
"undf_w1, map_w1(:,cmap(colour,cell)), nfaces_re_h, "
"adjacent_face(:,cmap(colour,cell))" in gen)
@@ -408,7 +409,7 @@ def test_colour_trans_continuous_write(dist_mem, tmpdir):
# enabled.
assert ("last_edge_cell_all_colours = "
"mesh%get_last_edge_cell_all_colours()" in gen)
- assert "DO cell = loop1_start, last_edge_cell_all_colours(colour)" in gen
+ assert "do cell = loop1_start, last_edge_cell_all_colours(colour)" in gen
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -428,17 +429,17 @@ def test_colour_continuous_writer_intergrid(tmpdir, dist_mem):
ctrans.apply(loop)
result = str(psy.gen).lower()
# Declarations.
- assert ("integer(kind=i_def), allocatable :: "
- "last_edge_cell_all_colours_field1(:)" in result)
+ assert ("integer(kind=i_def), allocatable, dimension(:) :: "
+ "last_edge_cell_all_colours_field1" in result)
# Initialisation.
assert ("last_edge_cell_all_colours_field1 = mesh_field1%"
"get_last_edge_cell_all_colours()" in result)
# Usage. Since there is no need to loop into the halo, the upper loop
# bound should be independent of whether or not DM is enabled.
upper_bound = "last_edge_cell_all_colours_field1(colour)"
- assert (f" do colour = loop0_start, loop0_stop, 1\n"
- f" do cell = loop1_start, {upper_bound}, 1\n"
- f" call restrict_w2_code(nlayers" in result)
+ assert (f" do colour = loop0_start, loop0_stop, 1\n"
+ f" do cell = loop1_start, {upper_bound}, 1\n"
+ f" call restrict_w2_code(nlayers" in result)
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -551,17 +552,17 @@ def test_omp_colour_trans(tmpdir, dist_mem):
code = str(psy.gen)
- assert (" ncolour = mesh%get_ncolours()\n"
- " cmap => mesh%get_colour_map()\n" in code)
+ assert (" ncolour = mesh%get_ncolours()\n"
+ " cmap => mesh%get_colour_map()\n" in code)
if dist_mem:
lookup = "last_halo_cell_all_colours(colour,1)"
else:
lookup = "last_edge_cell_all_colours(colour)"
output = (
- f" DO colour = loop0_start, loop0_stop, 1\n"
- f" !$omp parallel do default(shared), private(cell), "
+ f" do colour = loop0_start, loop0_stop, 1\n"
+ f" !$omp parallel do default(shared), private(cell), "
f"schedule(static)\n"
- f" DO cell = loop1_start, {lookup}, 1\n")
+ f" do cell = loop1_start, {lookup}, 1\n")
assert output in code
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -800,7 +801,7 @@ def test_omp_region_omp_do(dist_mem):
else:
assert "loop0_stop = m2_proxy%vspace%get_ncell()" in code
for idx, line in enumerate(code.split('\n')):
- if "DO cell = loop0_start, loop0_stop" in line:
+ if "do cell = loop0_start, loop0_stop" in line:
cell_loop_idx = idx
if "!$omp do" in line:
omp_do_idx = idx
@@ -808,7 +809,7 @@ def test_omp_region_omp_do(dist_mem):
omp_para_idx = idx
if "!$omp end do" in line:
omp_enddo_idx = idx
- if "END DO" in line:
+ if "enddo" in line:
cell_end_loop_idx = idx
assert (omp_do_idx - omp_para_idx) == 1
@@ -856,7 +857,7 @@ def test_omp_region_omp_do_rwdisc(monkeypatch, annexed, dist_mem):
assert "loop0_stop = mesh%get_last_edge_cell()" in code
else:
assert "loop0_stop = f1_proxy%vspace%get_ncell()" in code
- loop_str = "DO cell = loop0_start, loop0_stop"
+ loop_str = "do cell = loop0_start, loop0_stop"
for idx, line in enumerate(code.split('\n')):
if loop_str in line:
cell_loop_idx = idx
@@ -866,7 +867,7 @@ def test_omp_region_omp_do_rwdisc(monkeypatch, annexed, dist_mem):
omp_para_idx = idx
if "!$omp end do" in line:
omp_enddo_idx = idx
- if "END DO" in line:
+ if "enddo" in line:
cell_end_loop_idx = idx
assert (omp_do_idx - omp_para_idx) == 1
@@ -914,7 +915,7 @@ def test_multi_kernel_single_omp_region(dist_mem):
assert "loop0_stop = mesh%get_last_edge_cell()" in code
else:
assert "loop0_stop = m2_proxy%vspace%get_ncell()" in code
- loop_str = "DO cell = loop0_start, loop0_stop"
+ loop_str = "do cell = loop0_start, loop0_stop"
for idx, line in enumerate(code.split('\n')):
if (cell_loop_idx == -1) and (loop_str in line):
cell_loop_idx = idx
@@ -924,7 +925,7 @@ def test_multi_kernel_single_omp_region(dist_mem):
omp_end_do_idx = idx
if "!$omp parallel default(shared), private(cell)" in line:
omp_para_idx = idx
- if "END DO" in line:
+ if "enddo" in line:
end_do_idx = idx
if "!$omp end parallel" in line:
omp_end_para_idx = idx
@@ -1083,16 +1084,16 @@ def test_loop_fuse(dist_mem):
assert "loop0_stop = mesh%get_last_halo_cell(1)" in gen
else:
assert "loop0_stop = f1_proxy%vspace%get_ncell()" in gen
- loop_str = "DO cell = loop0_start, loop0_stop"
+ loop_str = "do cell = loop0_start, loop0_stop"
for idx, line in enumerate(gen.split('\n')):
if loop_str in line:
cell_loop_idx = idx
- if "CALL testkern_code" in line:
+ if "call testkern_code" in line:
if call_idx1 == -1:
call_idx1 = idx
else:
call_idx2 = idx
- if "END DO" in line:
+ if "enddo" in line:
end_loop_idx = idx
assert cell_loop_idx != -1
@@ -1147,19 +1148,19 @@ def test_loop_fuse_omp(dist_mem):
assert "loop0_stop = mesh%get_last_edge_cell()" in code
else:
assert "loop0_stop = f1_proxy%vspace%get_ncell()" in code
- loop_str = "DO cell = loop0_start, loop0_stop"
+ loop_str = "do cell = loop0_start, loop0_stop"
for idx, line in enumerate(code.split('\n')):
if loop_str in line:
cell_do_idx = idx
if ("!$omp parallel do default(shared), private(cell), "
"schedule(static)" in line):
omp_para_idx = idx
- if "CALL testkern_w2v_code" in line:
+ if "call testkern_w2v_code" in line:
if call1_idx == -1:
call1_idx = idx
else:
call2_idx = idx
- if "END DO" in line:
+ if "enddo" in line:
cell_enddo_idx = idx
if "!$omp end parallel do" in line:
omp_endpara_idx = idx
@@ -1213,18 +1214,18 @@ def test_loop_fuse_omp_rwdisc(tmpdir, monkeypatch, annexed, dist_mem):
assert "loop0_stop = mesh%get_last_edge_cell()" in code
else:
assert "loop0_stop = m2_proxy%vspace%get_ncell()" in code
- loop_str = "DO cell = loop0_start, loop0_stop"
+ loop_str = "do cell = loop0_start, loop0_stop"
for idx, line in enumerate(code.split('\n')):
if loop_str in line:
cell_do_idx = idx
if "!$omp parallel do default(shared), " +\
"private(cell), schedule(static)" in line:
omp_para_idx = idx
- if "CALL testkern_w3_code" in line:
+ if "call testkern_w3_code" in line:
call1_idx = idx
- if "CALL testkern_anyd_any_space_code" in line:
+ if "call testkern_anyd_any_space_code" in line:
call2_idx = idx
- if "END DO" in line:
+ if "enddo" in line:
cell_enddo_idx = idx
if "!$omp end parallel do" in line:
omp_endpara_idx = idx
@@ -1294,41 +1295,40 @@ def test_fuse_colour_loops(tmpdir, monkeypatch, annexed, dist_mem):
lookup = "last_edge_cell_all_colours(colour)"
output = (
- f" DO colour = loop0_start, loop0_stop, 1\n"
- f" !$omp parallel default(shared), private(cell)\n"
- f" !$omp do schedule(static)\n"
- f" DO cell = loop1_start, {lookup}, 1\n"
- f" CALL ru_code(nlayers_a, a_data, b_data, "
+ f" do colour = loop0_start, loop0_stop, 1\n"
+ f" !$omp parallel default(shared), private(cell)\n"
+ f" !$omp do schedule(static)\n"
+ f" do cell = loop1_start, {lookup}, 1\n"
+ f" call ru_code(nlayers_a, a_data, b_data, "
f"istp, rdt, d_data, e_1_data, e_2_data, "
f"e_3_data, ndf_w2, undf_w2, map_w2(:,cmap(colour,"
f"cell)), basis_w2_qr, diff_basis_w2_qr, ndf_w3, undf_w3, "
f"map_w3(:,cmap(colour,cell)), basis_w3_qr, ndf_w0, undf_w0, "
f"map_w0(:,cmap(colour,cell)), basis_w0_qr, diff_basis_w0_qr, "
f"np_xy_qr, np_z_qr, weights_xy_qr, weights_z_qr)\n"
- f" END DO\n"
- f" !$omp end do\n"
- f" !$omp do schedule(static)\n"
- f" DO cell = loop2_start, {lookup}, 1\n"
- f" CALL ru_code(nlayers_f, f_data, b_data, "
+ f" enddo\n"
+ f" !$omp end do\n"
+ f" !$omp do schedule(static)\n"
+ f" do cell = loop2_start, {lookup}, 1\n"
+ f" call ru_code(nlayers_f, f_data, b_data, "
f"istp, rdt, d_data, e_1_data, e_2_data, "
f"e_3_data, ndf_w2, undf_w2, map_w2(:,cmap(colour,"
f"cell)), basis_w2_qr, diff_basis_w2_qr, ndf_w3, undf_w3, "
f"map_w3(:,cmap(colour,cell)), basis_w3_qr, ndf_w0, undf_w0, "
f"map_w0(:,cmap(colour,cell)), basis_w0_qr, diff_basis_w0_qr, "
f"np_xy_qr, np_z_qr, weights_xy_qr, weights_z_qr)\n"
- f" END DO\n"
- f" !$omp end do\n"
- f" !$omp end parallel\n"
- f" END DO\n")
+ f" enddo\n"
+ f" !$omp end do\n"
+ f" !$omp end parallel\n"
+ f" enddo\n")
assert output in code
if dist_mem:
set_dirty_str = (
- " ! Set halos dirty/clean for fields modified in the "
- "above loop\n"
- " !\n"
- " CALL a_proxy%set_dirty()\n"
- " CALL f_proxy%set_dirty()\n")
+ " ! Set halos dirty/clean for fields modified in the "
+ "above loop(s)\n"
+ " call a_proxy%set_dirty()\n"
+ " call f_proxy%set_dirty()\n")
assert set_dirty_str in code
assert code.count("set_dirty()") == 2
@@ -1359,29 +1359,25 @@ def test_loop_fuse_cma(tmpdir, dist_mem):
{"same_space": True})
code = str(psy.gen)
- assert LFRicBuild(tmpdir).code_compiles(psy)
-
assert (
- " ! Look-up required column-banded dofmaps\n"
- " !\n"
- " cbanded_map_aspc1_afield => "
+ " ! Look-up required column-banded dofmaps\n"
+ " cbanded_map_aspc1_afield => "
"cma_op1_proxy%column_banded_dofmap_to\n"
- " cbanded_map_aspc2_lma_op1 => "
+ " cbanded_map_aspc2_lma_op1 => "
"cma_op1_proxy%column_banded_dofmap_from\n") in code
assert (
- " ! Look-up information for each CMA operator\n"
- " !\n"
- " cma_op1_cma_matrix => cma_op1_proxy%columnwise_matrix\n"
- " cma_op1_nrow = cma_op1_proxy%nrow\n"
- " cma_op1_ncol = cma_op1_proxy%ncol\n"
- " cma_op1_bandwidth = cma_op1_proxy%bandwidth\n"
- " cma_op1_alpha = cma_op1_proxy%alpha\n"
- " cma_op1_beta = cma_op1_proxy%beta\n"
- " cma_op1_gamma_m = cma_op1_proxy%gamma_m\n"
- " cma_op1_gamma_p = cma_op1_proxy%gamma_p\n"
+ " ! Look-up information for each CMA operator\n"
+ " cma_op1_cma_matrix => cma_op1_proxy%columnwise_matrix\n"
+ " cma_op1_nrow = cma_op1_proxy%nrow\n"
+ " cma_op1_ncol = cma_op1_proxy%ncol\n"
+ " cma_op1_bandwidth = cma_op1_proxy%bandwidth\n"
+ " cma_op1_alpha = cma_op1_proxy%alpha\n"
+ " cma_op1_beta = cma_op1_proxy%beta\n"
+ " cma_op1_gamma_m = cma_op1_proxy%gamma_m\n"
+ " cma_op1_gamma_p = cma_op1_proxy%gamma_p\n"
) in code
assert (
- "CALL columnwise_op_asm_field_kernel_code(cell, nlayers_afield, "
+ "call columnwise_op_asm_field_kernel_code(cell, nlayers_afield, "
"ncell_2d, afield_data, lma_op1_proxy%ncell_3d, "
"lma_op1_local_stencil, cma_op1_cma_matrix(:,:,:), cma_op1_nrow, "
"cma_op1_ncol, cma_op1_bandwidth, cma_op1_alpha, cma_op1_beta, "
@@ -1389,11 +1385,12 @@ def test_loop_fuse_cma(tmpdir, dist_mem):
"undf_aspc1_afield, map_aspc1_afield(:,cell), "
"cbanded_map_aspc1_afield, ndf_aspc2_lma_op1, "
"cbanded_map_aspc2_lma_op1)\n"
- " CALL testkern_two_real_scalars_code(nlayers_afield, scalar1, "
+ " call testkern_two_real_scalars_code(nlayers_afield, scalar1, "
"afield_data, bfield_data, cfield_data, "
"dfield_data, scalar2, ndf_w1, undf_w1, map_w1(:,cell), "
"ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, undf_w3, "
"map_w3(:,cell))\n") in code
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_omp_par_and_halo_exchange_error():
@@ -1449,31 +1446,30 @@ def test_builtin_single_omp_pdo(tmpdir, monkeypatch, annexed, dist_mem):
assert ("loop0_stop = f2_proxy%vspace%get_last_dof_owned()"
in result)
code = (
- " !$omp parallel do default(shared), private(df), "
+ " !$omp parallel do default(shared), private(df), "
"schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: setval_X (set a real-valued field "
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: setval_X (set a real-valued field "
"equal to another such field)\n"
- " f2_data(df) = f1_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
+ " f2_data(df) = f1_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
"above loop(s)\n"
- " !\n"
- " CALL f2_proxy%set_dirty()")
+ " call f2_proxy%set_dirty()")
assert code in result
else: # not distmem. annexed can be True or False
assert "loop0_stop = undf_aspc1_f2" in result
assert (
- " !$omp parallel do default(shared), private(df), "
+ " !$omp parallel do default(shared), private(df), "
"schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: setval_X (set a real-valued field "
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: setval_X (set a real-valued field "
"equal to another such field)\n"
- " f2_data(df) = f1_data(df)\n"
- " END DO\n"
- " !$omp end parallel do") in result
+ " f2_data(df) = f1_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do") in result
def test_builtin_multiple_omp_pdo(tmpdir, monkeypatch, annexed, dist_mem):
@@ -1493,8 +1489,6 @@ def test_builtin_multiple_omp_pdo(tmpdir, monkeypatch, annexed, dist_mem):
otrans.apply(child)
result = str(psy.gen)
- assert LFRicBuild(tmpdir).code_compiles(psy)
-
if dist_mem: # annexed can be True or False
for idx in range(1, 4):
if annexed:
@@ -1505,80 +1499,72 @@ def test_builtin_multiple_omp_pdo(tmpdir, monkeypatch, annexed, dist_mem):
f"get_last_dof_{name}()" in result)
code = (
- " !$omp parallel do default(shared), private(df), "
+ " !$omp parallel do default(shared), private(df), "
"schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f1_data(df) = fred\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
+ " f1_data(df) = fred\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
"above loop(s)\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " !\n"
- " ! End of set dirty/clean section for above loop(s)\n"
- " !\n"
- " !$omp parallel do default(shared), private(df), "
+ " call f1_proxy%set_dirty()\n"
+ " !$omp parallel do default(shared), private(df), "
"schedule(static)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f2_data(df) = 3.0_r_def\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
+ " f2_data(df) = 3.0_r_def\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
"above loop(s)\n"
- " !\n"
- " CALL f2_proxy%set_dirty()\n"
- " !\n"
- " ! End of set dirty/clean section for above loop(s)\n"
- " !\n"
- " !$omp parallel do default(shared), private(df), "
+ " call f2_proxy%set_dirty()\n"
+ " !$omp parallel do default(shared), private(df), "
"schedule(static)\n"
- " DO df = loop2_start, loop2_stop, 1\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " do df = loop2_start, loop2_stop, 1\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f3_data(df) = ginger\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
+ " f3_data(df) = ginger\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
"above loop(s)\n"
- " !\n"
- " CALL f3_proxy%set_dirty()")
+ " call f3_proxy%set_dirty()")
assert code in result
else: # not distmem. annexed can be True or False
for idx in range(1, 4):
assert f"loop{idx-1}_stop = undf_aspc1_f{idx}" in result
assert (
- " !$omp parallel do default(shared), private(df), "
+ " !$omp parallel do default(shared), private(df), "
"schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f1_data(df) = fred\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " !$omp parallel do default(shared), private(df), "
+ " f1_data(df) = fred\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ " !$omp parallel do default(shared), private(df), "
"schedule(static)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f2_data(df) = 3.0_r_def\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " !$omp parallel do default(shared), private(df), "
+ " f2_data(df) = 3.0_r_def\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ " !$omp parallel do default(shared), private(df), "
"schedule(static)\n"
- " DO df = loop2_start, loop2_stop, 1\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " do df = loop2_start, loop2_stop, 1\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f3_data(df) = ginger\n"
- " END DO\n"
- " !$omp end parallel do\n") in result
+ " f3_data(df) = ginger\n"
+ " enddo\n"
+ " !$omp end parallel do\n") in result
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_builtin_loop_fuse_pdo(tmpdir, monkeypatch, annexed, dist_mem):
@@ -1603,8 +1589,6 @@ def test_builtin_loop_fuse_pdo(tmpdir, monkeypatch, annexed, dist_mem):
otrans.apply(schedule.children[0])
result = str(psy.gen)
- assert LFRicBuild(tmpdir).code_compiles(psy)
-
if dist_mem: # annexed can be True or False
if annexed:
assert ("loop0_stop = f1_proxy%vspace%get_last_dof_annexed()"
@@ -1613,45 +1597,49 @@ def test_builtin_loop_fuse_pdo(tmpdir, monkeypatch, annexed, dist_mem):
assert ("loop0_stop = f1_proxy%vspace%get_last_dof_owned()"
in result)
code = (
- " !$omp parallel do default(shared), private(df), "
+ " !$omp parallel do default(shared), private(df), "
"schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f1_data(df) = fred\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " f1_data(df) = fred\n"
+ "\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f2_data(df) = 3.0_r_def\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " f2_data(df) = 3.0_r_def\n"
+ "\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f3_data(df) = ginger\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
+ " f3_data(df) = ginger\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
"above loop(s)\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " CALL f2_proxy%set_dirty()\n"
- " CALL f3_proxy%set_dirty()")
+ " call f1_proxy%set_dirty()\n"
+ " call f2_proxy%set_dirty()\n"
+ " call f3_proxy%set_dirty()")
assert code in result
else: # distmem is False. annexed can be True or False
assert "loop0_stop = undf_aspc1_f1" in result
assert (
- " !$omp parallel do default(shared), private(df), "
+ " !$omp parallel do default(shared), private(df), "
"schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f1_data(df) = fred\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " f1_data(df) = fred\n"
+ "\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f2_data(df) = 3.0_r_def\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " f2_data(df) = 3.0_r_def\n"
+ "\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f3_data(df) = ginger\n"
- " END DO\n"
- " !$omp end parallel do") in result
+ " f3_data(df) = ginger\n"
+ " enddo\n"
+ " !$omp end parallel do") in result
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_builtin_single_omp_do(tmpdir, monkeypatch, annexed, dist_mem):
@@ -1687,33 +1675,31 @@ def test_builtin_single_omp_do(tmpdir, monkeypatch, annexed, dist_mem):
assert ("loop0_stop = f2_proxy%vspace%get_last_dof_owned()"
in result)
assert (
- " !$omp parallel default(shared), private(df)\n"
- " !$omp do schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: setval_X (set a real-valued field equal to "
+ " !$omp parallel default(shared), private(df)\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: setval_X (set a real-valued field equal to "
"another such field)\n"
- " f2_data(df) = f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
+ " f2_data(df) = f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
"above loop(s)\n"
- " !\n"
- " CALL f2_proxy%set_dirty()\n"
- " !\n") in result
+ " call f2_proxy%set_dirty()\n") in result
else: # distmem is False. annexed can be True or False
assert "loop0_stop = undf_aspc1_f2" in result
assert (
- " !$omp parallel default(shared), private(df)\n"
- " !$omp do schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: setval_X (set a real-valued field equal to "
+ " !$omp parallel default(shared), private(df)\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: setval_X (set a real-valued field equal to "
"another such field)\n"
- " f2_data(df) = f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n") in result
+ " f2_data(df) = f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n") in result
def test_builtin_multiple_omp_do(tmpdir, monkeypatch, annexed, dist_mem):
@@ -1757,65 +1743,65 @@ def test_builtin_multiple_omp_do(tmpdir, monkeypatch, annexed, dist_mem):
assert ("loop2_stop = f3_proxy%vspace%get_last_dof_owned()"
in result)
code = (
- " !$omp parallel default(shared), private(df)\n"
- " !$omp do schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " !$omp parallel default(shared), private(df)\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f1_data(df) = fred\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp do schedule(static)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " f1_data(df) = fred\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f2_data(df) = 3.0_r_def\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp do schedule(static)\n"
- " DO df = loop2_start, loop2_stop, 1\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " f2_data(df) = 3.0_r_def\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop2_start, loop2_stop, 1\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f3_data(df) = ginger\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
+ " f3_data(df) = ginger\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
"above loop(s)\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " CALL f2_proxy%set_dirty()\n"
- " CALL f3_proxy%set_dirty()\n")
+ " call f3_proxy%set_dirty()\n"
+ " call f2_proxy%set_dirty()\n"
+ " call f1_proxy%set_dirty()\n"
+ )
assert code in result
else: # distmem is False. annexed can be True or False
assert "loop0_stop = undf_aspc1_f1" in result
assert "loop1_stop = undf_aspc1_f2" in result
assert "loop2_stop = undf_aspc1_f3" in result
assert (
- " !$omp parallel default(shared), private(df)\n"
- " !$omp do schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " !$omp parallel default(shared), private(df)\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f1_data(df) = fred\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp do schedule(static)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " f1_data(df) = fred\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f2_data(df) = 3.0_r_def\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp do schedule(static)\n"
- " DO df = loop2_start, loop2_stop, 1\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " f2_data(df) = 3.0_r_def\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop2_start, loop2_stop, 1\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f3_data(df) = ginger\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel") in result
+ " f3_data(df) = ginger\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel") in result
def test_builtin_loop_fuse_do(tmpdir, monkeypatch, annexed, dist_mem):
@@ -1854,48 +1840,50 @@ def test_builtin_loop_fuse_do(tmpdir, monkeypatch, annexed, dist_mem):
assert ("loop0_stop = f1_proxy%vspace%get_last_dof_owned()"
in result)
code = (
- " !$omp parallel default(shared), private(df)\n"
- " !$omp do schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " !$omp parallel default(shared), private(df)\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f1_data(df) = fred\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " f1_data(df) = fred\n"
+ "\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f2_data(df) = 3.0_r_def\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " f2_data(df) = 3.0_r_def\n"
+ "\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f3_data(df) = ginger\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
+ " f3_data(df) = ginger\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
"above loop(s)\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " CALL f2_proxy%set_dirty()\n"
- " CALL f3_proxy%set_dirty()\n"
- " !\n")
+ " call f1_proxy%set_dirty()\n"
+ " call f2_proxy%set_dirty()\n"
+ " call f3_proxy%set_dirty()\n")
assert code in result
else: # distmem is False. annexed can be True or False
assert "loop0_stop = undf_aspc1_f1" in result
assert (
- " !$omp parallel default(shared), private(df)\n"
- " !$omp do schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " !$omp parallel default(shared), private(df)\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f1_data(df) = fred\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " f1_data(df) = fred\n"
+ "\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f2_data(df) = 3.0_r_def\n"
- " ! Built-in: setval_c (set a real-valued field to "
+ " f2_data(df) = 3.0_r_def\n"
+ "\n"
+ " ! Built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f3_data(df) = ginger\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel") in result
+ " f3_data(df) = ginger\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel") in result
def test_reduction_real_pdo(tmpdir, dist_mem):
@@ -1916,26 +1904,28 @@ def test_reduction_real_pdo(tmpdir, dist_mem):
if dist_mem:
assert "loop0_stop = f1_proxy%vspace%get_last_dof_owned()" in code
assert (
- " !$omp parallel do default(shared), private(df), "
- "schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " asum = asum + f1_data(df) * f2_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " global_sum%value = asum\n"
- " asum = global_sum%get_sum()\n") in code
+ " !$omp parallel do reduction(+:asum) default(shared), "
+ "private(df), schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " asum = asum + f1_data(df) * f2_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = asum\n"
+ " asum = global_sum%get_sum()\n") in code
else:
assert "loop0_stop = undf_aspc1_f1" in code
assert (
- " !$omp parallel do default(shared), private(df), "
- "schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " asum = asum + f1_data(df) * f2_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n") in code
+ " !$omp parallel do reduction(+:asum) default(shared), "
+ "private(df), schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " asum = asum + f1_data(df) * f2_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n") in code
def test_reduction_real_do(tmpdir, dist_mem):
@@ -1959,27 +1949,29 @@ def test_reduction_real_do(tmpdir, dist_mem):
if dist_mem:
assert "loop0_stop = f1_proxy%vspace%get_last_dof_owned()\n" in code
assert (
- " !$omp parallel default(shared), private(df)\n"
- " !$omp do schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " asum = asum + f1_data(df) * f2_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " global_sum%value = asum\n"
- " asum = global_sum%get_sum()\n") in code
+ " !$omp parallel default(shared), private(df)\n"
+ " !$omp do schedule(static), reduction(+:asum)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " asum = asum + f1_data(df) * f2_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = asum\n"
+ " asum = global_sum%get_sum()\n") in code
else:
assert "loop0_stop = undf_aspc1_f1\n" in code
assert (
- " !$omp parallel default(shared), private(df)\n"
- " !$omp do schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " asum = asum + f1_data(df) * f2_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n") in code
+ " !$omp parallel default(shared), private(df)\n"
+ " !$omp do schedule(static), reduction(+:asum)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " asum = asum + f1_data(df) * f2_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n") in code
def test_multi_reduction_real_pdo(tmpdir, dist_mem):
@@ -2003,58 +1995,59 @@ def test_multi_reduction_real_pdo(tmpdir, dist_mem):
assert "loop0_stop = f1_proxy%vspace%get_last_dof_owned()\n" in code
assert "loop1_stop = f1_proxy%vspace%get_last_dof_owned()\n" in code
assert (
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel do default(shared), private(df), "
- "schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " asum = asum + f1_data(df) * f2_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " global_sum%value = asum\n"
- " asum = global_sum%get_sum()\n"
- " !\n"
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel do default(shared), private(df), "
- "schedule(static), reduction(+:asum)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " asum = asum + f1_data(df) * f2_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " global_sum%value = asum\n"
- " asum = global_sum%get_sum()\n") in code
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " !$omp parallel do reduction(+:asum) default(shared), "
+ "private(df), schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " asum = asum + f1_data(df) * f2_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = asum\n"
+ " asum = global_sum%get_sum()\n"
+ "\n"
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ " !$omp parallel do reduction(+:asum) default(shared), "
+ "private(df), schedule(static)\n"
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " asum = asum + f1_data(df) * f2_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = asum\n"
+ " asum = global_sum%get_sum()\n") in code
else:
assert "loop0_stop = undf_aspc1_f1\n" in code
assert "loop1_stop = undf_aspc1_f1\n" in code
assert (
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel do default(shared), private(df), "
- "schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " asum = asum + f1_data(df) * f2_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " !\n"
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel do default(shared), private(df), "
- "schedule(static), reduction(+:asum)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " asum = asum + f1_data(df) * f2_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n") in code
+ " asum = 0.0\n"
+ "\n"
+ " ! Call kernels\n"
+ " !$omp parallel do reduction(+:asum) default(shared), "
+ "private(df), schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " asum = asum + f1_data(df) * f2_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ "\n"
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ " !$omp parallel do reduction(+:asum) default(shared), "
+ "private(df), schedule(static)\n"
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " asum = asum + f1_data(df) * f2_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n") in code
def test_reduction_after_normal_real_do(tmpdir, monkeypatch, annexed,
@@ -2095,57 +2088,56 @@ def test_reduction_after_normal_real_do(tmpdir, monkeypatch, annexed,
assert ("loop1_stop = f1_proxy%vspace%get_last_dof_owned()"
in result)
expected_output = (
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df)\n"
- " !$omp do schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
- " f1_data(df) = bvalue * f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp do schedule(static), reduction(+:asum)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: sum_X (sum a real-valued field)\n"
- " asum = asum + f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " !$omp parallel default(shared), private(df)\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
+ " f1_data(df) = bvalue * f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp do schedule(static), reduction(+:asum)\n"
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: sum_X (sum a real-valued field)\n"
+ " asum = asum + f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
"above loop(s)\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " !\n"
- " ! End of set dirty/clean section for above loop(s)\n"
- " !\n"
- " global_sum%value = asum\n"
- " asum = global_sum%get_sum()")
+ " call f1_proxy%set_dirty()\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = asum\n"
+ " asum = global_sum%get_sum()")
+ assert expected_output in result
else: # not distmem. annexed can be True or False
assert "loop0_stop = undf_aspc1_f1" in result
assert "loop1_stop = undf_aspc1_f1" in result
expected_output = (
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df)\n"
- " !$omp do schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
- " f1_data(df) = bvalue * f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp do schedule(static), reduction(+:asum)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: sum_X (sum a real-valued field)\n"
- " asum = asum + f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel")
- assert expected_output in result
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ "\n"
+ " ! Call kernels\n"
+ " !$omp parallel default(shared), private(df)\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
+ " f1_data(df) = bvalue * f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp do schedule(static), reduction(+:asum)\n"
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: sum_X (sum a real-valued field)\n"
+ " asum = asum + f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel")
+ assert expected_output in result
def test_reprod_red_after_normal_real_do(tmpdir, monkeypatch, annexed,
@@ -2183,76 +2175,73 @@ def test_reprod_red_after_normal_real_do(tmpdir, monkeypatch, annexed,
in result)
assert "loop1_stop = f1_proxy%vspace%get_last_dof_owned()" in result
expected_output = (
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " ALLOCATE (l_asum(8,nthreads))\n"
- " l_asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df,th_idx)\n"
- " th_idx = omp_get_thread_num()+1\n"
- " !$omp do schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
- " f1_data(df) = bvalue * f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp do schedule(static)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: sum_X (sum a real-valued field)\n"
- " l_asum(1,th_idx) = l_asum(1,th_idx) + f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " !\n"
- " ! sum the partial results sequentially\n"
- " !\n"
- " DO th_idx=1,nthreads\n"
- " asum = asum+l_asum(1,th_idx)\n"
- " END DO\n"
- " DEALLOCATE (l_asum)\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ " ALLOCATE(l_asum(8,nthreads))\n"
+ " l_asum = 0.0\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " !$omp parallel default(shared), private(df,th_idx)\n"
+ " th_idx = omp_get_thread_num() + 1\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
+ " f1_data(df) = bvalue * f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: sum_X (sum a real-valued field)\n"
+ " l_asum(1,th_idx) = l_asum(1,th_idx) + f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
"above loop(s)\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " !\n"
- " ! End of set dirty/clean section for above loop(s)\n"
- " !\n"
- " global_sum%value = asum\n"
- " asum = global_sum%get_sum()")
+ " call f1_proxy%set_dirty()\n"
+ "\n"
+ " ! sum the partial results sequentially\n"
+ " do th_idx = 1, nthreads, 1\n"
+ " asum = asum + l_asum(1,th_idx)\n"
+ " enddo\n"
+ " DEALLOCATE(l_asum)\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = asum\n"
+ " asum = global_sum%get_sum()")
+ assert expected_output in result
else: # not distmem. annexed can be True or False
assert "loop0_stop = undf_aspc1_f1" in result
assert "loop1_stop = undf_aspc1_f1" in result
expected_output = (
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " ALLOCATE (l_asum(8,nthreads))\n"
- " l_asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df,th_idx)\n"
- " th_idx = omp_get_thread_num()+1\n"
- " !$omp do schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
- " f1_data(df) = bvalue * f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp do schedule(static)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: sum_X (sum a real-valued field)\n"
- " l_asum(1,th_idx) = l_asum(1,th_idx) + f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " !\n"
- " ! sum the partial results sequentially\n"
- " !\n"
- " DO th_idx=1,nthreads\n"
- " asum = asum+l_asum(1,th_idx)\n"
- " END DO\n"
- " DEALLOCATE (l_asum)\n")
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ " ALLOCATE(l_asum(8,nthreads))\n"
+ " l_asum = 0.0\n"
+ "\n"
+ " ! Call kernels\n"
+ " !$omp parallel default(shared), private(df,th_idx)\n"
+ " th_idx = omp_get_thread_num() + 1\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
+ " f1_data(df) = bvalue * f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: sum_X (sum a real-valued field)\n"
+ " l_asum(1,th_idx) = l_asum(1,th_idx) + f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! sum the partial results sequentially\n"
+ " do th_idx = 1, nthreads, 1\n"
+ " asum = asum + l_asum(1,th_idx)\n"
+ " enddo\n"
+ " DEALLOCATE(l_asum)\n")
assert expected_output in result
@@ -2287,52 +2276,56 @@ def test_two_reductions_real_do(tmpdir, dist_mem):
assert "loop0_stop = f1_proxy%vspace%get_last_dof_owned()\n" in result
assert "loop1_stop = f1_proxy%vspace%get_last_dof_owned()\n" in result
expected_output = (
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " bsum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df)\n"
- " !$omp do schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " asum = asum + f1_data(df) * f2_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp do schedule(static), reduction(+:bsum)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: sum_X (sum a real-valued field)\n"
- " bsum = bsum + f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " global_sum%value = asum\n"
- " asum = global_sum%get_sum()\n"
- " global_sum%value = bsum\n"
- " bsum = global_sum%get_sum()")
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ " bsum = 0.0\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " !$omp parallel default(shared), private(df)\n"
+ " !$omp do schedule(static), reduction(+:asum)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " asum = asum + f1_data(df) * f2_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp do schedule(static), reduction(+:bsum)\n"
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: sum_X (sum a real-valued field)\n"
+ " bsum = bsum + f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = asum\n"
+ " asum = global_sum%get_sum()\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = bsum\n"
+ " bsum = global_sum%get_sum()")
else:
assert "loop0_stop = undf_aspc1_f1" in result
assert "loop1_stop = undf_aspc1_f1" in result
expected_output = (
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " bsum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df)\n"
- " !$omp do schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " asum = asum + f1_data(df) * f2_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp do schedule(static), reduction(+:bsum)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: sum_X (sum a real-valued field)\n"
- " bsum = bsum + f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel")
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ " bsum = 0.0\n"
+ "\n"
+ " ! Call kernels\n"
+ " !$omp parallel default(shared), private(df)\n"
+ " !$omp do schedule(static), reduction(+:asum)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " asum = asum + f1_data(df) * f2_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp do schedule(static), reduction(+:bsum)\n"
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: sum_X (sum a real-valued field)\n"
+ " bsum = bsum + f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel")
assert expected_output in result
@@ -2367,86 +2360,92 @@ def test_two_reprod_reductions_real_do(tmpdir, dist_mem):
assert "loop0_stop = f1_proxy%vspace%get_last_dof_owned()" in result
assert "loop1_stop = f1_proxy%vspace%get_last_dof_owned()" in result
expected_output = (
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " ALLOCATE (l_asum(8,nthreads))\n"
- " l_asum = 0.0_r_def\n"
- " bsum = 0.0_r_def\n"
- " ALLOCATE (l_bsum(8,nthreads))\n"
- " l_bsum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df,th_idx)\n"
- " th_idx = omp_get_thread_num()+1\n"
- " !$omp do schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " l_asum(1,th_idx) = l_asum(1,th_idx) + "
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ " ALLOCATE(l_asum(8,nthreads))\n"
+ " l_asum = 0.0\n"
+ " bsum = 0.0\n"
+ " ALLOCATE(l_bsum(8,nthreads))\n"
+ " l_bsum = 0.0\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " !$omp parallel default(shared), private(df,th_idx)\n"
+ " th_idx = omp_get_thread_num() + 1\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " l_asum(1,th_idx) = l_asum(1,th_idx) + "
"f1_data(df) * f2_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp do schedule(static)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: sum_X (sum a real-valued field)\n"
- " l_bsum(1,th_idx) = l_bsum(1,th_idx) + f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " !\n"
- " ! sum the partial results sequentially\n"
- " !\n"
- " DO th_idx=1,nthreads\n"
- " asum = asum+l_asum(1,th_idx)\n"
- " END DO\n"
- " DEALLOCATE (l_asum)\n"
- " DO th_idx=1,nthreads\n"
- " bsum = bsum+l_bsum(1,th_idx)\n"
- " END DO\n"
- " DEALLOCATE (l_bsum)\n"
- " global_sum%value = asum\n"
- " asum = global_sum%get_sum()\n"
- " global_sum%value = bsum\n"
- " bsum = global_sum%get_sum()")
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: sum_X (sum a real-valued field)\n"
+ " l_bsum(1,th_idx) = l_bsum(1,th_idx) + f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! sum the partial results sequentially\n"
+ " do th_idx = 1, nthreads, 1\n"
+ " asum = asum + l_asum(1,th_idx)\n"
+ " enddo\n"
+ " DEALLOCATE(l_asum)\n"
+ "\n"
+ " ! sum the partial results sequentially\n"
+ " do th_idx = 1, nthreads, 1\n"
+ " bsum = bsum + l_bsum(1,th_idx)\n"
+ " enddo\n"
+ " DEALLOCATE(l_bsum)\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = asum\n"
+ " asum = global_sum%get_sum()\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = bsum\n"
+ " bsum = global_sum%get_sum()")
else:
assert "loop0_stop = undf_aspc1_f1" in result
assert "loop1_stop = undf_aspc1_f1" in result
expected_output = (
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " ALLOCATE (l_asum(8,nthreads))\n"
- " l_asum = 0.0_r_def\n"
- " bsum = 0.0_r_def\n"
- " ALLOCATE (l_bsum(8,nthreads))\n"
- " l_bsum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df,th_idx)\n"
- " th_idx = omp_get_thread_num()+1\n"
- " !$omp do schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " l_asum(1,th_idx) = l_asum(1,th_idx) + "
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ " ALLOCATE(l_asum(8,nthreads))\n"
+ " l_asum = 0.0\n"
+ " bsum = 0.0\n"
+ " ALLOCATE(l_bsum(8,nthreads))\n"
+ " l_bsum = 0.0\n"
+ "\n"
+ " ! Call kernels\n"
+ " !$omp parallel default(shared), private(df,th_idx)\n"
+ " th_idx = omp_get_thread_num() + 1\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " l_asum(1,th_idx) = l_asum(1,th_idx) + "
"f1_data(df) * f2_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp do schedule(static)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: sum_X (sum a real-valued field)\n"
- " l_bsum(1,th_idx) = l_bsum(1,th_idx) + f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " !\n"
- " ! sum the partial results sequentially\n"
- " !\n"
- " DO th_idx=1,nthreads\n"
- " asum = asum+l_asum(1,th_idx)\n"
- " END DO\n"
- " DEALLOCATE (l_asum)\n"
- " DO th_idx=1,nthreads\n"
- " bsum = bsum+l_bsum(1,th_idx)\n"
- " END DO\n"
- " DEALLOCATE (l_bsum)")
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: sum_X (sum a real-valued field)\n"
+ " l_bsum(1,th_idx) = l_bsum(1,th_idx) + f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! sum the partial results sequentially\n"
+ " do th_idx = 1, nthreads, 1\n"
+ " asum = asum + l_asum(1,th_idx)\n"
+ " enddo\n"
+ " DEALLOCATE(l_asum)\n"
+ "\n"
+ " ! sum the partial results sequentially\n"
+ " do th_idx = 1, nthreads, 1\n"
+ " bsum = bsum + l_bsum(1,th_idx)\n"
+ " enddo\n"
+ " DEALLOCATE(l_bsum)")
assert expected_output in result
@@ -2473,7 +2472,7 @@ def test_multi_reduction_same_name_real_do():
# in general it could be valid to move the global sum
del schedule.children[1]
rtrans.apply(schedule.children[0:2])
- with pytest.raises(GenerationError) as excinfo:
+ with pytest.raises(VisitorError) as excinfo:
_ = str(psy.gen)
assert (
"Reduction variables can only be used once in an "
@@ -2527,60 +2526,60 @@ def test_multi_different_reduction_real_pdo(tmpdir, dist_mem):
assert "loop0_stop = f1_proxy%vspace%get_last_dof_owned()" in code
assert "loop1_stop = f1_proxy%vspace%get_last_dof_owned()" in code
assert (
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel do default(shared), private(df), "
- "schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " asum = asum + f1_data(df) * f2_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " global_sum%value = asum\n"
- " asum = global_sum%get_sum()\n"
- " !\n"
- " ! Zero summation variables\n"
- " !\n"
- " bsum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel do default(shared), private(df), "
- "schedule(static), reduction(+:bsum)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: sum_X (sum a real-valued field)\n"
- " bsum = bsum + f1_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " global_sum%value = bsum\n"
- " bsum = global_sum%get_sum()\n") in code
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " !$omp parallel do reduction(+:asum) default(shared), "
+ "private(df), schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " asum = asum + f1_data(df) * f2_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = asum\n"
+ " asum = global_sum%get_sum()\n"
+ "\n"
+ " ! Zero summation variables\n"
+ " bsum = 0.0\n"
+ " !$omp parallel do reduction(+:bsum) default(shared), "
+ "private(df), schedule(static)\n"
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: sum_X (sum a real-valued field)\n"
+ " bsum = bsum + f1_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = bsum\n"
+ " bsum = global_sum%get_sum()\n") in code
else:
assert "loop0_stop = undf_aspc1_f1" in code
assert "loop1_stop = undf_aspc1_f1" in code
assert (
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel do default(shared), private(df), "
- "schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " asum = asum + f1_data(df) * f2_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " !\n"
- " ! Zero summation variables\n"
- " !\n"
- " bsum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel do default(shared), private(df), "
- "schedule(static), reduction(+:bsum)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: sum_X (sum a real-valued field)\n"
- " bsum = bsum + f1_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n") in code
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ "\n"
+ " ! Call kernels\n"
+ " !$omp parallel do reduction(+:asum) default(shared), "
+ "private(df), schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " asum = asum + f1_data(df) * f2_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ "\n"
+ " ! Zero summation variables\n"
+ " bsum = 0.0\n"
+ " !$omp parallel do reduction(+:bsum) default(shared), "
+ "private(df), schedule(static)\n"
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: sum_X (sum a real-valued field)\n"
+ " bsum = bsum + f1_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n") in code
def test_multi_builtins_red_then_pdo(tmpdir, monkeypatch, annexed, dist_mem):
@@ -2613,54 +2612,55 @@ def test_multi_builtins_red_then_pdo(tmpdir, monkeypatch, annexed, dist_mem):
assert ("loop1_stop = f1_proxy%vspace%get_last_dof_owned()"
in result)
code = (
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel do default(shared), private(df), "
- "schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " asum = asum + f1_data(df) * f2_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " global_sum%value = asum\n"
- " asum = global_sum%get_sum()\n"
- " !$omp parallel do default(shared), private(df), "
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " !$omp parallel do reduction(+:asum) default(shared), "
+ "private(df), schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " asum = asum + f1_data(df) * f2_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = asum\n"
+ " asum = global_sum%get_sum()\n"
+ " !$omp parallel do default(shared), private(df), "
"schedule(static)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
- " f1_data(df) = bsum * f1_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
+ " f1_data(df) = bsum * f1_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
"above loop(s)\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n")
+ " call f1_proxy%set_dirty()\n")
assert code in result
else: # not distmem. annexed can be True or False
assert "loop0_stop = undf_aspc1_f1" in result
assert "loop1_stop = undf_aspc1_f1" in result
assert (
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel do default(shared), private(df), "
- "schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " asum = asum + f1_data(df) * f2_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " !$omp parallel do default(shared), private(df), "
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ "\n"
+ " ! Call kernels\n"
+ " !$omp parallel do reduction(+:asum) default(shared), "
+ "private(df), schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " asum = asum + f1_data(df) * f2_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ " !$omp parallel do default(shared), private(df), "
"schedule(static)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
- " f1_data(df) = bsum * f1_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n") in result
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
+ " f1_data(df) = bsum * f1_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n") in result
def test_multi_builtins_red_then_do(tmpdir, monkeypatch, annexed, dist_mem):
@@ -2699,34 +2699,32 @@ def test_multi_builtins_red_then_do(tmpdir, monkeypatch, annexed, dist_mem):
assert ("loop1_stop = f1_proxy%vspace%get_last_dof_owned()"
in result)
code = (
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df)\n"
- " !$omp do schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " asum = asum + f1_data(df) * f2_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp do schedule(static)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
- " f1_data(df) = bsum * f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " !$omp parallel default(shared), private(df)\n"
+ " !$omp do schedule(static), reduction(+:asum)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " asum = asum + f1_data(df) * f2_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
+ " f1_data(df) = bsum * f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
"above loop(s)\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " !\n"
- " ! End of set dirty/clean section for above loop(s)\n"
- " !\n"
- " global_sum%value = asum\n"
- " asum = global_sum%get_sum()\n")
+ " call f1_proxy%set_dirty()\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = asum\n"
+ " asum = global_sum%get_sum()\n")
if not annexed:
code = code.replace("dof_annexed", "dof_owned")
assert code in result
@@ -2734,24 +2732,24 @@ def test_multi_builtins_red_then_do(tmpdir, monkeypatch, annexed, dist_mem):
assert "loop0_stop = undf_aspc1_f1" in result
assert "loop1_stop = undf_aspc1_f1" in result
assert (
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df)\n"
- " !$omp do schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " asum = asum + f1_data(df) * f2_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp do schedule(static)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
- " f1_data(df) = bsum * f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n") in result
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ "\n"
+ " ! Call kernels\n"
+ " !$omp parallel default(shared), private(df)\n"
+ " !$omp do schedule(static), reduction(+:asum)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " asum = asum + f1_data(df) * f2_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
+ " f1_data(df) = bsum * f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n") in result
def test_multi_builtins_red_then_fuse_pdo(tmpdir, monkeypatch, annexed,
@@ -2793,47 +2791,47 @@ def test_multi_builtins_red_then_fuse_pdo(tmpdir, monkeypatch, annexed,
assert ("loop0_stop = f1_proxy%vspace%get_last_dof_owned()" in
result)
code = (
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel do default(shared), private(df), "
- "schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " asum = asum + f1_data(df) * f2_data(df)\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)"
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " !$omp parallel do reduction(+:asum) default(shared), "
+ "private(df), schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " asum = asum + f1_data(df) * f2_data(df)\n"
+ "\n"
+ " ! Built-in: inc_a_times_X (scale a real-valued field)"
"\n"
- " f1_data(df) = bsum * f1_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
+ " f1_data(df) = bsum * f1_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
"above loop(s)\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " !\n"
- " ! End of set dirty/clean section for above loop(s)\n"
- " !\n"
- " global_sum%value = asum\n"
- " asum = global_sum%get_sum()\n")
+ " call f1_proxy%set_dirty()\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = asum\n"
+ " asum = global_sum%get_sum()\n")
else: # not distmem. annexed can be True or False
assert "loop0_stop = undf_aspc1_f1" in result
code = (
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel do default(shared), private(df), "
- "schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " asum = asum + f1_data(df) * f2_data(df)\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)"
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ "\n"
+ " ! Call kernels\n"
+ " !$omp parallel do reduction(+:asum) default(shared), "
+ "private(df), schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " asum = asum + f1_data(df) * f2_data(df)\n"
"\n"
- " f1_data(df) = bsum * f1_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n")
+ " ! Built-in: inc_a_times_X (scale a real-valued field)"
+ "\n"
+ " f1_data(df) = bsum * f1_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n")
assert code in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -2879,45 +2877,47 @@ def test_multi_builtins_red_then_fuse_do(tmpdir, monkeypatch, annexed,
assert ("loop0_stop = f1_proxy%vspace%get_last_dof_owned()"
in result)
code = (
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df)\n"
- " !$omp do schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " asum = asum + f1_data(df) * f2_data(df)\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)"
+ " asum = 0.0\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " !$omp parallel default(shared), private(df)\n"
+ " !$omp do schedule(static), reduction(+:asum)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " asum = asum + f1_data(df) * f2_data(df)\n"
"\n"
- " f1_data(df) = bsum * f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
+ " ! Built-in: inc_a_times_X (scale a real-valued field)"
+ "\n"
+ " f1_data(df) = bsum * f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
"above loop(s)\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " !\n"
- " ! End of set dirty/clean section for above loop(s)\n"
- " !\n"
- " global_sum%value = asum\n"
- " asum = global_sum%get_sum()\n")
+ " call f1_proxy%set_dirty()\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = asum\n"
+ " asum = global_sum%get_sum()\n")
else: # not distmem, annexed is True or False
assert "loop0_stop = undf_aspc1_f1" in result
code = (
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df)\n"
- " !$omp do schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " asum = asum + f1_data(df) * f2_data(df)\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)"
+ " asum = 0.0\n"
"\n"
- " f1_data(df) = bsum * f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n")
+ " ! Call kernels\n"
+ " !$omp parallel default(shared), private(df)\n"
+ " !$omp do schedule(static), reduction(+:asum)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " asum = asum + f1_data(df) * f2_data(df)\n"
+ "\n"
+ " ! Built-in: inc_a_times_X (scale a real-valued field)"
+ "\n"
+ " f1_data(df) = bsum * f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n")
assert code in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -2954,59 +2954,53 @@ def test_multi_builtins_usual_then_red_pdo(tmpdir, monkeypatch, annexed,
in result)
assert "loop1_stop = f1_proxy%vspace%get_last_dof_owned()" in result
code = (
- " !$omp parallel do default(shared), private(df), "
+ " !$omp parallel do default(shared), private(df), "
"schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
- " f1_data(df) = bvalue * f1_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
+ " f1_data(df) = bvalue * f1_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
"above loop(s)\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " !\n"
- " ! End of set dirty/clean section for above loop(s)\n"
- " !\n"
- " !\n"
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel do default(shared), private(df), "
- "schedule(static), reduction(+:asum)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: sum_X (sum a real-valued field)\n"
- " asum = asum + f1_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " global_sum%value = asum\n"
- " asum = global_sum%get_sum()\n")
+ " call f1_proxy%set_dirty()\n"
+ "\n"
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ " !$omp parallel do reduction(+:asum) default(shared), "
+ "private(df), schedule(static)\n"
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: sum_X (sum a real-valued field)\n"
+ " asum = asum + f1_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = asum\n"
+ " asum = global_sum%get_sum()\n")
assert code in result
else: # not distmem. annexed can be True or False
assert "loop0_stop = undf_aspc1_f1" in result
assert "loop1_stop = undf_aspc1_f1" in result
assert (
- " !$omp parallel do default(shared), private(df), "
+ " !$omp parallel do default(shared), private(df), "
"schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
- " f1_data(df) = bvalue * f1_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " !\n"
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel do default(shared), private(df), "
- "schedule(static), reduction(+:asum)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: sum_X (sum a real-valued field)\n"
- " asum = asum + f1_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n") in result
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
+ " f1_data(df) = bvalue * f1_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ "\n"
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ " !$omp parallel do reduction(+:asum) default(shared), "
+ "private(df), schedule(static)\n"
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: sum_X (sum a real-valued field)\n"
+ " asum = asum + f1_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n") in result


def test_builtins_usual_then_red_fuse_pdo(tmpdir, monkeypatch, annexed,
@@ -3041,47 +3035,47 @@ def test_builtins_usual_then_red_fuse_pdo(tmpdir, monkeypatch, annexed,
assert ("loop0_stop = f1_proxy%vspace%get_last_dof_owned()"
in result)
code = (
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel do default(shared), private(df), "
- "schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)"
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " !$omp parallel do reduction(+:asum) default(shared), "
+ "private(df), schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: inc_a_times_X (scale a real-valued field)"
+ "\n"
+ " f1_data(df) = bvalue * f1_data(df)\n"
"\n"
- " f1_data(df) = bvalue * f1_data(df)\n"
- " ! Built-in: sum_X (sum a real-valued field)\n"
- " asum = asum + f1_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
+ " ! Built-in: sum_X (sum a real-valued field)\n"
+ " asum = asum + f1_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
"above loop(s)\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " !\n"
- " ! End of set dirty/clean section for above loop(s)\n"
- " !\n"
- " global_sum%value = asum\n"
- " asum = global_sum%get_sum()\n")
+ " call f1_proxy%set_dirty()\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = asum\n"
+ " asum = global_sum%get_sum()\n")
else: # not distmem. annexed can be True or False
assert "loop0_stop = undf_aspc1_f1" in result
code = (
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel do default(shared), private(df), "
- "schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)"
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ "\n"
+ " ! Call kernels\n"
+ " !$omp parallel do reduction(+:asum) default(shared), "
+ "private(df), schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: inc_a_times_X (scale a real-valued field)"
"\n"
- " f1_data(df) = bvalue * f1_data(df)\n"
- " ! Built-in: sum_X (sum a real-valued field)\n"
- " asum = asum + f1_data(df)\n"
- " END DO\n"
- " !$omp end parallel do\n")
+ " f1_data(df) = bvalue * f1_data(df)\n"
+ "\n"
+ " ! Built-in: sum_X (sum a real-valued field)\n"
+ " asum = asum + f1_data(df)\n"
+ " enddo\n"
+ " !$omp end parallel do\n")
assert code in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -3121,45 +3115,45 @@ def test_builtins_usual_then_red_fuse_do(tmpdir, monkeypatch, annexed,
assert ("loop0_stop = f1_proxy%vspace%get_last_dof_owned()"
in result)
code = (
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df)\n"
- " !$omp do schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)"
+ " asum = 0.0\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " !$omp parallel default(shared), private(df)\n"
+ " !$omp do schedule(static), reduction(+:asum)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
+ " f1_data(df) = bvalue * f1_data(df)\n"
+ "\n"
+ " ! Built-in: sum_X (sum a real-valued field)\n"
+ " asum = asum + f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
"\n"
- " f1_data(df) = bvalue * f1_data(df)\n"
- " ! Built-in: sum_X (sum a real-valued field)\n"
- " asum = asum + f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
+ " ! Set halos dirty/clean for fields modified in the "
"above loop(s)\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " !\n"
- " ! End of set dirty/clean section for above loop(s)\n"
- " !\n"
- " global_sum%value = asum\n"
- " asum = global_sum%get_sum()\n")
+ " call f1_proxy%set_dirty()\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = asum\n"
+ " asum = global_sum%get_sum()\n")
else: # not distmem. annexed can be True or False
assert "loop0_stop = undf_aspc1_f1" in result
code = (
- " asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df)\n"
- " !$omp do schedule(static), reduction(+:asum)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)"
+ " asum = 0.0\n"
+ "\n"
+ " ! Call kernels\n"
+ " !$omp parallel default(shared), private(df)\n"
+ " !$omp do schedule(static), reduction(+:asum)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
+ " f1_data(df) = bvalue * f1_data(df)\n"
"\n"
- " f1_data(df) = bvalue * f1_data(df)\n"
- " ! Built-in: sum_X (sum a real-valued field)\n"
- " asum = asum + f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n")
+ " ! Built-in: sum_X (sum a real-valued field)\n"
+ " asum = asum + f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n")
assert code in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -3226,72 +3220,69 @@ def test_reprod_reduction_real_do(tmpdir, dist_mem):
assert LFRicBuild(tmpdir).code_compiles(psy)
+ assert ("use omp_lib, only : omp_get_max_threads, "
+ "omp_get_thread_num\n") in code
+ assert ("real(kind=r_def), allocatable, dimension(:,:) "
+ ":: l_asum\n") in code
+ assert "integer :: th_idx\n" in code
+ assert "integer :: nthreads\n" in code
assert (
- " USE omp_lib, ONLY: omp_get_thread_num\n"
- " USE omp_lib, ONLY: omp_get_max_threads\n") in code
- assert (
- " REAL(KIND=r_def), allocatable, dimension(:,:) "
- ":: l_asum\n") in code
- assert " INTEGER th_idx\n" in code
- assert " INTEGER nthreads\n" in code
- assert (
- " !\n"
- " ! Determine the number of OpenMP threads\n"
- " !\n"
- " nthreads = omp_get_max_threads()\n"
- " !\n") in code
+ " ! Determine the number of OpenMP threads\n"
+ " nthreads = omp_get_max_threads()\n"
+ "\n") in code
if dist_mem:
assert "loop0_stop = f1_proxy%vspace%get_last_dof_owned()" in code
assert (
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0_r_def\n"
- " ALLOCATE (l_asum(8,nthreads))\n"
- " l_asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df,th_idx)\n"
- " th_idx = omp_get_thread_num()+1\n"
- " !$omp do schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " l_asum(1,th_idx) = l_asum(1,th_idx) + f1_data(df) "
+ " ! Zero summation variables\n"
+ " asum = 0.0\n"
+ " ALLOCATE(l_asum(8,nthreads))\n"
+ " l_asum = 0.0\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " !$omp parallel default(shared), private(df,th_idx)\n"
+ " th_idx = omp_get_thread_num() + 1\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " l_asum(1,th_idx) = l_asum(1,th_idx) + f1_data(df) "
"* f2_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " !\n"
- " ! sum the partial results sequentially\n"
- " !\n"
- " DO th_idx=1,nthreads\n"
- " asum = asum+l_asum(1,th_idx)\n"
- " END DO\n"
- " DEALLOCATE (l_asum)\n"
- " global_sum%value = asum\n"
- " asum = global_sum%get_sum()") in code
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! sum the partial results sequentially\n"
+ " do th_idx = 1, nthreads, 1\n"
+ " asum = asum + l_asum(1,th_idx)\n"
+ " enddo\n"
+ " DEALLOCATE(l_asum)\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = asum\n"
+ " asum = global_sum%get_sum()") in code
else:
assert "loop0_stop = undf_aspc1_f1" in code
assert (
- " asum = 0.0_r_def\n"
- " ALLOCATE (l_asum(8,nthreads))\n"
- " l_asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df,th_idx)\n"
- " th_idx = omp_get_thread_num()+1\n"
- " !$omp do schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " l_asum(1,th_idx) = l_asum(1,th_idx) + f1_data(df) "
+ " asum = 0.0\n"
+ " ALLOCATE(l_asum(8,nthreads))\n"
+ " l_asum = 0.0\n"
+ "\n"
+ " ! Call kernels\n"
+ " !$omp parallel default(shared), private(df,th_idx)\n"
+ " th_idx = omp_get_thread_num() + 1\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " l_asum(1,th_idx) = l_asum(1,th_idx) + f1_data(df) "
"* f2_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " !\n"
- " ! sum the partial results sequentially\n"
- " !\n"
- " DO th_idx=1,nthreads\n"
- " asum = asum+l_asum(1,th_idx)\n"
- " END DO\n"
- " DEALLOCATE (l_asum)\n") in code
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! sum the partial results sequentially\n"
+ " do th_idx = 1, nthreads, 1\n"
+ " asum = asum + l_asum(1,th_idx)\n"
+ " enddo\n"
+ " DEALLOCATE(l_asum)\n") in code


def test_no_global_sum_in_parallel_region():
@@ -3309,10 +3300,11 @@ def test_no_global_sum_in_parallel_region():
if isinstance(child, Loop):
otrans.apply(child, {"reprod": True})
rtrans.apply(schedule.children)
- with pytest.raises(NotImplementedError) as excinfo:
+ with pytest.raises(VisitorError) as excinfo:
_ = str(psy.gen)
assert ("Cannot correctly generate code for an OpenMP parallel region "
- "containing children of different types") in str(excinfo.value)
+ "with reductions and containing children of different types"
+ in str(excinfo.value))


def test_reprod_builtins_red_then_usual_do(tmpdir, monkeypatch, annexed,
@@ -3343,20 +3335,16 @@ def test_reprod_builtins_red_then_usual_do(tmpdir, monkeypatch, annexed,
assert LFRicBuild(tmpdir).code_compiles(psy)
+ assert ("use omp_lib, only : omp_get_max_threads, "
+ "omp_get_thread_num\n") in result
+ assert ("real(kind=r_def), allocatable, dimension(:,:) "
+ ":: l_asum\n") in result
+ assert "integer :: th_idx\n" in result
+ assert "integer :: nthreads\n" in result
assert (
- " USE omp_lib, ONLY: omp_get_thread_num\n"
- " USE omp_lib, ONLY: omp_get_max_threads\n") in result
- assert (
- " REAL(KIND=r_def), allocatable, dimension(:,:) "
- ":: l_asum\n") in result
- assert " INTEGER th_idx\n" in result
- assert " INTEGER nthreads\n" in result
- assert (
- " !\n"
- " ! Determine the number of OpenMP threads\n"
- " !\n"
- " nthreads = omp_get_max_threads()\n"
- " !\n") in result
+ " ! Determine the number of OpenMP threads\n"
+ " nthreads = omp_get_max_threads()\n"
+ "\n") in result
if dist_mem: # annexed can be True or False
assert "loop0_stop = f1_proxy%vspace%get_last_dof_owned()" in result
if annexed:
@@ -3366,75 +3354,73 @@ def test_reprod_builtins_red_then_usual_do(tmpdir, monkeypatch, annexed,
assert ("loop1_stop = f1_proxy%vspace%get_last_dof_owned()"
in result)
code = (
- " asum = 0.0_r_def\n"
- " ALLOCATE (l_asum(8,nthreads))\n"
- " l_asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df,th_idx)\n"
- " th_idx = omp_get_thread_num()+1\n"
- " !$omp do schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " l_asum(1,th_idx) = l_asum(1,th_idx) + f1_data(df) "
+ " asum = 0.0\n"
+ " ALLOCATE(l_asum(8,nthreads))\n"
+ " l_asum = 0.0\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " !$omp parallel default(shared), private(df,th_idx)\n"
+ " th_idx = omp_get_thread_num() + 1\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " l_asum(1,th_idx) = l_asum(1,th_idx) + f1_data(df) "
"* f2_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp do schedule(static)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
- " f1_data(df) = bsum * f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " !\n"
- " ! sum the partial results sequentially\n"
- " !\n"
- " DO th_idx=1,nthreads\n"
- " asum = asum+l_asum(1,th_idx)\n"
- " END DO\n"
- " DEALLOCATE (l_asum)\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
+ " f1_data(df) = bsum * f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
"above loop(s)\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " !\n"
- " ! End of set dirty/clean section for above loop(s)\n"
- " !\n"
- " global_sum%value = asum\n"
- " asum = global_sum%get_sum()\n")
+ " call f1_proxy%set_dirty()\n"
+ "\n"
+ " ! sum the partial results sequentially\n"
+ " do th_idx = 1, nthreads, 1\n"
+ " asum = asum + l_asum(1,th_idx)\n"
+ " enddo\n"
+ " DEALLOCATE(l_asum)\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = asum\n"
+ " asum = global_sum%get_sum()\n")
assert code in result
else: # not distmem. annexed can be True or False
assert "loop0_stop = undf_aspc1_f1" in result
assert "loop1_stop = undf_aspc1_f1" in result
assert (
- " asum = 0.0_r_def\n"
- " ALLOCATE (l_asum(8,nthreads))\n"
- " l_asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df,th_idx)\n"
- " th_idx = omp_get_thread_num()+1\n"
- " !$omp do schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " l_asum(1,th_idx) = l_asum(1,th_idx) + f1_data(df) "
+ " asum = 0.0\n"
+ " ALLOCATE(l_asum(8,nthreads))\n"
+ " l_asum = 0.0\n"
+ "\n"
+ " ! Call kernels\n"
+ " !$omp parallel default(shared), private(df,th_idx)\n"
+ " th_idx = omp_get_thread_num() + 1\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " l_asum(1,th_idx) = l_asum(1,th_idx) + f1_data(df) "
"* f2_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp do schedule(static)\n"
- " DO df = loop1_start, loop1_stop, 1\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
- " f1_data(df) = bsum * f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " !\n"
- " ! sum the partial results sequentially\n"
- " !\n"
- " DO th_idx=1,nthreads\n"
- " asum = asum+l_asum(1,th_idx)\n"
- " END DO\n"
- " DEALLOCATE (l_asum)\n") in result
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop1_start, loop1_stop, 1\n"
+ " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
+ " f1_data(df) = bsum * f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! sum the partial results sequentially\n"
+ " do th_idx = 1, nthreads, 1\n"
+ " asum = asum + l_asum(1,th_idx)\n"
+ " enddo\n"
+ " DEALLOCATE(l_asum)\n") in result


def test_repr_bltins_red_then_usual_fuse_do(tmpdir, monkeypatch, annexed,
@@ -3473,85 +3459,79 @@ def test_repr_bltins_red_then_usual_fuse_do(tmpdir, monkeypatch, annexed,
rtrans.apply(schedule.children[0])
result = str(psy.gen)
+ assert ("use omp_lib, only : omp_get_max_threads, "
+ "omp_get_thread_num\n") in result
+ assert ("real(kind=r_def), allocatable, dimension(:,:) "
+ ":: l_asum\n") in result
+ assert "integer :: th_idx\n" in result
+ assert "integer :: nthreads\n" in result
assert (
- " USE omp_lib, ONLY: omp_get_thread_num\n"
- " USE omp_lib, ONLY: omp_get_max_threads\n") in result
- assert (
- " REAL(KIND=r_def), allocatable, dimension(:,:) "
- ":: l_asum\n") in result
- assert " INTEGER th_idx\n" in result
- assert " INTEGER nthreads\n" in result
- assert (
- " !\n"
- " ! Determine the number of OpenMP threads\n"
- " !\n"
- " nthreads = omp_get_max_threads()\n"
- " !\n") in result
+ " ! Determine the number of OpenMP threads\n"
+ " nthreads = omp_get_max_threads()\n"
+ "\n") in result
if dist_mem: # annexed is False here
assert ("loop0_stop = f1_proxy%vspace%get_last_dof_owned()"
in result)
assert (
- " asum = 0.0_r_def\n"
- " ALLOCATE (l_asum(8,nthreads))\n"
- " l_asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df,th_idx)\n"
- " th_idx = omp_get_thread_num()+1\n"
- " !$omp do schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " l_asum(1,th_idx) = l_asum(1,th_idx) + "
+ " asum = 0.0\n"
+ " ALLOCATE(l_asum(8,nthreads))\n"
+ " l_asum = 0.0\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " !$omp parallel default(shared), private(df,th_idx)\n"
+ " th_idx = omp_get_thread_num() + 1\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " l_asum(1,th_idx) = l_asum(1,th_idx) + "
"f1_data(df) * f2_data(df)\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)"
"\n"
- " f1_data(df) = bsum * f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " !\n"
- " ! sum the partial results sequentially\n"
- " !\n"
- " DO th_idx=1,nthreads\n"
- " asum = asum+l_asum(1,th_idx)\n"
- " END DO\n"
- " DEALLOCATE (l_asum)\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
+ " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
+ " f1_data(df) = bsum * f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
"above loop(s)\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " !\n"
- " ! End of set dirty/clean section for above loop(s)\n"
- " !\n"
- " global_sum%value = asum\n"
- " asum = global_sum%get_sum()\n") in result
+ " call f1_proxy%set_dirty()\n"
+ "\n"
+ " ! sum the partial results sequentially\n"
+ " do th_idx = 1, nthreads, 1\n"
+ " asum = asum + l_asum(1,th_idx)\n"
+ " enddo\n"
+ " DEALLOCATE(l_asum)\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = asum\n"
+ " asum = global_sum%get_sum()\n") in result
else: # not distmem. annexed can be True or False
assert "loop0_stop = undf_aspc1_f1" in result
assert (
- " asum = 0.0_r_def\n"
- " ALLOCATE (l_asum(8,nthreads))\n"
- " l_asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df,th_idx)\n"
- " th_idx = omp_get_thread_num()+1\n"
- " !$omp do schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
- " l_asum(1,th_idx) = l_asum(1,th_idx) + "
+ " asum = 0.0\n"
+ " ALLOCATE(l_asum(8,nthreads))\n"
+ " l_asum = 0.0\n"
+ "\n"
+ " ! Call kernels\n"
+ " !$omp parallel default(shared), private(df,th_idx)\n"
+ " th_idx = omp_get_thread_num() + 1\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: X_innerproduct_Y (real-valued fields)\n"
+ " l_asum(1,th_idx) = l_asum(1,th_idx) + "
"f1_data(df) * f2_data(df)\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)"
"\n"
- " f1_data(df) = bsum * f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " !\n"
- " ! sum the partial results sequentially\n"
- " !\n"
- " DO th_idx=1,nthreads\n"
- " asum = asum+l_asum(1,th_idx)\n"
- " END DO\n"
- " DEALLOCATE (l_asum)\n") in result
+ " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
+ " f1_data(df) = bsum * f1_data(df)\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! sum the partial results sequentially\n"
+ " do th_idx = 1, nthreads, 1\n"
+ " asum = asum + l_asum(1,th_idx)\n"
+ " enddo\n"
+ " DEALLOCATE(l_asum)\n") in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -3586,72 +3566,70 @@ def test_repr_bltins_usual_then_red_fuse_do(tmpdir, monkeypatch, annexed,
rtrans.apply(schedule.children[0])
result = str(psy.gen)
- assert " INTEGER th_idx\n" in result
+ assert "integer :: th_idx\n" in result
if dist_mem: # annexed is False here
assert ("loop0_stop = f1_proxy%vspace%get_last_dof_owned()"
in result)
assert (
- " asum = 0.0_r_def\n"
- " ALLOCATE (l_asum(8,nthreads))\n"
- " l_asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df,th_idx)\n"
- " th_idx = omp_get_thread_num()+1\n"
- " !$omp do schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)"
+ " asum = 0.0\n"
+ " ALLOCATE(l_asum(8,nthreads))\n"
+ " l_asum = 0.0\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " !$omp parallel default(shared), private(df,th_idx)\n"
+ " th_idx = omp_get_thread_num() + 1\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
+ " f1_data(df) = bvalue * f1_data(df)\n"
"\n"
- " f1_data(df) = bvalue * f1_data(df)\n"
- " ! Built-in: sum_X (sum a real-valued field)\n"
- " l_asum(1,th_idx) = l_asum(1,th_idx) + "
+ " ! Built-in: sum_X (sum a real-valued field)\n"
+ " l_asum(1,th_idx) = l_asum(1,th_idx) + "
"f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " !\n"
- " ! sum the partial results sequentially\n"
- " !\n"
- " DO th_idx=1,nthreads\n"
- " asum = asum+l_asum(1,th_idx)\n"
- " END DO\n"
- " DEALLOCATE (l_asum)\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
"above loop(s)\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " !\n"
- " ! End of set dirty/clean section for above loop(s)\n"
- " !\n"
- " global_sum%value = asum\n"
- " asum = global_sum%get_sum()\n") in result
+ " call f1_proxy%set_dirty()\n"
+ "\n"
+ " ! sum the partial results sequentially\n"
+ " do th_idx = 1, nthreads, 1\n"
+ " asum = asum + l_asum(1,th_idx)\n"
+ " enddo\n"
+ " DEALLOCATE(l_asum)\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = asum\n"
+ " asum = global_sum%get_sum()\n") in result
else: # distmem is False. annexed can be True or False
assert "loop0_stop = undf_aspc1_f1" in result
assert (
- " asum = 0.0_r_def\n"
- " ALLOCATE (l_asum(8,nthreads))\n"
- " l_asum = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df,th_idx)\n"
- " th_idx = omp_get_thread_num()+1\n"
- " !$omp do schedule(static)\n"
- " DO df = loop0_start, loop0_stop, 1\n"
- " ! Built-in: inc_a_times_X (scale a real-valued field)"
+ " asum = 0.0\n"
+ " ALLOCATE(l_asum(8,nthreads))\n"
+ " l_asum = 0.0\n"
+ "\n"
+ " ! Call kernels\n"
+ " !$omp parallel default(shared), private(df,th_idx)\n"
+ " th_idx = omp_get_thread_num() + 1\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop0_start, loop0_stop, 1\n"
+ " ! Built-in: inc_a_times_X (scale a real-valued field)\n"
+ " f1_data(df) = bvalue * f1_data(df)\n"
"\n"
- " f1_data(df) = bvalue * f1_data(df)\n"
- " ! Built-in: sum_X (sum a real-valued field)\n"
- " l_asum(1,th_idx) = l_asum(1,th_idx) + "
+ " ! Built-in: sum_X (sum a real-valued field)\n"
+ " l_asum(1,th_idx) = l_asum(1,th_idx) + "
"f1_data(df)\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " !\n"
- " ! sum the partial results sequentially\n"
- " !\n"
- " DO th_idx=1,nthreads\n"
- " asum = asum+l_asum(1,th_idx)\n"
- " END DO\n"
- " DEALLOCATE (l_asum)\n") in result
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! sum the partial results sequentially\n"
+ " do th_idx = 1, nthreads, 1\n"
+ " asum = asum + l_asum(1,th_idx)\n"
+ " enddo\n"
+ " DEALLOCATE(l_asum)\n") in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -3675,7 +3653,7 @@ def test_repr_3_builtins_2_reductions_do(tmpdir, dist_mem):
assert LFRicBuild(tmpdir).code_compiles(psy)
- assert "INTEGER th_idx\n" in code
+ assert "integer :: th_idx\n" in code
if dist_mem:
assert "loop0_stop = f1_proxy%vspace%get_last_dof_owned()" in code
assert "loop1_stop = f1_proxy%vspace%get_last_dof_owned()" in code
@@ -3690,31 +3668,32 @@ def test_repr_3_builtins_2_reductions_do(tmpdir, dist_mem):
"rhs": "f2_data(df)",
"builtin": "! Built-in: sum_X (sum a real-valued field)"}]:
assert (
- " " + names["var"] + " = 0.0_r_def\n"
- " ALLOCATE (" + names["lvar"] + "(8,nthreads))\n"
- " " + names["lvar"] + " = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df,th_idx)\n"
- " th_idx = omp_get_thread_num()+1\n"
- " !$omp do schedule(static)\n"
- " DO df = loop"+names["loop_idx"]+"_start, "
+ " " + names["var"] + " = 0.0\n"
+ " ALLOCATE(" + names["lvar"] + "(8,nthreads))\n"
+ " " + names["lvar"] + " = 0.0\n") in code
+ assert (
+ " !$omp parallel default(shared), private(df,th_idx)\n"
+ " th_idx = omp_get_thread_num() + 1\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop"+names["loop_idx"]+"_start, "
"loop"+names["loop_idx"]+"_stop, 1\n"
- " " + names["builtin"] + "\n"
- " " + names["lvar"] + "(1,th_idx) = " +
+ " " + names["builtin"] + "\n"
+ " " + names["lvar"] + "(1,th_idx) = " +
names["lvar"] + "(1,th_idx) + " + names["rhs"] + "\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " !\n"
- " ! sum the partial results sequentially\n"
- " !\n"
- " DO th_idx=1,nthreads\n"
- " " + names["var"] + " = " + names["var"] + "+" +
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! sum the partial results sequentially\n"
+ " do th_idx = 1, nthreads, 1\n"
+ " " + names["var"] + " = " + names["var"] + " + " +
names["lvar"] + "(1,th_idx)\n"
- " END DO\n"
- " DEALLOCATE (" + names["lvar"] + ")\n"
- " global_sum%value = " + names["var"] + "\n"
- " " + names["var"] + " = "
+ " enddo\n"
+ " DEALLOCATE(" + names["lvar"] + ")\n"
+ "\n"
+ " ! Perform global sum\n"
+ " global_sum%value = " + names["var"] + "\n"
+ " " + names["var"] + " = "
"global_sum%get_sum()\n") in code
else:
assert "loop0_stop = undf_aspc1_f1" in code
@@ -3730,29 +3709,29 @@ def test_repr_3_builtins_2_reductions_do(tmpdir, dist_mem):
"loop_idx": "2", "rhs": "f2_data(df)",
"builtin": "! Built-in: sum_X (sum a real-valued field)"}]:
assert (
- " " + names["var"] + " = 0.0_r_def\n"
- " ALLOCATE (" + names["lvar"] + "(8,nthreads))\n"
- " " + names["lvar"] + " = 0.0_r_def\n"
- " !\n"
- " !$omp parallel default(shared), private(df,th_idx)\n"
- " th_idx = omp_get_thread_num()+1\n"
- " !$omp do schedule(static)\n"
- " DO df = loop"+names["loop_idx"]+"_start, "
+ " " + names["var"] + " = 0.0\n"
+ " ALLOCATE(" + names["lvar"] + "(8,nthreads))\n"
+ " " + names["lvar"] + " = 0.0\n") in code
+ expected = (
+ " !$omp parallel default(shared), private(df,th_idx)\n"
+ " th_idx = omp_get_thread_num() + 1\n"
+ " !$omp do schedule(static)\n"
+ " do df = loop"+names["loop_idx"]+"_start, "
"loop" + names["loop_idx"]+"_stop, 1\n"
- " " + names["builtin"] + "\n"
- " " + names["lvar"] + "(1,th_idx) = " +
+ " " + names["builtin"] + "\n"
+ " " + names["lvar"] + "(1,th_idx) = " +
names["lvar"] + "(1,th_idx) + " + names["rhs"] + "\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " !\n"
- " ! sum the partial results sequentially\n"
- " !\n"
- " DO th_idx=1,nthreads\n"
- " " + names["var"] + " = " + names["var"] + "+" +
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ "\n"
+ " ! sum the partial results sequentially\n"
+ " do th_idx = 1, nthreads, 1\n"
+ " " + names["var"] + " = " + names["var"] + " + " +
names["lvar"] + "(1,th_idx)\n"
- " END DO\n"
- " DEALLOCATE (" + names["lvar"] + ")\n") in code
+ " enddo\n"
+ " DEALLOCATE(" + names["lvar"] + ")\n")
+ assert expected in code


def test_reprod_view(monkeypatch, annexed, dist_mem):
@@ -3776,6 +3755,7 @@ def test_reprod_view(monkeypatch, annexed, dist_mem):
call = colored("BuiltIn", BuiltIn._colour)
sched = colored("Schedule", Schedule._colour)
lit = colored("Literal", Literal._colour)
+ ref = colored("Reference", Reference._colour)
lit_one = lit + "[value:'1', Scalar]\n"
indent = " "
@@ -3801,10 +3781,8 @@ def test_reprod_view(monkeypatch, annexed, dist_mem):
5*indent + "0: " + loop + "[type='dof', "
"field_space='any_space_1', it_space='dof', "
"upper_bound='ndofs']\n" +
- 6*indent + lit + "[value:'NOT_INITIALISED', " +
- "Scalar]\n" +
- 6*indent + lit + "[value:'NOT_INITIALISED', " +
- "Scalar]\n" +
+ 6*indent + ref + "[name:'loop0_start']\n" +
+ 6*indent + ref + "[name:'loop0_stop']\n" +
6*indent + lit_one +
6*indent + sched + "[]\n" +
7*indent + "0: " + call + " x_innerproduct_y(asum,f1,f2)\n" +
@@ -3819,10 +3797,8 @@ def test_reprod_view(monkeypatch, annexed, dist_mem):
5*indent + "0: " + loop + "[type='dof', "
"field_space='any_space_1', it_space='dof', "
"upper_bound='nannexed']\n" +
- 6*indent + lit + "[value:'NOT_INITIALISED', " +
- "Scalar]\n" +
- 6*indent + lit + "[value:'NOT_INITIALISED', " +
- "Scalar]\n" +
+ 6*indent + ref + "[name:'loop1_start']\n" +
+ 6*indent + ref + "[name:'loop1_stop']\n" +
6*indent + lit_one +
6*indent + sched + "[]\n" +
7*indent + "0: " + call + " inc_a_times_x(asum,f1)\n" +
@@ -3836,10 +3812,8 @@ def test_reprod_view(monkeypatch, annexed, dist_mem):
5*indent + "0: " + loop + "[type='dof', "
"field_space='any_space_1', it_space='dof', "
"upper_bound='ndofs']\n" +
- 6*indent + lit + "[value:'NOT_INITIALISED', " +
- "Scalar]\n" +
- 6*indent + lit + "[value:'NOT_INITIALISED', " +
- "Scalar]\n" +
+ 6*indent + ref + "[name:'loop2_start']\n" +
+ 6*indent + ref + "[name:'loop2_stop']\n" +
6*indent + lit_one +
6*indent + sched + "[]\n" +
7*indent + "0: " + call + " sum_x(bsum,f2)\n" +
@@ -3859,10 +3833,8 @@ def test_reprod_view(monkeypatch, annexed, dist_mem):
5*indent + "0: " + loop + "[type='dof', "
"field_space='any_space_1', it_space='dof', "
"upper_bound='ndofs']\n" +
- 6*indent + lit + "[value:'NOT_INITIALISED', " +
- "Scalar]\n" +
- 6*indent + lit + "[value:'NOT_INITIALISED', " +
- "Scalar]\n" +
+ 6*indent + ref + "[name:'loop0_start']\n" +
+ 6*indent + ref + "[name:'loop0_stop']\n" +
6*indent + lit_one +
6*indent + sched + "[]\n" +
7*indent + "0: " + call + " x_innerproduct_y(asum,f1,f2)\n" +
@@ -3876,10 +3848,8 @@ def test_reprod_view(monkeypatch, annexed, dist_mem):
5*indent + "0: " + loop + "[type='dof', "
"field_space='any_space_1', it_space='dof', "
"upper_bound='ndofs']\n" +
- 6*indent + lit + "[value:'NOT_INITIALISED', " +
- "Scalar]\n" +
- 6*indent + lit + "[value:'NOT_INITIALISED', " +
- "Scalar]\n" +
+ 6*indent + ref + "[name:'loop1_start']\n" +
+ 6*indent + ref + "[name:'loop1_stop']\n" +
6*indent + lit_one +
6*indent + sched + "[]\n" +
7*indent + "0: " + call + " inc_a_times_x(asum,f1)\n" +
@@ -3893,10 +3863,8 @@ def test_reprod_view(monkeypatch, annexed, dist_mem):
5*indent + "0: " + loop + "[type='dof', "
"field_space='any_space_1', it_space='dof', "
"upper_bound='ndofs']\n" +
- 6*indent + lit + "[value:'NOT_INITIALISED', " +
- "Scalar]\n" +
- 6*indent + lit + "[value:'NOT_INITIALISED', " +
- "Scalar]\n" +
+ 6*indent + ref + "[name:'loop2_start']\n" +
+ 6*indent + ref + "[name:'loop2_stop']\n" +
6*indent + lit_one +
6*indent + sched + "[]\n" +
7*indent + "0: " + call + " sum_x(bsum,f2)\n" +
@@ -4210,12 +4178,12 @@ def test_rc_continuous_depth():
result = str(psy.gen)
for field_name in ["f2", "m1", "m2"]:
- assert f"IF ({field_name}_proxy%is_dirty(depth=3)) THEN" in result
- assert f"CALL {field_name}_proxy%halo_exchange(depth=3)" in result
+ assert f"if ({field_name}_proxy%is_dirty(depth=3)) then" in result
+ assert f"call {field_name}_proxy%halo_exchange(depth=3)" in result
assert "loop0_stop = mesh%get_last_halo_cell(3)" in result
- assert "DO cell = loop0_start, loop0_stop" in result
- assert (" CALL f1_proxy%set_dirty()\n"
- " CALL f1_proxy%set_clean(2)") in result
+ assert "do cell = loop0_start, loop0_stop" in result
+ assert (" call f1_proxy%set_dirty()\n"
+ " call f1_proxy%set_clean(2)") in result


def test_rc_continuous_no_depth():
@@ -4234,19 +4202,19 @@ def test_rc_continuous_no_depth():
rc_trans.apply(loop)
result = str(psy.gen)
- assert (" IF (f1_proxy%is_dirty(depth=max_halo_depth_mesh - 1)) THEN"
+ assert (" if (f1_proxy%is_dirty(depth=max_halo_depth_mesh - 1)) then"
"\n"
- " CALL f1_proxy%halo_exchange(depth=max_halo_depth_mesh"
+ " call f1_proxy%halo_exchange(depth=max_halo_depth_mesh"
" - 1)" in result)
for fname in ["f2", "m1", "m2"]:
- assert (f" IF ({fname}_proxy%is_dirty(depth=max_halo_depth_mesh"
- f")) THEN\n"
- f" CALL {fname}_proxy%halo_exchange(depth=max_halo_"
+ assert (f" if ({fname}_proxy%is_dirty(depth=max_halo_depth_mesh"
+ f")) then\n"
+ f" call {fname}_proxy%halo_exchange(depth=max_halo_"
f"depth_mesh)" in result)
assert "loop0_stop = mesh%get_last_halo_cell()" in result
- assert "DO cell = loop0_start, loop0_stop" in result
- assert (" CALL f1_proxy%set_dirty()\n"
- " CALL f1_proxy%set_clean(max_halo_depth_mesh - 1)") in result
+ assert "do cell = loop0_start, loop0_stop" in result
+ assert (" call f1_proxy%set_dirty()\n"
+ " call f1_proxy%set_clean(max_halo_depth_mesh - 1)") in result


def test_rc_discontinuous_depth(tmpdir, monkeypatch, annexed):
@@ -4274,13 +4242,13 @@ def test_rc_discontinuous_depth(tmpdir, monkeypatch, annexed):
rc_trans.apply(loop, {"depth": 3})
result = str(psy.gen)
for field_name in ["f1", "f2", "m1"]:
- assert (f" IF ({field_name}_proxy%is_dirty(depth=3)) THEN\n"
- f" CALL {field_name}_proxy%halo_exchange(depth=3)"
+ assert (f" if ({field_name}_proxy%is_dirty(depth=3)) then\n"
+ f" call {field_name}_proxy%halo_exchange(depth=3)"
in result)
assert "loop0_stop = mesh%get_last_halo_cell(3)" in result
- assert "DO cell = loop0_start, loop0_stop" in result
- assert (" CALL m2_proxy%set_dirty()\n"
- " CALL m2_proxy%set_clean(3)") in result
+ assert "do cell = loop0_start, loop0_stop" in result
+ assert (" call m2_proxy%set_dirty()\n"
+ " call m2_proxy%set_clean(3)") in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -4311,14 +4279,14 @@ def test_rc_discontinuous_no_depth(monkeypatch, annexed):
result = str(psy.gen)
for field_name in ["f1", "f2", "m1"]:
- assert (f"IF ({field_name}_proxy%is_dirty(depth=max_halo_depth_mesh)) "
- f"THEN" in result)
- assert (f"CALL {field_name}_proxy%halo_exchange("
+ assert (f"if ({field_name}_proxy%is_dirty(depth=max_halo_depth_mesh)) "
+ f"then" in result)
+ assert (f"call {field_name}_proxy%halo_exchange("
f"depth=max_halo_depth_mesh)" in result)
assert "loop0_stop = mesh%get_last_halo_cell()" in result
- assert "DO cell = loop0_start, loop0_stop" in result
- assert "CALL m2_proxy%set_dirty()" not in result
- assert "CALL m2_proxy%set_clean(max_halo_depth_mesh)" in result
+ assert "do cell = loop0_start, loop0_stop" in result
+ assert "call m2_proxy%set_dirty()" not in result
+ assert "call m2_proxy%set_clean(max_halo_depth_mesh)" in result


def test_rc_all_discontinuous_depth(tmpdir):
@@ -4334,12 +4302,12 @@ def test_rc_all_discontinuous_depth(tmpdir):
loop = schedule.children[0]
rc_trans.apply(loop, {"depth": 3})
result = str(psy.gen)
- assert "IF (f2_proxy%is_dirty(depth=3)) THEN" in result
- assert "CALL f2_proxy%halo_exchange(depth=3)" in result
+ assert "if (f2_proxy%is_dirty(depth=3)) then" in result
+ assert "call f2_proxy%halo_exchange(depth=3)" in result
assert "loop0_stop = mesh%get_last_halo_cell(3)" in result
- assert "DO cell = loop0_start, loop0_stop" in result
- assert "CALL f1_proxy%set_dirty()" in result
- assert "CALL f1_proxy%set_clean(3)" in result
+ assert "do cell = loop0_start, loop0_stop" in result
+ assert "call f1_proxy%set_dirty()" in result
+ assert "call f1_proxy%set_clean(3)" in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -4358,11 +4326,11 @@ def test_rc_all_discontinuous_no_depth(tmpdir):
rc_trans.apply(loop)
result = str(psy.gen)
- assert "IF (f2_proxy%is_dirty(depth=max_halo_depth_mesh)) THEN" in result
- assert "CALL f2_proxy%halo_exchange(depth=max_halo_depth_mesh)" in result
+ assert "if (f2_proxy%is_dirty(depth=max_halo_depth_mesh)) then" in result
+ assert "call f2_proxy%halo_exchange(depth=max_halo_depth_mesh)" in result
assert "loop0_stop = mesh%get_last_halo_cell()" in result
- assert "DO cell = loop0_start, loop0_stop" in result
- assert "CALL f1_proxy%set_clean(max_halo_depth_mesh)" in result
+ assert "do cell = loop0_start, loop0_stop" in result
+ assert "call f1_proxy%set_clean(max_halo_depth_mesh)" in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -4382,13 +4350,13 @@ def test_rc_all_discontinuous_vector_depth(tmpdir):
result = str(psy.gen)
for idx in range(1, 4):
- assert f"IF (f2_proxy({idx})%is_dirty(depth=3)) THEN" in result
- assert f"CALL f2_proxy({idx})%halo_exchange(depth=3)" in result
+ assert f"if (f2_proxy({idx})%is_dirty(depth=3)) then" in result
+ assert f"call f2_proxy({idx})%halo_exchange(depth=3)" in result
assert "loop0_stop = mesh%get_last_halo_cell(3)" in result
- assert "DO cell = loop0_start, loop0_stop" in result
+ assert "do cell = loop0_start, loop0_stop" in result
for idx in range(1, 4):
- assert f"CALL f1_proxy({idx})%set_dirty()" in result
- assert f"CALL f1_proxy({idx})%set_clean(3)" in result
+ assert f"call f1_proxy({idx})%set_dirty()" in result
+ assert f"call f1_proxy({idx})%set_clean(3)" in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -4407,14 +4375,14 @@ def test_rc_all_discontinuous_vector_no_depth(tmpdir):
rc_trans.apply(loop)
result = str(psy.gen)
for idx in range(1, 4):
- assert (f"IF (f2_proxy({idx})%is_dirty(depth=max_halo_depth_mesh"
- f")) THEN") in result
- assert (f"CALL f2_proxy({idx})%halo_exchange(depth=max_halo_depth_mesh"
+ assert (f"if (f2_proxy({idx})%is_dirty(depth=max_halo_depth_mesh"
+ f")) then") in result
+ assert (f"call f2_proxy({idx})%halo_exchange(depth=max_halo_depth_mesh"
f")") in result
assert "loop0_stop = mesh%get_last_halo_cell()" in result
- assert "DO cell = loop0_start, loop0_stop" in result
+ assert "do cell = loop0_start, loop0_stop" in result
for idx in range(1, 4):
- assert f"CALL f1_proxy({idx})%set_clean(max_halo_depth_mesh)" in result
+ assert f"call f1_proxy({idx})%set_clean(max_halo_depth_mesh)" in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -4434,13 +4402,13 @@ def test_rc_all_disc_prev_depend_depth(tmpdir):
loop = schedule[1]
rc_trans.apply(loop, {"depth": 3})
result = str(psy.gen)
- assert "IF (f1_proxy%is_dirty(depth=3)) THEN" not in result
- assert "CALL f1_proxy%halo_exchange(depth=3)" in result
+ assert "if (f1_proxy%is_dirty(depth=3)) then" not in result
+ assert "call f1_proxy%halo_exchange(depth=3)" in result
assert "loop1_stop = mesh%get_last_halo_cell(3)" in result
- assert "DO cell = loop1_start, loop1_stop" in result
- assert "CALL f1_proxy%set_dirty()" in result
- assert "CALL f3_proxy%set_dirty()" in result
- assert "CALL f3_proxy%set_clean(3)" in result
+ assert "do cell = loop1_start, loop1_stop" in result
+ assert "call f1_proxy%set_dirty()" in result
+ assert "call f3_proxy%set_dirty()" in result
+ assert "call f3_proxy%set_clean(3)" in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -4459,13 +4427,13 @@ def test_rc_all_disc_prev_depend_no_depth():
loop = schedule[1]
rc_trans.apply(loop)
result = str(psy.gen)
- assert "CALL f1_proxy%set_dirty()" in result
- assert ("IF (f1_proxy%is_dirty(depth=max_halo_depth_mesh)) "
- "THEN") not in result
- assert "CALL f1_proxy%halo_exchange(depth=max_halo_depth_mesh)" in result
+ assert "call f1_proxy%set_dirty()" in result
+ assert ("if (f1_proxy%is_dirty(depth=max_halo_depth_mesh)) "
+ "then") not in result
+ assert "call f1_proxy%halo_exchange(depth=max_halo_depth_mesh)" in result
assert "loop1_stop = mesh%get_last_halo_cell()" in result
- assert "DO cell = loop1_start, loop1_stop" in result
- assert "CALL f3_proxy%set_clean(max_halo_depth_mesh)" in result
+ assert "do cell = loop1_start, loop1_stop" in result
+ assert "call f3_proxy%set_clean(max_halo_depth_mesh)" in result


def test_rc_all_disc_prev_dep_depth_vector(tmpdir):
@@ -4484,14 +4452,14 @@ def test_rc_all_disc_prev_dep_depth_vector(tmpdir):
rc_trans.apply(loop, {"depth": 3})
result = str(psy.gen)
for idx in range(1, 4):
- assert f"IF (f1_proxy({idx})%is_dirty(depth=3)) THEN" not in result
- assert f"CALL f1_proxy({idx})%halo_exchange(depth=3)" in result
+ assert f"if (f1_proxy({idx})%is_dirty(depth=3)) then" not in result
+ assert f"call f1_proxy({idx})%halo_exchange(depth=3)" in result
assert "loop1_stop = mesh%get_last_halo_cell(3)" in result
- assert "DO cell = loop1_start, loop1_stop" in result
+ assert "do cell = loop1_start, loop1_stop" in result
for idx in range(1, 4):
- assert f"CALL f1_proxy({idx})%set_dirty()" in result
- assert f"CALL f3_proxy({idx})%set_dirty()" in result
- assert f"CALL f3_proxy({idx})%set_clean(3)" in result
+ assert f"call f1_proxy({idx})%set_dirty()" in result
+ assert f"call f3_proxy({idx})%set_dirty()" in result
+ assert f"call f3_proxy({idx})%set_clean(3)" in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -4512,13 +4480,13 @@ def test_rc_all_disc_prev_dep_no_depth_vect(tmpdir):
result = str(psy.gen)
assert "is_dirty" not in result
for idx in range(1, 4):
- assert (f"CALL f1_proxy({idx})%halo_exchange(depth=max_halo_depth_"
+ assert (f"call f1_proxy({idx})%halo_exchange(depth=max_halo_depth_"
f"mesh)") in result
assert "loop1_stop = mesh%get_last_halo_cell()" in result
- assert "DO cell = loop1_start, loop1_stop" in result
+ assert "do cell = loop1_start, loop1_stop" in result
for idx in range(1, 4):
- assert f"CALL f1_proxy({idx})%set_dirty()" in result
- assert f"CALL f3_proxy({idx})%set_clean(max_halo_depth_mesh)" in result
+ assert f"call f1_proxy({idx})%set_dirty()" in result
+ assert f"call f3_proxy({idx})%set_clean(max_halo_depth_mesh)" in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -4539,19 +4507,19 @@ def test_rc_all_disc_prev_dep_no_depth_vect_readwrite(tmpdir):
result = str(psy.gen)
# f3 has readwrite access so need to check the halos
for idx in range(1, 4):
- assert (f"IF (f3_proxy({idx})%is_dirty(depth=max_halo_depth_mesh))"
+ assert (f"if (f3_proxy({idx})%is_dirty(depth=max_halo_depth_mesh))"
in result)
- assert (f"CALL f3_proxy({idx})%halo_exchange(depth=max_halo_depth_mesh"
+ assert (f"call f3_proxy({idx})%halo_exchange(depth=max_halo_depth_mesh"
")" in result)
# f1 has RW to W dependency
for idx in range(1, 4):
- assert (f"CALL f1_proxy({idx})%halo_exchange(depth=max_halo_depth_mesh"
+ assert (f"call f1_proxy({idx})%halo_exchange(depth=max_halo_depth_mesh"
f")" in result)
assert "loop1_stop = mesh%get_last_halo_cell()" in result
- assert "DO cell = loop1_start, loop1_stop" in result
+ assert "do cell = loop1_start, loop1_stop" in result
for idx in range(1, 4):
- assert f"CALL f1_proxy({idx})%set_dirty()" in result
- assert f"CALL f3_proxy({idx})%set_clean(max_halo_depth_mesh)" in result
+ assert f"call f1_proxy({idx})%set_dirty()" in result
+ assert f"call f3_proxy({idx})%set_clean(max_halo_depth_mesh)" in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -4572,12 +4540,12 @@ def test_rc_dofs_depth():
rc_trans.apply(loop, {"depth": 3})
result = str(psy.gen)
for field in ["f1", "f2"]:
- assert f"IF ({field}_proxy%is_dirty(depth=3)) THEN" in result
- assert f"CALL {field}_proxy%halo_exchange(depth=3)" in result
+ assert f"if ({field}_proxy%is_dirty(depth=3)) then" in result
+ assert f"call {field}_proxy%halo_exchange(depth=3)" in result
assert "loop0_stop = f1_proxy%vspace%get_last_dof_halo(3)" in result
- assert "DO df = loop0_start, loop0_stop" in result
- assert "CALL f1_proxy%set_dirty()" in result
- assert "CALL f1_proxy%set_clean(3)" in result
+ assert "do df = loop0_start, loop0_stop" in result
+ assert "call f1_proxy%set_dirty()" in result
+ assert "call f1_proxy%set_clean(3)" in result


def test_rc_dofs_no_depth():
@@ -4596,12 +4564,12 @@ def test_rc_dofs_no_depth():
rc_trans.apply(loop)
result = str(psy.gen)
- assert "IF (f2_proxy%is_dirty(depth=max_halo_depth_mesh)) THEN" in result
- assert "CALL f2_proxy%halo_exchange(depth=max_halo_depth_mesh)" in result
+ assert "if (f2_proxy%is_dirty(depth=max_halo_depth_mesh)) then" in result
+ assert "call f2_proxy%halo_exchange(depth=max_halo_depth_mesh)" in result
assert "loop0_stop = f1_proxy%vspace%get_last_dof_halo()" in result
- assert "DO df = loop0_start, loop0_stop" in result
- assert "CALL f1_proxy%set_dirty()" not in result
- assert "CALL f1_proxy%set_clean(max_halo_depth_mesh)" in result
+ assert "do df = loop0_start, loop0_stop" in result
+ assert "call f1_proxy%set_dirty()" not in result
+ assert "call f1_proxy%set_clean(max_halo_depth_mesh)" in result


def test_rc_dofs_depth_prev_dep(monkeypatch, annexed, tmpdir):
@@ -4630,12 +4598,12 @@ def test_rc_dofs_depth_prev_dep(monkeypatch, annexed, tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
# Check that the f2 halo exchange is modified
- assert "CALL f2_proxy%halo_exchange(depth=3)" in result
+ assert "call f2_proxy%halo_exchange(depth=3)" in result
# There is a need for a run-time is_dirty check for field f2 as
# this field is not modified in this invoke and therefore its halo
# is in an unknown state before it is read
- assert ("IF (f2_proxy%is_dirty(depth=3)) "
- "THEN") in result
+ assert ("if (f2_proxy%is_dirty(depth=3)) "
+ "then") in result
# Check that the existing halo exchanges (for the first un-modified
# loop) remain unchanged. These are on f1, m1 and m2 without annexed
@@ -4644,12 +4612,12 @@ def test_rc_dofs_depth_prev_dep(monkeypatch, annexed, tmpdir):
if annexed:
fld_hex_names.remove("f1")
for field_name in fld_hex_names:
- assert f"IF ({field_name}_proxy%is_dirty(depth=1)) THEN" in result
- assert f"CALL {field_name}_proxy%halo_exchange(depth=1)" in result
+ assert f"if ({field_name}_proxy%is_dirty(depth=1)) then" in result
+ assert f"call {field_name}_proxy%halo_exchange(depth=1)" in result
assert "loop1_stop = f1_proxy%vspace%get_last_dof_halo(3)" in result
- assert "DO df = loop1_start, loop1_stop" in result
- assert "CALL f1_proxy%set_dirty()" in result
- assert "CALL f1_proxy%set_clean(3)" in result
+ assert "do df = loop1_start, loop1_stop" in result
+ assert "call f1_proxy%set_dirty()" in result
+ assert "call f1_proxy%set_clean(3)" in result


def test_rc_dofs_no_depth_prev_dep():
@@ -4669,16 +4637,16 @@ def test_rc_dofs_no_depth_prev_dep():
result = str(psy.gen)
# Check that the f2 halo exchange is modified
- assert "CALL f2_proxy%halo_exchange(depth=max_halo_depth_mesh)" in result
- assert "IF (f2_proxy%is_dirty(depth=max_halo_depth_mesh)) THEN" in result
+ assert "call f2_proxy%halo_exchange(depth=max_halo_depth_mesh)" in result
+ assert "if (f2_proxy%is_dirty(depth=max_halo_depth_mesh)) then" in result
# Check that the existing f1, m1 and m2 halo exchanges remain unchanged
for fname in ["f1", "m1", "m2"]:
- assert f"IF ({fname}_proxy%is_dirty(depth=1)) THEN" in result
- assert f"CALL {fname}_proxy%halo_exchange(depth=1)" in result
+ assert f"if ({fname}_proxy%is_dirty(depth=1)) then" in result
+ assert f"call {fname}_proxy%halo_exchange(depth=1)" in result
assert "loop1_stop = f1_proxy%vspace%get_last_dof_halo()" in result
- assert "DO df = loop1_start, loop1_stop" in result
- assert "CALL f1_proxy%set_dirty()" in result
- assert "CALL f1_proxy%set_clean(max_halo_depth_mesh)" in result
+ assert "do df = loop1_start, loop1_stop" in result
+ assert "call f1_proxy%set_dirty()" in result
+ assert "call f1_proxy%set_clean(max_halo_depth_mesh)" in result


def test_continuous_no_set_clean():
@@ -4689,9 +4657,9 @@ def test_continuous_no_set_clean():
TEST_API, idx=0, dist_mem=True)
result = str(psy.gen)
assert "loop0_stop = mesh%get_last_halo_cell(1)" in result
- assert "DO cell = loop0_start, loop0_stop" in result
- assert "CALL f1_proxy%set_dirty()" in result
- assert "CALL f1_proxy%set_clean(" not in result
+ assert "do cell = loop0_start, loop0_stop" in result
+ assert "call f1_proxy%set_dirty()" in result
+ assert "call f1_proxy%set_clean(" not in result


def test_discontinuous_no_set_clean():
@@ -4702,8 +4670,8 @@ def test_discontinuous_no_set_clean():
idx=0, dist_mem=True)
result = str(psy.gen)
assert "loop0_stop = mesh%get_last_edge_cell()" in result
- assert "CALL m2_proxy%set_dirty()" in result
- assert "CALL m2_proxy%set_clean(" not in result
+ assert "call m2_proxy%set_dirty()" in result
+ assert "call m2_proxy%set_clean(" not in result


def test_dofs_no_set_clean(monkeypatch, annexed):
@@ -4724,8 +4692,8 @@ def test_dofs_no_set_clean(monkeypatch, annexed):
assert "loop0_stop = f1_proxy%vspace%get_last_dof_annexed()" in result
else:
assert "loop0_stop = f1_proxy%vspace%get_last_dof_owned()" in result
- assert "CALL f1_proxy%set_dirty()" in result
- assert "CALL f1_proxy%set_clean(" not in result
+ assert "call f1_proxy%set_dirty()" in result
+ assert "call f1_proxy%set_clean(" not in result


def test_rc_vector_depth(tmpdir):
@@ -4745,13 +4713,13 @@ def test_rc_vector_depth(tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
- assert "IF (f2_proxy%is_dirty(depth=3)) THEN" in result
- assert "CALL f2_proxy%halo_exchange(depth=3)" in result
+ assert "if (f2_proxy%is_dirty(depth=3)) then" in result
+ assert "call f2_proxy%halo_exchange(depth=3)" in result
assert "loop0_stop = mesh%get_last_halo_cell(3)" in result
for index in range(1, 4):
- assert f"CALL chi_proxy({index})%set_dirty()" in result
+ assert f"call chi_proxy({index})%set_dirty()" in result
for index in range(1, 4):
- assert f"CALL chi_proxy({index})%set_clean(2)" in result
+ assert f"call chi_proxy({index})%set_clean(2)" in result


def test_rc_vector_no_depth(tmpdir):
@@ -4771,13 +4739,13 @@ def test_rc_vector_no_depth(tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
- assert "IF (f2_proxy%is_dirty(depth=max_halo_depth_mesh)) THEN" in result
- assert "CALL f2_proxy%halo_exchange(depth=max_halo_depth_mesh)" in result
+ assert "if (f2_proxy%is_dirty(depth=max_halo_depth_mesh)) then" in result
+ assert "call f2_proxy%halo_exchange(depth=max_halo_depth_mesh)" in result
assert "loop0_stop = mesh%get_last_halo_cell()" in result
for idx in range(1, 4):
- assert f"CALL chi_proxy({idx})%set_dirty()" in result
+ assert f"call chi_proxy({idx})%set_dirty()" in result
for idx in range(1, 4):
- assert (f"CALL chi_proxy({idx})%set_clean(max_halo_depth_mesh - 1)"
+ assert (f"call chi_proxy({idx})%set_clean(max_halo_depth_mesh - 1)"
in result)
@@ -4794,35 +4762,38 @@ def test_rc_no_halo_decrease():
rc_trans = Dynamo0p3RedundantComputationTrans()
# First, change the size of the f2 halo exchange to 3 by performing
# redundant computation in the first loop
- loop = schedule.children[4]
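+    # walk(Loop) locates the loops regardless of their position in the
+    # schedule, so the test does not rely on fixed child indices.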
+ loop = schedule.walk(Loop)[0]
rc_trans.apply(loop, {"depth": 3})
result = str(psy.gen)
- assert "IF (f2_proxy%is_dirty(depth=3)) THEN" in result
- assert "IF (m1_proxy%is_dirty(depth=3)) THEN" in result
- assert "IF (m2_proxy%is_dirty(depth=3)) THEN" in result
+ assert "if (f2_proxy%is_dirty(depth=3)) then" in result
+ assert "if (m1_proxy%is_dirty(depth=3)) then" in result
+ assert "if (m2_proxy%is_dirty(depth=3)) then" in result
# Second, try to change the size of the f2 halo exchange to 2 by
# performing redundant computation in the second loop
- loop = schedule.children[5]
+ schedule = invoke.schedule
+ loop = schedule.walk(Loop)[1]
rc_trans.apply(loop, {"depth": 2})
result = str(psy.gen)
- assert "IF (f2_proxy%is_dirty(depth=3)) THEN" in result
- assert "IF (m1_proxy%is_dirty(depth=3)) THEN" in result
- assert "IF (m2_proxy%is_dirty(depth=3)) THEN" in result
+ assert "if (f2_proxy%is_dirty(depth=3)) then" in result
+ assert "if (m1_proxy%is_dirty(depth=3)) then" in result
+ assert "if (m2_proxy%is_dirty(depth=3)) then" in result
# Third, set the size of the f2 halo exchange to the full halo
# depth by performing redundant computation in the second loop
+ schedule = invoke.schedule
+ loop = schedule.walk(Loop)[1]
rc_trans.apply(loop)
result = str(psy.gen)
- assert "IF (f2_proxy%is_dirty(depth=max_halo_depth_mesh)) THEN" in result
- assert "IF (m1_proxy%is_dirty(depth=3)) THEN" in result
- assert "IF (m2_proxy%is_dirty(depth=3)) THEN" in result
+ assert "if (f2_proxy%is_dirty(depth=max_halo_depth_mesh)) then" in result
+ assert "if (m1_proxy%is_dirty(depth=3)) then" in result
+ assert "if (m2_proxy%is_dirty(depth=3)) then" in result
# Fourth, try to change the size of the f2 halo exchange to 4 by
# performing redundant computation in the first loop
- loop = schedule.children[4]
+ loop = schedule.walk(Loop)[0]
rc_trans.apply(loop, {"depth": 4})
result = str(psy.gen)
- assert "IF (f2_proxy%is_dirty(depth=max_halo_depth_mesh)) THEN" in result
- assert "IF (m1_proxy%is_dirty(depth=4)) THEN" in result
- assert "IF (m2_proxy%is_dirty(depth=4)) THEN" in result
+ assert "if (f2_proxy%is_dirty(depth=max_halo_depth_mesh)) then" in result
+ assert "if (m1_proxy%is_dirty(depth=4)) then" in result
+ assert "if (m2_proxy%is_dirty(depth=4)) then" in result
def test_rc_updated_dependence_analysis():
@@ -4898,10 +4869,10 @@ def test_rc_remove_halo_exchange(tmpdir, monkeypatch):
psy, _ = get_invoke("14.7_halo_annexed.f90",
TEST_API, idx=0, dist_mem=True)
result = str(psy.gen)
- assert "CALL f1_proxy%halo_exchange(depth=1)" in result
- assert "CALL f2_proxy%halo_exchange(depth=1)" in result
- assert "IF (m1_proxy%is_dirty(depth=1)) THEN" in result
- assert "CALL m1_proxy%halo_exchange(depth=1)" in result
+ assert "call f1_proxy%halo_exchange(depth=1)" in result
+ assert "call f2_proxy%halo_exchange(depth=1)" in result
+ assert "if (m1_proxy%is_dirty(depth=1)) then" in result
+ assert "call m1_proxy%halo_exchange(depth=1)" in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -4910,21 +4881,21 @@ def test_rc_remove_halo_exchange(tmpdir, monkeypatch):
schedule = invoke.schedule
#
rc_trans = Dynamo0p3RedundantComputationTrans()
- loop = schedule.children[0]
+ loop = schedule.walk(Loop)[0]
rc_trans.apply(loop, {"depth": 1})
result = str(psy.gen)
- assert "CALL f1_proxy%halo_exchange(depth=1)" not in result
- assert "CALL f2_proxy%halo_exchange(depth=1)" in result
- assert "IF (m1_proxy%is_dirty(depth=1)) THEN" in result
- assert "CALL m1_proxy%halo_exchange(depth=1)" in result
+ assert "call f1_proxy%halo_exchange(depth=1)" not in result
+ assert "call f2_proxy%halo_exchange(depth=1)" in result
+ assert "if (m1_proxy%is_dirty(depth=1)) then" in result
+ assert "call m1_proxy%halo_exchange(depth=1)" in result
#
- loop = schedule.children[1]
+ loop = schedule.walk(Loop)[1]
rc_trans.apply(loop, {"depth": 1})
result = str(psy.gen)
- assert "CALL f1_proxy%halo_exchange(depth=1)" not in result
- assert "CALL f2_proxy%halo_exchange(depth=1)" not in result
- assert "IF (m1_proxy%is_dirty(depth=1)) THEN" in result
- assert "CALL m1_proxy%halo_exchange(depth=1)" in result
+ assert "call f1_proxy%halo_exchange(depth=1)" not in result
+ assert "call f2_proxy%halo_exchange(depth=1)" not in result
+ assert "if (m1_proxy%is_dirty(depth=1)) then" in result
+ assert "call m1_proxy%halo_exchange(depth=1)" in result
def test_rc_max_remove_halo_exchange(tmpdir):
@@ -4945,10 +4916,10 @@ def test_rc_max_remove_halo_exchange(tmpdir):
#
# f3 has "inc" access so there is a check for the halo exchange
# of depth 1
- assert "CALL f3_proxy%halo_exchange(depth=1)" in result
- assert "IF (f3_proxy%is_dirty(depth=1)) THEN" in result
+ assert "call f3_proxy%halo_exchange(depth=1)" in result
+ assert "if (f3_proxy%is_dirty(depth=1)) then" in result
rc_trans = Dynamo0p3RedundantComputationTrans()
- loop = schedule.children[4]
+ loop = schedule.walk(Loop)[0]
rc_trans.apply(loop)
result = str(psy.gen)
@@ -4957,12 +4928,12 @@ def test_rc_max_remove_halo_exchange(tmpdir):
    # and therefore the outermost halo stays dirty. We cannot be
# certain whether the halo exchange is required or not as we don't
# know the depth of the halo.
- assert "CALL f3_proxy%halo_exchange(depth=1)" in result
+ assert "call f3_proxy%halo_exchange(depth=1)" in result
# We do not know whether we need the halo exchange so we include an if
- assert "IF (f3_proxy%is_dirty(depth=1)) THEN" in result
+ assert "if (f3_proxy%is_dirty(depth=1)) then" in result
#
- assert "CALL f4_proxy%halo_exchange(depth=1)" in result
- loop = schedule.children[5]
+ assert "call f4_proxy%halo_exchange(depth=1)" in result
+ loop = schedule.walk(Loop)[-1]
rc_trans.apply(loop)
result = str(psy.gen)
# f4 halo exchange is removed as it is redundantly computed to the
@@ -4970,12 +4941,12 @@ def test_rc_max_remove_halo_exchange(tmpdir):
# are clean. However, we introduce a new halo exchange for
# f5. This could be removed by redundant computation but we don't
# bother as that is not relevant to this test.
- assert "CALL f4_proxy%halo_exchange(depth=1)" not in result
+ assert "call f4_proxy%halo_exchange(depth=1)" not in result
assert LFRicBuild(tmpdir).code_compiles(psy)
-def test_rc_continuous_halo_remove():
+def test_rc_continuous_halo_remove(fortran_writer):
''' Check that we do not remove a halo exchange when the field is
continuous and the redundant computation depth equals the required
halo access depth. The reason for this is that the outer halo
@@ -4987,7 +4958,7 @@ def test_rc_continuous_halo_remove():
psy, invoke = get_invoke("15.1.2_builtin_and_normal_kernel_invoke.f90",
TEST_API, idx=0, dist_mem=True)
schedule = invoke.schedule
- result = str(psy.gen)
+ result = fortran_writer(schedule)
rc_trans = Dynamo0p3RedundantComputationTrans()
f3_inc_hex = schedule.children[2]
f3_inc_loop = schedule.children[4]
@@ -4998,8 +4969,8 @@ def test_rc_continuous_halo_remove():
# exchanges is placed before the f3_inc_loop and one is placed
# before the f3_read_loop (there are three other halo exchanges,
# one each for fields f1, f2 and f4).
- assert result.count("CALL f3_proxy%halo_exchange(depth=1") == 2
- assert result.count("IF (f3_proxy%is_dirty(depth=1)) THEN") == 1
+ assert result.count("call f3_proxy%halo_exchange(depth=1") == 2
+ assert result.count("if (f3_proxy%is_dirty(depth=1)) then") == 1
#
# Applying redundant computation to equal depth on f3_inc_loop and
    # f3_read_loop does not reduce the initial number of halo exchanges.
@@ -5007,27 +4978,26 @@ def test_rc_continuous_halo_remove():
# f3_inc_loop are now to depth 2.
rc_trans.apply(f3_read_loop, {"depth": 3})
rc_trans.apply(f3_inc_loop, {"depth": 3})
- result = str(psy.gen)
- assert result.count("CALL f3_proxy%halo_exchange(depth=") == 2
+ result = fortran_writer(schedule)
+ assert result.count("call f3_proxy%halo_exchange(depth=") == 2
assert f3_inc_hex._compute_halo_depth().value == "2"
assert f3_read_hex._compute_halo_depth().value == "3"
- assert "IF (f3_proxy%is_dirty(depth=2)) THEN" in result
- assert "IF (f3_proxy%is_dirty(depth=3)) THEN" not in result
- #
+ assert "if (f3_proxy%is_dirty(depth=2)) then" in result
+ assert "if (f3_proxy%is_dirty(depth=3)) then" not in result
    # Applying redundant computation to one greater depth on f3_inc_loop
# removes the halo exchange before the f3_read_loop.
# The "is_dirty" check and the halo exchange before the
# f3_inc_loop are now to depth 3.
rc_trans.apply(f3_inc_loop, {"depth": 4})
- result = str(psy.gen)
- assert result.count("CALL f3_proxy%halo_exchange(depth=") == 1
+ result = fortran_writer(schedule)
+ assert result.count("call f3_proxy%halo_exchange(depth=") == 1
assert f3_inc_hex._compute_halo_depth().value == "3"
# Position 7 is now halo exchange on f4 instead of f3
assert schedule.children[7].field != "f3"
- assert "IF (f3_proxy%is_dirty(depth=4)" not in result
+ assert "if (f3_proxy%is_dirty(depth=4)" not in result
-def test_rc_discontinuous_halo_remove(monkeypatch):
+def test_rc_discontinuous_halo_remove(monkeypatch, fortran_writer):
''' Check that we do remove a halo exchange when the field is
discontinuous and the redundant computation depth equals the
required halo access depth. Also check that we do not remove the
@@ -5038,23 +5008,23 @@ def test_rc_discontinuous_halo_remove(monkeypatch):
psy, invoke = get_invoke("15.1.2_builtin_and_normal_kernel_invoke.f90",
TEST_API, idx=0, dist_mem=True)
schedule = invoke.schedule
- result = str(psy.gen)
+ result = fortran_writer(schedule)
rc_trans = Dynamo0p3RedundantComputationTrans()
f4_write_loop = schedule.children[5]
f4_read_loop = schedule.children[9]
- assert "CALL f4_proxy%halo_exchange(depth=1)" in result
- assert "IF (f4_proxy%is_dirty(depth=1)) THEN" not in result
+ assert "call f4_proxy%halo_exchange(depth=1)" in result
+ assert "if (f4_proxy%is_dirty(depth=1)) then" not in result
rc_trans.apply(f4_read_loop, {"depth": 3})
rc_trans.apply(f4_write_loop, {"depth": 2})
- result = str(psy.gen)
- assert "CALL f4_proxy%halo_exchange(depth=3)" in result
- assert "IF (f4_proxy%is_dirty(depth=3)) THEN" not in result
+ result = fortran_writer(schedule)
+ assert "call f4_proxy%halo_exchange(depth=3)" in result
+ assert "if (f4_proxy%is_dirty(depth=3)) then" not in result
# Increase RC depth to 3 and check that halo exchange is removed
# when a discontinuous field has write access
rc_trans.apply(f4_write_loop, {"depth": 3})
- result = str(psy.gen)
- assert "CALL f4_proxy%halo_exchange(depth=" not in result
- assert "IF (f4_proxy%is_dirty(depth=" not in result
+ result = fortran_writer(schedule)
+ assert "call f4_proxy%halo_exchange(depth=" not in result
+ assert "if (f4_proxy%is_dirty(depth=" not in result
# Increase RC depth to 3 and check that halo exchange is not removed
# when a discontinuous field has readwrite access
call = f4_write_loop.loop_body[0]
@@ -5062,12 +5032,12 @@ def test_rc_discontinuous_halo_remove(monkeypatch):
monkeypatch.setattr(f4_arg, "_access", value=AccessType.READWRITE)
monkeypatch.setattr(f4_write_loop, "_upper_bound_halo_depth", value=2)
rc_trans.apply(f4_write_loop, {"depth": 3})
- result = str(psy.gen)
- assert "CALL f4_proxy%halo_exchange(depth=" in result
- assert "IF (f4_proxy%is_dirty(depth=" in result
+ result = fortran_writer(schedule)
+ assert "call f4_proxy%halo_exchange(depth=" in result
+ assert "if (f4_proxy%is_dirty(depth=" in result
-def test_rc_reader_halo_remove():
+def test_rc_reader_halo_remove(fortran_writer):
''' Check that we do not add an unnecessary halo exchange when we
increase the depth of halo that a loop computes but the previous loop
still computes deep enough into the halo to avoid needing a halo
@@ -5077,27 +5047,25 @@ def test_rc_reader_halo_remove():
psy, invoke = get_invoke("15.1.2_builtin_and_normal_kernel_invoke.f90",
TEST_API, idx=0, dist_mem=True)
schedule = invoke.schedule
- result = str(psy.gen)
-
- result = str(psy.gen)
- assert "CALL f2_proxy%halo_exchange(depth=1)" in result
+ result = fortran_writer(schedule)
+ assert "call f2_proxy%halo_exchange(depth=1)" in result
rc_trans = Dynamo0p3RedundantComputationTrans()
# Redundant computation to avoid halo exchange for f2
rc_trans.apply(schedule.children[1], {"depth": 2})
- result = str(psy.gen)
- assert "CALL f2_proxy%halo_exchange(" not in result
+ result = fortran_writer(schedule)
+ assert "call f2_proxy%halo_exchange(" not in result
# Redundant computation to depth 2 in f2 reader loop should not
# cause a new halo exchange as it is still covered by depth=2 in
# the writer loop
rc_trans.apply(schedule.children[4], {"depth": 2})
- result = str(psy.gen)
- assert "CALL f2_proxy%halo_exchange(" not in result
+ result = fortran_writer(schedule)
+ assert "call f2_proxy%halo_exchange(" not in result
-def test_rc_vector_reader_halo_remove():
+def test_rc_vector_reader_halo_remove(fortran_writer):
''' Check that we do not add unnecessary halo exchanges for a vector
field when we increase the depth of halo that a loop computes but
the previous loop still computes deep enough into the halo to
@@ -5105,7 +5073,7 @@ def test_rc_vector_reader_halo_remove():
psy, invoke = get_invoke("8.2.1_multikernel_invokes_w3_vector.f90",
TEST_API, idx=0, dist_mem=True)
schedule = invoke.schedule
- result = str(psy.gen)
+ result = fortran_writer(schedule)
assert "is_dirty" not in result
assert "halo_exchange" not in result
@@ -5114,7 +5082,7 @@ def test_rc_vector_reader_halo_remove():
# Redundant computation for first loop
rc_trans.apply(schedule.children[0], {"depth": 1})
- result = str(psy.gen)
+ result = fortran_writer(schedule)
assert result.count("is_dirty") == 3
assert result.count("halo_exchange") == 3
@@ -5122,19 +5090,19 @@ def test_rc_vector_reader_halo_remove():
# cause a new halo exchange as it is still covered by depth=1 in
# the writer loop
rc_trans.apply(schedule.children[4], {"depth": 1})
- result = str(psy.gen)
+ result = fortran_writer(schedule)
assert result.count("is_dirty") == 3
assert result.count("halo_exchange") == 3
-def test_rc_vector_reader_halo_readwrite():
+def test_rc_vector_reader_halo_readwrite(fortran_writer):
''' When we increase the depth of halo that a loop computes but the
    previous loop still computes deep enough into the halo, the added
halo exchanges stem from the vector readwrite access. '''
psy, invoke = get_invoke("8.2.2_multikernel_invokes_wtheta_vector.f90",
TEST_API, idx=0, dist_mem=True)
schedule = invoke.schedule
- result = str(psy.gen)
+ result = fortran_writer(schedule)
assert "is_dirty" not in result
assert "halo_exchange" not in result
@@ -5144,14 +5112,14 @@ def test_rc_vector_reader_halo_readwrite():
# Redundant computation for first loop: both fields have
# read dependencies for all three components
rc_trans.apply(schedule.children[0], {"depth": 1})
- result = str(psy.gen)
+ result = fortran_writer(schedule)
assert result.count("is_dirty") == 6
assert result.count("halo_exchange") == 6
# Redundant computation in reader loop causes new halo exchanges
# due to readwrite dependency in f3
rc_trans.apply(schedule.children[7], {"depth": 1})
- result = str(psy.gen)
+ result = fortran_writer(schedule)
assert result.count("is_dirty") == 9
assert result.count("halo_exchange") == 9
@@ -5159,7 +5127,7 @@ def test_rc_vector_reader_halo_readwrite():
# additional halo exchanges (3 more due to readwrite to read
# dependency in f1)
rc_trans.apply(schedule.children[10], {"depth": 2})
- result = str(psy.gen)
+ result = fortran_writer(schedule)
# Check for additional halo exchanges
assert result.count("halo_exchange") == 12
# Check that additional halo exchanges for all three f1
@@ -5168,11 +5136,11 @@ def test_rc_vector_reader_halo_readwrite():
for idvct in range(1, 4):
idx = str(idvct)
assert (
- "CALL f1_proxy(" + idx + ")%halo_exchange(depth=2)") in result
+ "call f1_proxy(" + idx + ")%halo_exchange(depth=2)") in result
assert (
- " IF (f1_proxy(" + idx + ")%is_dirty(depth=2)) THEN\n"
- " CALL f1_proxy(" + idx + ")%halo_exchange(depth=2)\n"
- " END IF\n") not in result
+ " if (f1_proxy(" + idx + ")%is_dirty(depth=2)) then\n"
+ " call f1_proxy(" + idx + ")%halo_exchange(depth=2)\n"
+ " end if\n") not in result
def test_stencil_rc_max_depth_1(monkeypatch):
@@ -5632,22 +5600,22 @@ def test_rc_colour(tmpdir):
result = str(psy.gen)
assert (
- " IF (f2_proxy%is_dirty(depth=2)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=2)\n"
- " END IF\n"
- " IF (m1_proxy%is_dirty(depth=2)) THEN\n"
- " CALL m1_proxy%halo_exchange(depth=2)\n"
- " END IF\n"
- " IF (m2_proxy%is_dirty(depth=2)) THEN\n"
- " CALL m2_proxy%halo_exchange(depth=2)\n"
- " END IF\n" in result)
- assert " cmap => mesh%get_colour_map()\n" in result
+ " if (f2_proxy%is_dirty(depth=2)) then\n"
+ " call f2_proxy%halo_exchange(depth=2)\n"
+ " end if\n"
+ " if (m1_proxy%is_dirty(depth=2)) then\n"
+ " call m1_proxy%halo_exchange(depth=2)\n"
+ " end if\n"
+ " if (m2_proxy%is_dirty(depth=2)) then\n"
+ " call m2_proxy%halo_exchange(depth=2)\n"
+ " end if\n" in result)
+ assert " cmap => mesh%get_colour_map()\n" in result
assert "loop0_stop = ncolour" in result
assert ("last_halo_cell_all_colours = "
"mesh%get_last_halo_cell_all_colours()" in result)
assert (
- " DO colour = loop0_start, loop0_stop, 1\n"
- " DO cell = loop1_start, last_halo_cell_all_colours(colour,2)"
+ " do colour = loop0_start, loop0_stop, 1\n"
+ " do cell = loop1_start, last_halo_cell_all_colours(colour,2), 1"
in result)
# We've requested redundant computation out to the level 2 halo
@@ -5655,8 +5623,8 @@ def test_rc_colour(tmpdir):
# dirty. This means that all of the halo is dirty apart from level
# 1.
assert (
- " CALL f1_proxy%set_dirty()\n"
- " CALL f1_proxy%set_clean(1)" in result)
+ " call f1_proxy%set_dirty()\n"
+ " call f1_proxy%set_clean(1)" in result)
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -5681,28 +5649,28 @@ def test_rc_max_colour(tmpdir):
result = str(psy.gen)
assert (
- " IF (f2_proxy%is_dirty(depth=max_halo_depth_mesh)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=max_halo_depth_mesh)\n"
- " END IF\n"
- " IF (m1_proxy%is_dirty(depth=max_halo_depth_mesh)) THEN\n"
- " CALL m1_proxy%halo_exchange(depth=max_halo_depth_mesh)\n"
- " END IF\n"
- " IF (m2_proxy%is_dirty(depth=max_halo_depth_mesh)) THEN\n"
- " CALL m2_proxy%halo_exchange(depth=max_halo_depth_mesh)\n"
- " END IF\n" in result)
- assert " cmap => mesh%get_colour_map()\n" in result
+ " if (f2_proxy%is_dirty(depth=max_halo_depth_mesh)) then\n"
+ " call f2_proxy%halo_exchange(depth=max_halo_depth_mesh)\n"
+ " end if\n"
+ " if (m1_proxy%is_dirty(depth=max_halo_depth_mesh)) then\n"
+ " call m1_proxy%halo_exchange(depth=max_halo_depth_mesh)\n"
+ " end if\n"
+ " if (m2_proxy%is_dirty(depth=max_halo_depth_mesh)) then\n"
+ " call m2_proxy%halo_exchange(depth=max_halo_depth_mesh)\n"
+ " end if\n" in result)
+ assert " cmap => mesh%get_colour_map()\n" in result
assert "loop0_stop = ncolour" in result
assert ("last_halo_cell_all_colours = "
"mesh%get_last_halo_cell_all_colours()" in result)
assert (
- " DO colour = loop0_start, loop0_stop, 1\n"
- " DO cell = loop1_start, last_halo_cell_all_colours(colour,"
+ " do colour = loop0_start, loop0_stop, 1\n"
+ " do cell = loop1_start, last_halo_cell_all_colours(colour,"
"max_halo_depth_mesh), 1\n"
in result)
assert (
- " CALL f1_proxy%set_dirty()\n"
- " CALL f1_proxy%set_clean(max_halo_depth_mesh - 1)" in result)
+ " call f1_proxy%set_dirty()\n"
+ " call f1_proxy%set_clean(max_halo_depth_mesh - 1)" in result)
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -5754,32 +5722,32 @@ def test_rc_then_colour(tmpdir):
result = str(psy.gen)
assert (
- " IF (f2_proxy%is_dirty(depth=3)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=3)\n"
- " END IF\n"
- " IF (m1_proxy%is_dirty(depth=3)) THEN\n"
- " CALL m1_proxy%halo_exchange(depth=3)\n"
- " END IF\n"
- " IF (m2_proxy%is_dirty(depth=3)) THEN\n"
- " CALL m2_proxy%halo_exchange(depth=3)\n"
- " END IF\n" in result)
- assert " cmap => mesh%get_colour_map()\n" in result
+ " if (f2_proxy%is_dirty(depth=3)) then\n"
+ " call f2_proxy%halo_exchange(depth=3)\n"
+ " end if\n"
+ " if (m1_proxy%is_dirty(depth=3)) then\n"
+ " call m1_proxy%halo_exchange(depth=3)\n"
+ " end if\n"
+ " if (m2_proxy%is_dirty(depth=3)) then\n"
+ " call m2_proxy%halo_exchange(depth=3)\n"
+ " end if\n" in result)
+ assert " cmap => mesh%get_colour_map()\n" in result
assert "loop0_stop = ncolour" in result
assert ("last_halo_cell_all_colours = "
"mesh%get_last_halo_cell_all_colours()" in result)
assert (
- " DO colour = loop0_start, loop0_stop, 1\n"
- " DO cell = loop1_start, last_halo_cell_all_colours(colour,3),"
+ " do colour = loop0_start, loop0_stop, 1\n"
+ " do cell = loop1_start, last_halo_cell_all_colours(colour,3),"
" 1\n"
- " CALL testkern_code(nlayers_f1, a, f1_data,"
+ " call testkern_code(nlayers_f1, a, f1_data,"
" f2_data, m1_data, m2_data, ndf_w1, undf_w1, "
"map_w1(:,cmap(colour,cell)), ndf_w2, undf_w2, "
"map_w2(:,cmap(colour,cell)), ndf_w3, undf_w3, "
"map_w3(:,cmap(colour,cell)))\n" in result)
assert (
- " CALL f1_proxy%set_dirty()\n"
- " CALL f1_proxy%set_clean(2)" in result)
+ " call f1_proxy%set_dirty()\n"
+ " call f1_proxy%set_clean(2)" in result)
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -5810,27 +5778,27 @@ def test_rc_then_colour2(tmpdir):
result = str(psy.gen)
assert (
- " IF (f2_proxy%is_dirty(depth=max_halo_depth_mesh)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=max_halo_depth_mesh)\n"
- " END IF\n"
- " IF (m1_proxy%is_dirty(depth=max_halo_depth_mesh)) THEN\n"
- " CALL m1_proxy%halo_exchange(depth=max_halo_depth_mesh)\n"
- " END IF\n"
- " IF (m2_proxy%is_dirty(depth=max_halo_depth_mesh)) THEN\n"
- " CALL m2_proxy%halo_exchange(depth=max_halo_depth_mesh)\n"
- " END IF\n" in result)
- assert " cmap => mesh%get_colour_map()\n" in result
+ " if (f2_proxy%is_dirty(depth=max_halo_depth_mesh)) then\n"
+ " call f2_proxy%halo_exchange(depth=max_halo_depth_mesh)\n"
+ " end if\n"
+ " if (m1_proxy%is_dirty(depth=max_halo_depth_mesh)) then\n"
+ " call m1_proxy%halo_exchange(depth=max_halo_depth_mesh)\n"
+ " end if\n"
+ " if (m2_proxy%is_dirty(depth=max_halo_depth_mesh)) then\n"
+ " call m2_proxy%halo_exchange(depth=max_halo_depth_mesh)\n"
+ " end if\n" in result)
+ assert " cmap => mesh%get_colour_map()\n" in result
assert "loop0_stop = ncolour" in result
assert ("last_halo_cell_all_colours = mesh%"
"get_last_halo_cell_all_colours()" in result)
assert (
- " DO colour = loop0_start, loop0_stop, 1\n"
- " DO cell = loop1_start, last_halo_cell_all_colours(colour,"
+ " do colour = loop0_start, loop0_stop, 1\n"
+ " do cell = loop1_start, last_halo_cell_all_colours(colour,"
"max_halo_depth_mesh), 1\n" in result)
assert (
- " CALL f1_proxy%set_dirty()\n"
- " CALL f1_proxy%set_clean(max_halo_depth_mesh - 1)" in result)
+ " call f1_proxy%set_dirty()\n"
+ " call f1_proxy%set_clean(max_halo_depth_mesh - 1)" in result)
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -5865,29 +5833,29 @@ def test_loop_fuse_then_rc(tmpdir):
assert "max_halo_depth_mesh = mesh%get_halo_depth()" in result
assert (
- " IF (f1_proxy%is_dirty(depth=max_halo_depth_mesh - 1)) THEN\n"
- " CALL f1_proxy%halo_exchange(depth=max_halo_depth_mesh - 1)\n"
- " END IF\n"
- " IF (f2_proxy%is_dirty(depth=max_halo_depth_mesh)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=max_halo_depth_mesh)\n"
- " END IF\n"
- " IF (m1_proxy%is_dirty(depth=max_halo_depth_mesh)) THEN\n"
- " CALL m1_proxy%halo_exchange(depth=max_halo_depth_mesh)\n"
- " END IF\n"
- " IF (m2_proxy%is_dirty(depth=max_halo_depth_mesh)) THEN\n"
- " CALL m2_proxy%halo_exchange(depth=max_halo_depth_mesh)\n"
- " END IF\n" in result)
- assert " cmap => mesh%get_colour_map()\n" in result
+ " if (f1_proxy%is_dirty(depth=max_halo_depth_mesh - 1)) then\n"
+ " call f1_proxy%halo_exchange(depth=max_halo_depth_mesh - 1)\n"
+ " end if\n"
+ " if (f2_proxy%is_dirty(depth=max_halo_depth_mesh)) then\n"
+ " call f2_proxy%halo_exchange(depth=max_halo_depth_mesh)\n"
+ " end if\n"
+ " if (m1_proxy%is_dirty(depth=max_halo_depth_mesh)) then\n"
+ " call m1_proxy%halo_exchange(depth=max_halo_depth_mesh)\n"
+ " end if\n"
+ " if (m2_proxy%is_dirty(depth=max_halo_depth_mesh)) then\n"
+ " call m2_proxy%halo_exchange(depth=max_halo_depth_mesh)\n"
+ " end if\n" in result)
+ assert " cmap => mesh%get_colour_map()\n" in result
assert "loop0_stop = ncolour" in result
assert ("last_halo_cell_all_colours = mesh%"
"get_last_halo_cell_all_colours()" in result)
assert (
- " DO colour = loop0_start, loop0_stop, 1\n"
- " DO cell = loop1_start, last_halo_cell_all_colours(colour,"
+ " do colour = loop0_start, loop0_stop, 1\n"
+ " do cell = loop1_start, last_halo_cell_all_colours(colour,"
"max_halo_depth_mesh), 1\n" in result)
assert (
- " CALL f1_proxy%set_dirty()\n"
- " CALL f1_proxy%set_clean(max_halo_depth_mesh - 1)" in result)
+ " call f1_proxy%set_dirty()\n"
+ " call f1_proxy%set_clean(max_halo_depth_mesh - 1)" in result)
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -6285,7 +6253,6 @@ def test_haloex_rc4_colouring(tmpdir, monkeypatch, annexed):
psy, invoke = get_invoke("14.10_halo_continuous_cell_w_to_r.f90",
TEST_API, idx=0)
schedule = invoke.schedule
- result = str(psy.gen)
if annexed:
index = 1
@@ -6339,33 +6306,33 @@ def test_intergrid_colour(dist_mem, trans_class, tmpdir):
loop_trans.apply(loop)
gen = str(psy.gen).lower()
expected = '''\
- ncolour_fld_m = mesh_fld_m%get_ncolours()
- cmap_fld_m => mesh_fld_m%get_colour_map()'''
+ ncolour_fld_m = mesh_fld_m%get_ncolours()
+ cmap_fld_m => mesh_fld_m%get_colour_map()'''
assert expected in gen
expected = '''\
- ncolour_cmap_fld_c = mesh_cmap_fld_c%get_ncolours()
- cmap_cmap_fld_c => mesh_cmap_fld_c%get_colour_map()'''
+ ncolour_cmap_fld_c = mesh_cmap_fld_c%get_ncolours()
+ cmap_cmap_fld_c => mesh_cmap_fld_c%get_colour_map()'''
assert expected in gen
assert "loop1_stop = ncolour_fld_m" in gen
assert "loop2_stop" not in gen
- assert " do colour = loop1_start, loop1_stop, 1\n" in gen
+ assert " do colour = loop1_start, loop1_stop, 1\n" in gen
if trans_class[2]:
assert f"!${trans_class[2]} " in gen
if dist_mem:
assert ("last_halo_cell_all_colours_fld_m = "
"mesh_fld_m%get_last_halo_cell_all_colours()" in gen)
expected = (
- " do cell = loop2_start, last_halo_cell_all_colours_fld_m"
+ " do cell = loop2_start, last_halo_cell_all_colours_fld_m"
"(colour,1), 1\n")
else:
assert ("last_edge_cell_all_colours_fld_m = "
"mesh_fld_m%get_last_edge_cell_all_colours()" in gen)
expected = (
- " do cell = loop2_start, last_edge_cell_all_colours_fld_m"
+ " do cell = loop2_start, last_edge_cell_all_colours_fld_m"
"(colour), 1\n")
assert expected in gen
expected = (
- " call prolong_test_kernel_code(nlayers_fld_f, cell_map_fld_m"
+ " call prolong_test_kernel_code(nlayers_fld_f, cell_map_fld_m"
"(:,:,cmap_fld_m(colour,cell)), ncpc_fld_f_fld_m_x, "
"ncpc_fld_f_fld_m_y, ncell_fld_f, fld_f_data, fld_m_data, "
"ndf_w1, undf_w1, map_w1, undf_w2, "
@@ -6375,35 +6342,6 @@ def test_intergrid_colour(dist_mem, trans_class, tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
-def test_intergrid_colour_errors(dist_mem, monkeypatch):
- ''' Check that we raise the expected error when colouring is not applied
- correctly to inter-grid kernels within a loop over colours. '''
- ctrans = Dynamo0p3ColourTrans()
- # Use an example that contains both prolongation and restriction kernels
- psy, invoke = get_invoke("22.2_intergrid_3levels.f90",
- TEST_API, idx=0, dist_mem=dist_mem)
- schedule = invoke.schedule
- # First two kernels are prolongation, last two are restriction
- loops = schedule.walk(Loop)
- loop = loops[1]
- # To a prolong kernel
- ctrans.apply(loop)
- # Update our list of loops
- loops = schedule.walk(Loop)
- # Check that the upper bound is correct
- upperbound = loops[1]._upper_bound_fortran()
- assert upperbound == "ncolour_fld_m"
- # Manually add an un-coloured kernel to the loop that we coloured
- loop = loops[2]
- kern = loops[3].loop_body[0].detach()
- monkeypatch.setattr(kern, "is_coloured", lambda: True)
- loop.loop_body.children.append(kern)
- with pytest.raises(InternalError) as err:
- _ = loops[1]._upper_bound_fortran()
- assert ("All kernels within a loop over colours must have been coloured "
- "but kernel 'restrict_test_kernel_code' has not" in str(err.value))
-
-
def test_intergrid_omp_parado(dist_mem, tmpdir):
'''Check that we can add an OpenMP parallel loop to a loop containing
an inter-grid kernel call.
@@ -6428,19 +6366,19 @@ def test_intergrid_omp_parado(dist_mem, tmpdir):
otrans.apply(loops[5])
gen = str(psy.gen)
assert "loop4_stop = ncolour_cmap_fld_c" in gen
- assert (" DO colour = loop4_start, loop4_stop, 1\n"
- " !$omp parallel do default(shared), private(cell), "
+ assert (" do colour = loop4_start, loop4_stop, 1\n"
+ " !$omp parallel do default(shared), private(cell), "
"schedule(static)\n" in gen)
if dist_mem:
assert ("last_halo_cell_all_colours_cmap_fld_c = "
"mesh_cmap_fld_c%get_last_halo_cell_all_colours()" in gen)
- assert ("DO cell = loop5_start, last_halo_cell_all_colours_cmap_fld_c"
+ assert ("do cell = loop5_start, last_halo_cell_all_colours_cmap_fld_c"
"(colour,1), 1\n" in gen)
else:
assert ("last_edge_cell_all_colours_cmap_fld_c = mesh_cmap_fld_c%"
"get_last_edge_cell_all_colours()" in gen)
- assert ("DO cell = loop5_start, last_edge_cell_all_colours_cmap_fld_c"
+ assert ("do cell = loop5_start, last_edge_cell_all_colours_cmap_fld_c"
"(colour), 1\n" in gen)
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -6475,19 +6413,19 @@ def test_intergrid_omp_para_region1(dist_mem, tmpdir):
"get_last_edge_cell_all_colours()\n" in gen)
upper_bound = "last_edge_cell_all_colours_cmap_fld_c(colour)"
assert "loop0_stop = ncolour_cmap_fld_c\n" in gen
- assert (f" DO colour = loop0_start, loop0_stop, 1\n"
- f" !$omp parallel default(shared), private(cell)\n"
- f" !$omp do schedule(static)\n"
- f" DO cell = loop1_start, {upper_bound}, 1\n"
- f" CALL prolong_test_kernel_code(nlayers_fld_m, "
+ assert (f" do colour = loop0_start, loop0_stop, 1\n"
+ f" !$omp parallel default(shared), private(cell)\n"
+ f" !$omp do schedule(static)\n"
+ f" do cell = loop1_start, {upper_bound}, 1\n"
+ f" call prolong_test_kernel_code(nlayers_fld_m, "
f"cell_map_cmap_fld_c(:,:,cmap_cmap_fld_c(colour,cell)), "
f"ncpc_fld_m_cmap_fld_c_x, ncpc_fld_m_cmap_fld_c_y, ncell_fld_m, "
f"fld_m_data, cmap_fld_c_data, ndf_w1, undf_w1, "
f"map_w1, undf_w2, map_w2(:,cmap_cmap_fld_c(colour,cell)))\n"
- f" END DO\n"
- f" !$omp end do\n"
- f" !$omp end parallel\n"
- f" END DO\n" in gen)
+ f" enddo\n"
+ f" !$omp end do\n"
+ f" !$omp end parallel\n"
+ f" enddo\n" in gen)
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -6584,13 +6522,13 @@ def test_accenterdata_builtin(tmpdir):
"map_w1,map_w2,map_w3,ndf_w1,ndf_w2,ndf_w3,nlayers_f1,"
"undf_w1,undf_w2,undf_w3)" in output)
assert "loop2_stop = undf_aspc1_f1" in output
- assert (" !$acc loop independent\n"
- " do df = loop2_start, loop2_stop, 1\n"
- " ! built-in: setval_c (set a real-valued field to "
+ assert (" !$acc loop independent\n"
+ " do df = loop2_start, loop2_stop, 1\n"
+ " ! built-in: setval_c (set a real-valued field to "
"a real scalar value)\n"
- " f1_data(df) = 0.0_r_def\n"
- " end do\n"
- " !$acc end parallel\n" in output)
+ " f1_data(df) = 0.0_r_def\n"
+ " enddo\n"
+ " !$acc end parallel\n" in output)
# Class ACCEnterDataTrans end
@@ -6610,11 +6548,11 @@ def test_acckernelstrans():
code = str(psy.gen)
assert "loop0_stop = f1_proxy%vspace%get_ncell()" in code
assert (
- " !$acc kernels\n"
- " DO cell = loop0_start, loop0_stop, 1\n" in code)
+ " !$acc kernels\n"
+ " do cell = loop0_start, loop0_stop, 1\n" in code)
assert (
- " END DO\n"
- " !$acc end kernels\n" in code)
+ " enddo\n"
+ " !$acc end kernels\n" in code)
def test_acckernelstrans_dm():
@@ -6636,16 +6574,15 @@ def test_acckernelstrans_dm():
code = str(psy.gen)
assert "loop0_stop = mesh%get_last_halo_cell(1)" in code
assert (
- " !$acc kernels\n"
- " DO cell = loop0_start, loop0_stop, 1\n" in code)
+ " !$acc kernels\n"
+ " do cell = loop0_start, loop0_stop, 1\n" in code)
assert (
- " END DO\n"
- " !$acc end kernels\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the above "
+ " enddo\n"
+ " !$acc end kernels\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the above "
"loop(s)\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n" in code)
+ " call f1_proxy%set_dirty()\n" in code)
# Class ACCKernelsTrans end
@@ -6669,15 +6606,14 @@ def test_accparalleltrans(tmpdir):
code = str(psy.gen)
assert "loop0_stop = f1_proxy%vspace%get_ncell()" in code
assert (
- " !$acc enter data copyin(f1_data,f2_data,m1_data,"
+ " !$acc enter data copyin(f1_data,f2_data,m1_data,"
"m2_data,map_w1,map_w2,map_w3,ndf_w1,ndf_w2,ndf_w3,nlayers_f1,"
"undf_w1,undf_w2,undf_w3)\n"
- " !\n"
- " !$acc parallel default(present)\n"
- " DO cell = loop0_start, loop0_stop, 1") in code
+ " !$acc parallel default(present)\n"
+ " do cell = loop0_start, loop0_stop, 1") in code
assert (
- " END DO\n"
- " !$acc end parallel\n") in code
+ " enddo\n"
+ " !$acc end parallel\n") in code
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -6704,19 +6640,18 @@ def test_accparalleltrans_dm(tmpdir):
acc_enter_trans.apply(sched)
code = str(psy.gen)
- assert (" !$acc parallel default(present)\n"
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_code(nlayers_f1, a, f1_data, "
+ assert (" !$acc parallel default(present)\n"
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_code(nlayers_f1, a, f1_data, "
"f2_data, m1_data, m2_data, ndf_w1, undf_w1, "
"map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, "
"undf_w3, map_w3(:,cell))\n"
- " END DO\n"
- " !$acc end parallel\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the above "
+ " enddo\n"
+ " !$acc end parallel\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the above "
"loop(s)\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n" in code)
+ " call f1_proxy%set_dirty()\n" in code)
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -6744,18 +6679,17 @@ def test_acclooptrans():
code = str(psy.gen)
assert "loop0_stop = ncolour" in code
assert (
- " !$acc enter data copyin(f1_data,f2_data,m1_data,"
+ " !$acc enter data copyin(f1_data,f2_data,m1_data,"
"m2_data,map_w1,map_w2,map_w3,ndf_w1,ndf_w2,ndf_w3,nlayers_f1,"
"undf_w1,undf_w2,undf_w3)\n"
- " !\n"
- " !$acc parallel default(present)\n"
- " DO colour = loop0_start, loop0_stop, 1\n"
- " !$acc loop independent\n"
- " DO cell = loop1_start, last_edge_cell_all_colours(colour), 1"
+ " !$acc parallel default(present)\n"
+ " do colour = loop0_start, loop0_stop, 1\n"
+ " !$acc loop independent\n"
+ " do cell = loop1_start, last_edge_cell_all_colours(colour), 1"
in code)
assert (
- " END DO\n"
- " !$acc end parallel\n") in code
+ " enddo\n"
+ " !$acc end parallel\n") in code
# Class ACCLoopTrans end
@@ -6801,17 +6735,15 @@ def test_async_hex(tmpdir):
ahex_trans.apply(f2_hex)
result = str(psy.gen)
assert (
- " ! Call kernels and communication routines\n"
- " !\n"
- " IF (f1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f2_proxy%halo_exchange_start(depth=1)\n"
- " END IF\n"
- " IF (f2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f2_proxy%halo_exchange_finish(depth=1)\n"
- " END IF\n") in result
+ " if (f1_proxy%is_dirty(depth=1)) then\n"
+ " call f1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f2_proxy%is_dirty(depth=1)) then\n"
+ " call f2_proxy%halo_exchange_start(depth=1)\n"
+ " end if\n"
+ " if (f2_proxy%is_dirty(depth=1)) then\n"
+ " call f2_proxy%halo_exchange_finish(depth=1)\n"
+ " end if\n") in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -6836,18 +6768,18 @@ def test_async_hex_move_1(tmpdir):
mtrans.apply(schedule.children[4], schedule.children[3])
result = str(psy.gen)
assert (
- " IF (m1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m1_proxy%halo_exchange_start(depth=1)\n"
- " END IF\n"
- " IF (f2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m1_proxy%halo_exchange_finish(depth=1)\n"
- " END IF\n") in result
+ " if (m1_proxy%is_dirty(depth=1)) then\n"
+ " call m1_proxy%halo_exchange_start(depth=1)\n"
+ " end if\n"
+ " if (f2_proxy%is_dirty(depth=1)) then\n"
+ " call f2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m2_proxy%is_dirty(depth=1)) then\n"
+ " call m2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m1_proxy%is_dirty(depth=1)) then\n"
+ " call m1_proxy%halo_exchange_finish(depth=1)\n"
+ " end if\n") in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -6934,13 +6866,13 @@ def test_async_hex_move_2(tmpdir, monkeypatch):
result = str(psy.gen)
assert "loop3_stop = mesh%get_last_halo_cell(1)" in result
assert (
- " CALL f2_proxy%halo_exchange_start(depth=1)\n"
- " DO cell = loop3_start, loop3_stop, 1\n"
- " CALL testkern_any_space_3_code(cell, nlayers_op, "
+ " call f2_proxy%halo_exchange_start(depth=1)\n"
+ " do cell = loop3_start, loop3_stop, 1\n"
+ " call testkern_any_space_3_code(cell, nlayers_op, "
"op_proxy%ncell_3d, op_local_stencil, ndf_aspc1_op, "
"ndf_aspc2_op)\n"
- " END DO\n"
- " CALL f2_proxy%halo_exchange_finish(depth=1)\n") in result
+ " enddo\n"
+ " call f2_proxy%halo_exchange_finish(depth=1)\n") in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -7021,33 +6953,33 @@ def test_rc_remove_async_halo_exchange(monkeypatch, tmpdir):
ahex_trans.apply(f1_hex)
result = str(psy.gen)
- assert "CALL f1_proxy%halo_exchange_start(depth=1)" in result
- assert "CALL f1_proxy%halo_exchange_finish(depth=1)" in result
- assert "CALL f2_proxy%halo_exchange_start(depth=1)" in result
- assert "CALL f2_proxy%halo_exchange_finish(depth=1)" in result
- assert "IF (m1_proxy%is_dirty(depth=1)) THEN" in result
- assert "CALL m1_proxy%halo_exchange(depth=1)" in result
+ assert "call f1_proxy%halo_exchange_start(depth=1)" in result
+ assert "call f1_proxy%halo_exchange_finish(depth=1)" in result
+ assert "call f2_proxy%halo_exchange_start(depth=1)" in result
+ assert "call f2_proxy%halo_exchange_finish(depth=1)" in result
+ assert "if (m1_proxy%is_dirty(depth=1)) then" in result
+ assert "call m1_proxy%halo_exchange(depth=1)" in result
rc_trans = Dynamo0p3RedundantComputationTrans()
- loop = schedule.children[0]
+ loop = schedule.walk(Loop)[0]
rc_trans.apply(loop, {"depth": 1})
result = str(psy.gen)
- assert "CALL f1_proxy%halo_exchange_start(depth=1)" not in result
- assert "CALL f1_proxy%halo_exchange_finish(depth=1)" not in result
- assert "CALL f2_proxy%halo_exchange_start(depth=1)" in result
- assert "CALL f2_proxy%halo_exchange_finish(depth=1)" in result
- assert "IF (m1_proxy%is_dirty(depth=1)) THEN" in result
- assert "CALL m1_proxy%halo_exchange(depth=1)" in result
+ assert "call f1_proxy%halo_exchange_start(depth=1)" not in result
+ assert "call f1_proxy%halo_exchange_finish(depth=1)" not in result
+ assert "call f2_proxy%halo_exchange_start(depth=1)" in result
+ assert "call f2_proxy%halo_exchange_finish(depth=1)" in result
+ assert "if (m1_proxy%is_dirty(depth=1)) then" in result
+ assert "call m1_proxy%halo_exchange(depth=1)" in result
#
- loop = schedule.children[1]
+ loop = schedule.walk(Loop)[1]
rc_trans.apply(loop, {"depth": 1})
result = str(psy.gen)
- assert "CALL f1_proxy%halo_exchange_start(depth=1)" not in result
- assert "CALL f1_proxy%halo_exchange_finish(depth=1)" not in result
- assert "CALL f2_proxy%halo_exchange_start(depth=1)" not in result
- assert "CALL f2_proxy%halo_exchange_finish(depth=1)" not in result
- assert "IF (m1_proxy%is_dirty(depth=1)) THEN" in result
- assert "CALL m1_proxy%halo_exchange(depth=1)" in result
+ assert "call f1_proxy%halo_exchange_start(depth=1)" not in result
+ assert "call f1_proxy%halo_exchange_finish(depth=1)" not in result
+ assert "call f2_proxy%halo_exchange_start(depth=1)" not in result
+ assert "call f2_proxy%halo_exchange_finish(depth=1)" not in result
+ assert "if (m1_proxy%is_dirty(depth=1)) then" in result
+ assert "call m1_proxy%halo_exchange(depth=1)" in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -7078,17 +7010,17 @@ def test_rc_redund_async_halo_exchange(monkeypatch, tmpdir):
ahex_trans.apply(m2_hex)
result = str(psy.gen)
assert (
- " IF (m2_proxy%is_dirty(depth=2)) THEN\n"
- " CALL m2_proxy%halo_exchange_start(depth=2)\n"
- " END IF\n"
- " IF (m2_proxy%is_dirty(depth=2)) THEN\n"
- " CALL m2_proxy%halo_exchange_finish(depth=2)\n"
- " END IF\n") in result
+ " if (m2_proxy%is_dirty(depth=2)) then\n"
+ " call m2_proxy%halo_exchange_start(depth=2)\n"
+ " end if\n"
+ " if (m2_proxy%is_dirty(depth=2)) then\n"
+ " call m2_proxy%halo_exchange_finish(depth=2)\n"
+ " end if\n") in result
assert (
- " ! Set halos dirty/clean for fields modified in the above loop\n"
- " !\n"
- " CALL m2_proxy%set_dirty()\n"
- " CALL m2_proxy%set_clean(2)\n") in result
+ " ! Set halos dirty/clean for fields modified in the above "
+ "loop(s)\n"
+ " call m2_proxy%set_dirty()\n"
+ " call m2_proxy%set_clean(2)\n") in result
# move m2 async halo exchange start and end then check depths and
# set clean are still generated correctly for m2
@@ -7097,40 +7029,39 @@ def test_rc_redund_async_halo_exchange(monkeypatch, tmpdir):
mtrans.apply(schedule.children[6], schedule.children[2])
result = str(psy.gen)
assert (
- " IF (m2_proxy%is_dirty(depth=2)) THEN\n"
- " CALL m2_proxy%halo_exchange_start(depth=2)\n"
- " END IF\n") in result
+ " if (m2_proxy%is_dirty(depth=2)) then\n"
+ " call m2_proxy%halo_exchange_start(depth=2)\n"
+ " end if\n") in result
assert (
- " IF (m2_proxy%is_dirty(depth=2)) THEN\n"
- " CALL m2_proxy%halo_exchange_finish(depth=2)\n"
- " END IF\n") in result
+ " if (m2_proxy%is_dirty(depth=2)) then\n"
+ " call m2_proxy%halo_exchange_finish(depth=2)\n"
+ " end if\n") in result
assert (
- " ! Set halos dirty/clean for fields modified in the above loop\n"
- " !\n"
- " CALL m2_proxy%set_dirty()\n"
- " CALL m2_proxy%set_clean(2)\n") in result
+ " ! Set halos dirty/clean for fields modified in the above "
+ "loop(s)\n"
+ " call m2_proxy%set_dirty()\n"
+ " call m2_proxy%set_clean(2)\n") in result
# increase depth of redundant computation. We do this to all loops
# to remove halo exchanges for f1 and f2 just because we can :-)
# Check depths and set clean are still generated correctly for m2
rc_trans = Dynamo0p3RedundantComputationTrans()
- for index in [7, 1, 3]:
- loop = schedule.children[index]
+ for loop in schedule.walk(Loop):
rc_trans.apply(loop, {"depth": 3})
result = str(psy.gen)
assert (
- " IF (m2_proxy%is_dirty(depth=3)) THEN\n"
- " CALL m2_proxy%halo_exchange_start(depth=3)\n"
- " END IF\n") in result
+ " if (m2_proxy%is_dirty(depth=3)) then\n"
+ " call m2_proxy%halo_exchange_start(depth=3)\n"
+ " end if\n") in result
assert (
- " IF (m2_proxy%is_dirty(depth=3)) THEN\n"
- " CALL m2_proxy%halo_exchange_finish(depth=3)\n"
- " END IF\n") in result
+ " if (m2_proxy%is_dirty(depth=3)) then\n"
+ " call m2_proxy%halo_exchange_finish(depth=3)\n"
+ " end if\n") in result
assert (
- " ! Set halos dirty/clean for fields modified in the above loop\n"
- " !\n"
- " CALL m2_proxy%set_dirty()\n"
- " CALL m2_proxy%set_clean(3)\n") in result
+ " ! Set halos dirty/clean for fields modified in the above "
+ "loop(s)\n"
+ " call m2_proxy%set_dirty()\n"
+ " call m2_proxy%set_clean(3)\n") in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -7198,17 +7129,17 @@ def test_vector_async_halo_exchange(tmpdir):
result = str(psy.gen)
for index in [1, 2, 3]:
assert (
- f" IF (f1_proxy({index})%is_dirty(depth=1)) THEN\n"
- f" CALL f1_proxy({index})%halo_exchange_start(depth=1)\n"
- f" END IF\n"
- f" IF (f1_proxy({index})%is_dirty(depth=1)) THEN\n"
- f" CALL f1_proxy({index})%halo_exchange_finish(depth=1)\n"
- f" END IF\n") in result
+ f" if (f1_proxy({index})%is_dirty(depth=1)) then\n"
+ f" call f1_proxy({index})%halo_exchange_start(depth=1)\n"
+ f" end if\n"
+ f" if (f1_proxy({index})%is_dirty(depth=1)) then\n"
+ f" call f1_proxy({index})%halo_exchange_finish(depth=1)\n"
+ f" end if\n") in result
assert (
- " CALL f1_proxy(1)%halo_exchange(depth=1)\n"
- " CALL f1_proxy(2)%halo_exchange_start(depth=1)\n"
- " CALL f1_proxy(2)%halo_exchange_finish(depth=1)\n"
- " CALL f1_proxy(3)%halo_exchange(depth=1)\n") in result
+ " call f1_proxy(1)%halo_exchange(depth=1)\n"
+ " call f1_proxy(2)%halo_exchange_start(depth=1)\n"
+ " call f1_proxy(2)%halo_exchange_finish(depth=1)\n"
+ " call f1_proxy(3)%halo_exchange(depth=1)\n") in result
# we are not able to test re-ordering of vector halo exchanges as
# the dependence analysis does not currently support it
@@ -7221,14 +7152,17 @@ def test_vector_async_halo_exchange(tmpdir):
    # will be adjacent to each other and will follow 6 halo exchange
# start and end calls.
rc_trans = Dynamo0p3RedundantComputationTrans()
- rc_trans.apply(schedule.children[6], {"depth": 2})
+ rc_trans.apply(schedule.walk(Loop)[0], {"depth": 2})
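+    # The schedule now also contains Assignment nodes ahead of the halo
+    # exchanges, so count them and allow for the offset when checking the
+    # positions of the remaining children.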
+ num_init_assignments = len([x for x in schedule.children
+ if isinstance(x, Assignment)])
- assert len(schedule.children) == 8
+ assert len(schedule.children) == 8 + num_init_assignments
for index in [0, 2, 4]:
- assert isinstance(schedule.children[index], LFRicHaloExchangeStart)
- assert isinstance(schedule.children[index+1], LFRicHaloExchangeEnd)
- assert isinstance(schedule.children[6], LFRicLoop)
- assert isinstance(schedule.children[7], LFRicLoop)
+ pos = num_init_assignments + index
+ assert isinstance(schedule.children[pos], LFRicHaloExchangeStart)
+ assert isinstance(schedule.children[pos+1], LFRicHaloExchangeEnd)
+ assert isinstance(schedule.children[num_init_assignments + 6], LFRicLoop)
+ assert isinstance(schedule.children[num_init_assignments + 7], LFRicLoop)
assert LFRicBuild(tmpdir).code_compiles(psy)
diff --git a/src/psyclone/tests/domain/lfric/transformations/lfric_extract_test.py b/src/psyclone/tests/domain/lfric/transformations/lfric_extract_test.py
index c81902b879..ea036c846d 100644
--- a/src/psyclone/tests/domain/lfric/transformations/lfric_extract_test.py
+++ b/src/psyclone/tests/domain/lfric/transformations/lfric_extract_test.py
@@ -303,62 +303,59 @@ def test_single_node_dynamo0p3():
etrans.apply(schedule.children[0])
code = str(psy.gen)
- output = ''' ! ExtractStart
- !
- CALL extract_psy_data%PreStart("single_invoke_psy", \
+ output = '''\
+ CALL extract_psy_data % PreStart("single_invoke_psy", \
"invoke_0_testkern_type-testkern_code-r0", 18, 2)
- CALL extract_psy_data%PreDeclareVariable("a", a)
- CALL extract_psy_data%PreDeclareVariable("f1_data", f1_data)
- CALL extract_psy_data%PreDeclareVariable("f2_data", f2_data)
- CALL extract_psy_data%PreDeclareVariable("loop0_start", loop0_start)
- CALL extract_psy_data%PreDeclareVariable("loop0_stop", loop0_stop)
- CALL extract_psy_data%PreDeclareVariable("m1_data", m1_data)
- CALL extract_psy_data%PreDeclareVariable("m2_data", m2_data)
- CALL extract_psy_data%PreDeclareVariable("map_w1", map_w1)
- CALL extract_psy_data%PreDeclareVariable("map_w2", map_w2)
- CALL extract_psy_data%PreDeclareVariable("map_w3", map_w3)
- CALL extract_psy_data%PreDeclareVariable("ndf_w1", ndf_w1)
- CALL extract_psy_data%PreDeclareVariable("ndf_w2", ndf_w2)
- CALL extract_psy_data%PreDeclareVariable("ndf_w3", ndf_w3)
- CALL extract_psy_data%PreDeclareVariable("nlayers_f1", nlayers_f1)
- CALL extract_psy_data%PreDeclareVariable("undf_w1", undf_w1)
- CALL extract_psy_data%PreDeclareVariable("undf_w2", undf_w2)
- CALL extract_psy_data%PreDeclareVariable("undf_w3", undf_w3)
- CALL extract_psy_data%PreDeclareVariable("cell", cell)
- CALL extract_psy_data%PreDeclareVariable("cell_post", cell)
- CALL extract_psy_data%PreDeclareVariable("f1_data_post", f1_data)
- CALL extract_psy_data%PreEndDeclaration
- CALL extract_psy_data%ProvideVariable("a", a)
- CALL extract_psy_data%ProvideVariable("f1_data", f1_data)
- CALL extract_psy_data%ProvideVariable("f2_data", f2_data)
- CALL extract_psy_data%ProvideVariable("loop0_start", loop0_start)
- CALL extract_psy_data%ProvideVariable("loop0_stop", loop0_stop)
- CALL extract_psy_data%ProvideVariable("m1_data", m1_data)
- CALL extract_psy_data%ProvideVariable("m2_data", m2_data)
- CALL extract_psy_data%ProvideVariable("map_w1", map_w1)
- CALL extract_psy_data%ProvideVariable("map_w2", map_w2)
- CALL extract_psy_data%ProvideVariable("map_w3", map_w3)
- CALL extract_psy_data%ProvideVariable("ndf_w1", ndf_w1)
- CALL extract_psy_data%ProvideVariable("ndf_w2", ndf_w2)
- CALL extract_psy_data%ProvideVariable("ndf_w3", ndf_w3)
- CALL extract_psy_data%ProvideVariable("nlayers_f1", nlayers_f1)
- CALL extract_psy_data%ProvideVariable("undf_w1", undf_w1)
- CALL extract_psy_data%ProvideVariable("undf_w2", undf_w2)
- CALL extract_psy_data%ProvideVariable("undf_w3", undf_w3)
- CALL extract_psy_data%ProvideVariable("cell", cell)
- CALL extract_psy_data%PreEnd
- DO cell = loop0_start, loop0_stop, 1
- CALL testkern_code(nlayers_f1, a, f1_data, f2_data, ''' + \
+ CALL extract_psy_data % PreDeclareVariable("a", a)
+ CALL extract_psy_data % PreDeclareVariable("f1_data", f1_data)
+ CALL extract_psy_data % PreDeclareVariable("f2_data", f2_data)
+ CALL extract_psy_data % PreDeclareVariable("loop0_start", loop0_start)
+ CALL extract_psy_data % PreDeclareVariable("loop0_stop", loop0_stop)
+ CALL extract_psy_data % PreDeclareVariable("m1_data", m1_data)
+ CALL extract_psy_data % PreDeclareVariable("m2_data", m2_data)
+ CALL extract_psy_data % PreDeclareVariable("map_w1", map_w1)
+ CALL extract_psy_data % PreDeclareVariable("map_w2", map_w2)
+ CALL extract_psy_data % PreDeclareVariable("map_w3", map_w3)
+ CALL extract_psy_data % PreDeclareVariable("ndf_w1", ndf_w1)
+ CALL extract_psy_data % PreDeclareVariable("ndf_w2", ndf_w2)
+ CALL extract_psy_data % PreDeclareVariable("ndf_w3", ndf_w3)
+ CALL extract_psy_data % PreDeclareVariable("nlayers_f1", nlayers_f1)
+ CALL extract_psy_data % PreDeclareVariable("undf_w1", undf_w1)
+ CALL extract_psy_data % PreDeclareVariable("undf_w2", undf_w2)
+ CALL extract_psy_data % PreDeclareVariable("undf_w3", undf_w3)
+ CALL extract_psy_data % PreDeclareVariable("cell", cell)
+ CALL extract_psy_data % PreDeclareVariable("cell_post", cell)
+ CALL extract_psy_data % PreDeclareVariable("f1_data_post", f1_data)
+ CALL extract_psy_data % PreEndDeclaration
+ CALL extract_psy_data % ProvideVariable("a", a)
+ CALL extract_psy_data % ProvideVariable("f1_data", f1_data)
+ CALL extract_psy_data % ProvideVariable("f2_data", f2_data)
+ CALL extract_psy_data % ProvideVariable("loop0_start", loop0_start)
+ CALL extract_psy_data % ProvideVariable("loop0_stop", loop0_stop)
+ CALL extract_psy_data % ProvideVariable("m1_data", m1_data)
+ CALL extract_psy_data % ProvideVariable("m2_data", m2_data)
+ CALL extract_psy_data % ProvideVariable("map_w1", map_w1)
+ CALL extract_psy_data % ProvideVariable("map_w2", map_w2)
+ CALL extract_psy_data % ProvideVariable("map_w3", map_w3)
+ CALL extract_psy_data % ProvideVariable("ndf_w1", ndf_w1)
+ CALL extract_psy_data % ProvideVariable("ndf_w2", ndf_w2)
+ CALL extract_psy_data % ProvideVariable("ndf_w3", ndf_w3)
+ CALL extract_psy_data % ProvideVariable("nlayers_f1", nlayers_f1)
+ CALL extract_psy_data % ProvideVariable("undf_w1", undf_w1)
+ CALL extract_psy_data % ProvideVariable("undf_w2", undf_w2)
+ CALL extract_psy_data % ProvideVariable("undf_w3", undf_w3)
+ CALL extract_psy_data % ProvideVariable("cell", cell)
+ CALL extract_psy_data % PreEnd
+ do cell = loop0_start, loop0_stop, 1
+ call testkern_code(nlayers_f1, a, f1_data, f2_data, ''' + \
"m1_data, m2_data, ndf_w1, undf_w1, " + \
"map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, " + \
'''undf_w3, map_w3(:,cell))
- END DO
- CALL extract_psy_data%PostStart
- CALL extract_psy_data%ProvideVariable("cell_post", cell)
- CALL extract_psy_data%ProvideVariable("f1_data_post", f1_data)
- CALL extract_psy_data%PostEnd
- !
- ! ExtractEnd'''
+ enddo
+ CALL extract_psy_data % PostStart
+ CALL extract_psy_data % ProvideVariable("cell_post", cell)
+ CALL extract_psy_data % ProvideVariable("f1_data_post", f1_data)
+ CALL extract_psy_data % PostEnd'''
assert output in code
@@ -374,68 +371,65 @@ def test_node_list_dynamo0p3():
etrans.apply(schedule.children[0:3])
code = str(psy.gen)
- output = """! ExtractStart
- !
- CALL extract_psy_data%PreStart("single_invoke_builtin_then_kernel_psy", \
+ output = """\
+ CALL extract_psy_data % PreStart("single_invoke_builtin_then_kernel_psy", \
"invoke_0-r0", 15, 5)
- CALL extract_psy_data%PreDeclareVariable("f3_data", f3_data)
- CALL extract_psy_data%PreDeclareVariable("loop0_start", loop0_start)
- CALL extract_psy_data%PreDeclareVariable("loop0_stop", loop0_stop)
- CALL extract_psy_data%PreDeclareVariable("loop1_start", loop1_start)
- CALL extract_psy_data%PreDeclareVariable("loop1_stop", loop1_stop)
- CALL extract_psy_data%PreDeclareVariable("loop2_start", loop2_start)
- CALL extract_psy_data%PreDeclareVariable("loop2_stop", loop2_stop)
- CALL extract_psy_data%PreDeclareVariable("map_w2", map_w2)
- CALL extract_psy_data%PreDeclareVariable("ndf_w2", ndf_w2)
- CALL extract_psy_data%PreDeclareVariable("nlayers_f3", nlayers_f3)
- CALL extract_psy_data%PreDeclareVariable("undf_w2", undf_w2)
- CALL extract_psy_data%PreDeclareVariable("cell", cell)
- CALL extract_psy_data%PreDeclareVariable("df", df)
- CALL extract_psy_data%PreDeclareVariable("f2_data", f2_data)
- CALL extract_psy_data%PreDeclareVariable("f5_data", f5_data)
- CALL extract_psy_data%PreDeclareVariable("cell_post", cell)
- CALL extract_psy_data%PreDeclareVariable("df_post", df)
- CALL extract_psy_data%PreDeclareVariable("f2_data_post", f2_data)
- CALL extract_psy_data%PreDeclareVariable("f3_data_post", f3_data)
- CALL extract_psy_data%PreDeclareVariable("f5_data_post", f5_data)
- CALL extract_psy_data%PreEndDeclaration
- CALL extract_psy_data%ProvideVariable("f3_data", f3_data)
- CALL extract_psy_data%ProvideVariable("loop0_start", loop0_start)
- CALL extract_psy_data%ProvideVariable("loop0_stop", loop0_stop)
- CALL extract_psy_data%ProvideVariable("loop1_start", loop1_start)
- CALL extract_psy_data%ProvideVariable("loop1_stop", loop1_stop)
- CALL extract_psy_data%ProvideVariable("loop2_start", loop2_start)
- CALL extract_psy_data%ProvideVariable("loop2_stop", loop2_stop)
- CALL extract_psy_data%ProvideVariable("map_w2", map_w2)
- CALL extract_psy_data%ProvideVariable("ndf_w2", ndf_w2)
- CALL extract_psy_data%ProvideVariable("nlayers_f3", nlayers_f3)
- CALL extract_psy_data%ProvideVariable("undf_w2", undf_w2)
- CALL extract_psy_data%ProvideVariable("cell", cell)
- CALL extract_psy_data%ProvideVariable("df", df)
- CALL extract_psy_data%ProvideVariable("f2_data", f2_data)
- CALL extract_psy_data%ProvideVariable("f5_data", f5_data)
- CALL extract_psy_data%PreEnd
- DO df = loop0_start, loop0_stop, 1
- ! Built-in: setval_c (set a real-valued field to a real scalar value)
- f5_data(df) = 0.0
- END DO
- DO df = loop1_start, loop1_stop, 1
- ! Built-in: setval_c (set a real-valued field to a real scalar value)
- f2_data(df) = 0.0
- END DO
- DO cell = loop2_start, loop2_stop, 1
- CALL testkern_w2_only_code(nlayers_f3, f3_data, """ + \
+ CALL extract_psy_data % PreDeclareVariable("f3_data", f3_data)
+ CALL extract_psy_data % PreDeclareVariable("loop0_start", loop0_start)
+ CALL extract_psy_data % PreDeclareVariable("loop0_stop", loop0_stop)
+ CALL extract_psy_data % PreDeclareVariable("loop1_start", loop1_start)
+ CALL extract_psy_data % PreDeclareVariable("loop1_stop", loop1_stop)
+ CALL extract_psy_data % PreDeclareVariable("loop2_start", loop2_start)
+ CALL extract_psy_data % PreDeclareVariable("loop2_stop", loop2_stop)
+ CALL extract_psy_data % PreDeclareVariable("map_w2", map_w2)
+ CALL extract_psy_data % PreDeclareVariable("ndf_w2", ndf_w2)
+ CALL extract_psy_data % PreDeclareVariable("nlayers_f3", nlayers_f3)
+ CALL extract_psy_data % PreDeclareVariable("undf_w2", undf_w2)
+ CALL extract_psy_data % PreDeclareVariable("cell", cell)
+ CALL extract_psy_data % PreDeclareVariable("df", df)
+ CALL extract_psy_data % PreDeclareVariable("f2_data", f2_data)
+ CALL extract_psy_data % PreDeclareVariable("f5_data", f5_data)
+ CALL extract_psy_data % PreDeclareVariable("cell_post", cell)
+ CALL extract_psy_data % PreDeclareVariable("df_post", df)
+ CALL extract_psy_data % PreDeclareVariable("f2_data_post", f2_data)
+ CALL extract_psy_data % PreDeclareVariable("f3_data_post", f3_data)
+ CALL extract_psy_data % PreDeclareVariable("f5_data_post", f5_data)
+ CALL extract_psy_data % PreEndDeclaration
+ CALL extract_psy_data % ProvideVariable("f3_data", f3_data)
+ CALL extract_psy_data % ProvideVariable("loop0_start", loop0_start)
+ CALL extract_psy_data % ProvideVariable("loop0_stop", loop0_stop)
+ CALL extract_psy_data % ProvideVariable("loop1_start", loop1_start)
+ CALL extract_psy_data % ProvideVariable("loop1_stop", loop1_stop)
+ CALL extract_psy_data % ProvideVariable("loop2_start", loop2_start)
+ CALL extract_psy_data % ProvideVariable("loop2_stop", loop2_stop)
+ CALL extract_psy_data % ProvideVariable("map_w2", map_w2)
+ CALL extract_psy_data % ProvideVariable("ndf_w2", ndf_w2)
+ CALL extract_psy_data % ProvideVariable("nlayers_f3", nlayers_f3)
+ CALL extract_psy_data % ProvideVariable("undf_w2", undf_w2)
+ CALL extract_psy_data % ProvideVariable("cell", cell)
+ CALL extract_psy_data % ProvideVariable("df", df)
+ CALL extract_psy_data % ProvideVariable("f2_data", f2_data)
+ CALL extract_psy_data % ProvideVariable("f5_data", f5_data)
+ CALL extract_psy_data % PreEnd
+ do df = loop0_start, loop0_stop, 1
+ ! Built-in: setval_c (set a real-valued field to a real scalar value)
+ f5_data(df) = 0.0
+ enddo
+ do df = loop1_start, loop1_stop, 1
+ ! Built-in: setval_c (set a real-valued field to a real scalar value)
+ f2_data(df) = 0.0
+ enddo
+ do cell = loop2_start, loop2_stop, 1
+ call testkern_w2_only_code(nlayers_f3, f3_data, """ + \
"""f2_data, ndf_w2, undf_w2, map_w2(:,cell))
- END DO
- CALL extract_psy_data%PostStart
- CALL extract_psy_data%ProvideVariable("cell_post", cell)
- CALL extract_psy_data%ProvideVariable("df_post", df)
- CALL extract_psy_data%ProvideVariable("f2_data_post", f2_data)
- CALL extract_psy_data%ProvideVariable("f3_data_post", f3_data)
- CALL extract_psy_data%ProvideVariable("f5_data_post", f5_data)
- CALL extract_psy_data%PostEnd
- !
- ! ExtractEnd"""
+ enddo
+ CALL extract_psy_data % PostStart
+ CALL extract_psy_data % ProvideVariable("cell_post", cell)
+ CALL extract_psy_data % ProvideVariable("df_post", df)
+ CALL extract_psy_data % ProvideVariable("f2_data_post", f2_data)
+ CALL extract_psy_data % ProvideVariable("f3_data_post", f3_data)
+ CALL extract_psy_data % ProvideVariable("f5_data_post", f5_data)
+ CALL extract_psy_data % PostEnd"""
assert output in code
@@ -452,63 +446,66 @@ def test_dynamo0p3_builtin():
etrans.apply(schedule.children[0:3])
code = str(psy.gen)
- output = """CALL extract_psy_data%PreDeclareVariable("loop1_start", """\
- """loop1_start)
- CALL extract_psy_data%PreDeclareVariable("loop1_stop", loop1_stop)
- CALL extract_psy_data%PreDeclareVariable("loop2_start", loop2_start)
- CALL extract_psy_data%PreDeclareVariable("loop2_stop", loop2_stop)
- CALL extract_psy_data%PreDeclareVariable("map_w2", map_w2)
- CALL extract_psy_data%PreDeclareVariable("ndf_w2", ndf_w2)
- CALL extract_psy_data%PreDeclareVariable("nlayers_f3", nlayers_f3)
- CALL extract_psy_data%PreDeclareVariable("undf_w2", undf_w2)
- CALL extract_psy_data%PreDeclareVariable("cell", cell)
- CALL extract_psy_data%PreDeclareVariable("df", df)
- CALL extract_psy_data%PreDeclareVariable("f2_data", f2_data)
- CALL extract_psy_data%PreDeclareVariable("f5_data", f5_data)
- CALL extract_psy_data%PreDeclareVariable("cell_post", cell)
- CALL extract_psy_data%PreDeclareVariable("df_post", df)
- CALL extract_psy_data%PreDeclareVariable("f2_data_post", f2_data)
- CALL extract_psy_data%PreDeclareVariable("f3_data_post", f3_data)
- CALL extract_psy_data%PreDeclareVariable("f5_data_post", f5_data)
- CALL extract_psy_data%PreEndDeclaration
- CALL extract_psy_data%ProvideVariable("f3_data", f3_data)
- CALL extract_psy_data%ProvideVariable("loop0_start", loop0_start)
- CALL extract_psy_data%ProvideVariable("loop0_stop", loop0_stop)
- CALL extract_psy_data%ProvideVariable("loop1_start", loop1_start)
- CALL extract_psy_data%ProvideVariable("loop1_stop", loop1_stop)
- CALL extract_psy_data%ProvideVariable("loop2_start", loop2_start)
- CALL extract_psy_data%ProvideVariable("loop2_stop", loop2_stop)
- CALL extract_psy_data%ProvideVariable("map_w2", map_w2)
- CALL extract_psy_data%ProvideVariable("ndf_w2", ndf_w2)
- CALL extract_psy_data%ProvideVariable("nlayers_f3", nlayers_f3)
- CALL extract_psy_data%ProvideVariable("undf_w2", undf_w2)
- CALL extract_psy_data%ProvideVariable("cell", cell)
- CALL extract_psy_data%ProvideVariable("df", df)
- CALL extract_psy_data%ProvideVariable("f2_data", f2_data)
- CALL extract_psy_data%ProvideVariable("f5_data", f5_data)
- CALL extract_psy_data%PreEnd
- DO df = loop0_start, loop0_stop, 1
- ! Built-in: setval_c (set a real-valued field to a real scalar value)
- f5_data(df) = 0.0
- END DO
- DO df = loop1_start, loop1_stop, 1
- ! Built-in: setval_c (set a real-valued field to a real scalar value)
- f2_data(df) = 0.0
- END DO
- DO cell = loop2_start, loop2_stop, 1
- CALL testkern_w2_only_code(nlayers_f3, f3_data, f2_data, """\
+ output = """
+ CALL extract_psy_data % PreDeclareVariable("f3_data", f3_data)
+ CALL extract_psy_data % PreDeclareVariable("loop0_start", loop0_start)
+ CALL extract_psy_data % PreDeclareVariable("loop0_stop", loop0_stop)
+ CALL extract_psy_data % PreDeclareVariable("loop1_start", loop1_start)
+ CALL extract_psy_data % PreDeclareVariable("loop1_stop", loop1_stop)
+ CALL extract_psy_data % PreDeclareVariable("loop2_start", loop2_start)
+ CALL extract_psy_data % PreDeclareVariable("loop2_stop", loop2_stop)
+ CALL extract_psy_data % PreDeclareVariable("map_w2", map_w2)
+ CALL extract_psy_data % PreDeclareVariable("ndf_w2", ndf_w2)
+ CALL extract_psy_data % PreDeclareVariable("nlayers_f3", nlayers_f3)
+ CALL extract_psy_data % PreDeclareVariable("undf_w2", undf_w2)
+ CALL extract_psy_data % PreDeclareVariable("cell", cell)
+ CALL extract_psy_data % PreDeclareVariable("df", df)
+ CALL extract_psy_data % PreDeclareVariable("f2_data", f2_data)
+ CALL extract_psy_data % PreDeclareVariable("f5_data", f5_data)
+ CALL extract_psy_data % PreDeclareVariable("cell_post", cell)
+ CALL extract_psy_data % PreDeclareVariable("df_post", df)
+ CALL extract_psy_data % PreDeclareVariable("f2_data_post", f2_data)
+ CALL extract_psy_data % PreDeclareVariable("f3_data_post", f3_data)
+ CALL extract_psy_data % PreDeclareVariable("f5_data_post", f5_data)
+ CALL extract_psy_data % PreEndDeclaration
+ CALL extract_psy_data % ProvideVariable("f3_data", f3_data)
+ CALL extract_psy_data % ProvideVariable("loop0_start", loop0_start)
+ CALL extract_psy_data % ProvideVariable("loop0_stop", loop0_stop)
+ CALL extract_psy_data % ProvideVariable("loop1_start", loop1_start)
+ CALL extract_psy_data % ProvideVariable("loop1_stop", loop1_stop)
+ CALL extract_psy_data % ProvideVariable("loop2_start", loop2_start)
+ CALL extract_psy_data % ProvideVariable("loop2_stop", loop2_stop)
+ CALL extract_psy_data % ProvideVariable("map_w2", map_w2)
+ CALL extract_psy_data % ProvideVariable("ndf_w2", ndf_w2)
+ CALL extract_psy_data % ProvideVariable("nlayers_f3", nlayers_f3)
+ CALL extract_psy_data % ProvideVariable("undf_w2", undf_w2)
+ CALL extract_psy_data % ProvideVariable("cell", cell)
+ CALL extract_psy_data % ProvideVariable("df", df)
+ CALL extract_psy_data % ProvideVariable("f2_data", f2_data)
+ CALL extract_psy_data % ProvideVariable("f5_data", f5_data)
+ CALL extract_psy_data % PreEnd
+ do df = loop0_start, loop0_stop, 1
+ ! Built-in: setval_c (set a real-valued field to a real scalar value)
+ f5_data(df) = 0.0
+ enddo
+ do df = loop1_start, loop1_stop, 1
+ ! Built-in: setval_c (set a real-valued field to a real scalar value)
+ f2_data(df) = 0.0
+ enddo
+ do cell = loop2_start, loop2_stop, 1
+ call testkern_w2_only_code(nlayers_f3, f3_data, f2_data, """\
"""ndf_w2, undf_w2, map_w2(:,cell))
- END DO
- CALL extract_psy_data%PostStart
- CALL extract_psy_data%ProvideVariable("cell_post", cell)
- CALL extract_psy_data%ProvideVariable("df_post", df)
- CALL extract_psy_data%ProvideVariable("f2_data_post", f2_data)
- CALL extract_psy_data%ProvideVariable("f3_data_post", f3_data)
- CALL extract_psy_data%ProvideVariable("f5_data_post", f5_data)
- CALL extract_psy_data%PostEnd
- !
- ! ExtractEnd"""
+ enddo
+ CALL extract_psy_data % PostStart
+ CALL extract_psy_data % ProvideVariable("cell_post", cell)
+ CALL extract_psy_data % ProvideVariable("df_post", df)
+ CALL extract_psy_data % ProvideVariable("f2_data_post", f2_data)
+ CALL extract_psy_data % ProvideVariable("f3_data_post", f3_data)
+ CALL extract_psy_data % ProvideVariable("f5_data_post", f5_data)
+ CALL extract_psy_data % PostEnd"""
assert output in code
+ # TODO #706: Compilation for LFRic extraction not supported yet.
+ # assert LFRicBuild(tmpdir).code_compiles(psy)
def test_extract_single_builtin_dynamo0p3():
@@ -526,32 +523,30 @@ def test_extract_single_builtin_dynamo0p3():
etrans.apply(schedule.children[1])
code = str(psy.gen)
- output = """! ExtractStart
- !
- CALL extract_psy_data%PreStart("single_invoke_builtin_then_kernel_psy", """ \
- """"invoke_0-setval_c-r0", 4, 2)
- CALL extract_psy_data%PreDeclareVariable("loop1_start", loop1_start)
- CALL extract_psy_data%PreDeclareVariable("loop1_stop", loop1_stop)
- CALL extract_psy_data%PreDeclareVariable("df", df)
- CALL extract_psy_data%PreDeclareVariable("f2_data", f2_data)
- CALL extract_psy_data%PreDeclareVariable("df_post", df)
- CALL extract_psy_data%PreDeclareVariable("f2_data_post", f2_data)
- CALL extract_psy_data%PreEndDeclaration
- CALL extract_psy_data%ProvideVariable("loop1_start", loop1_start)
- CALL extract_psy_data%ProvideVariable("loop1_stop", loop1_stop)
- CALL extract_psy_data%ProvideVariable("df", df)
- CALL extract_psy_data%ProvideVariable("f2_data", f2_data)
- CALL extract_psy_data%PreEnd
- DO df = loop1_start, loop1_stop, 1
- ! Built-in: setval_c (set a real-valued field to a real scalar value)
- f2_data(df) = 0.0
- END DO
- CALL extract_psy_data%PostStart
- CALL extract_psy_data%ProvideVariable("df_post", df)
- CALL extract_psy_data%ProvideVariable("f2_data_post", f2_data)
- CALL extract_psy_data%PostEnd
- !
- ! ExtractEnd"""
+ output = """\
+ CALL extract_psy_data % PreStart("single_invoke_builtin_then_kernel_psy", \
+"invoke_0-setval_c-r0", 4, 2)
+ CALL extract_psy_data % PreDeclareVariable("loop1_start", loop1_start)
+ CALL extract_psy_data % PreDeclareVariable("loop1_stop", loop1_stop)
+ CALL extract_psy_data % PreDeclareVariable("df", df)
+ CALL extract_psy_data % PreDeclareVariable("f2_data", f2_data)
+ CALL extract_psy_data % PreDeclareVariable("df_post", df)
+ CALL extract_psy_data % PreDeclareVariable("f2_data_post", f2_data)
+ CALL extract_psy_data % PreEndDeclaration
+ CALL extract_psy_data % ProvideVariable("loop1_start", loop1_start)
+ CALL extract_psy_data % ProvideVariable("loop1_stop", loop1_stop)
+ CALL extract_psy_data % ProvideVariable("df", df)
+ CALL extract_psy_data % ProvideVariable("f2_data", f2_data)
+ CALL extract_psy_data % PreEnd
+ do df = loop1_start, loop1_stop, 1
+ ! Built-in: setval_c (set a real-valued field to a real scalar value)
+ f2_data(df) = 0.0
+ enddo
+ CALL extract_psy_data % PostStart
+ CALL extract_psy_data % ProvideVariable("df_post", df)
+ CALL extract_psy_data % ProvideVariable("f2_data_post", f2_data)
+ CALL extract_psy_data % PostEnd
+ """
assert output in code
# Test extract with OMP Parallel optimisation
@@ -562,37 +557,33 @@ def test_extract_single_builtin_dynamo0p3():
otrans.apply(schedule.children[1])
etrans.apply(schedule.children[1])
code_omp = str(psy.gen)
- output = """
- ! ExtractStart
- !
- CALL extract_psy_data%PreStart("single_invoke_psy", """ \
- """"invoke_0-inc_ax_plus_y-r0", 5, 2)
- CALL extract_psy_data%PreDeclareVariable("f1_data", f1_data)
- CALL extract_psy_data%PreDeclareVariable("f2_data", f2_data)
- CALL extract_psy_data%PreDeclareVariable("loop1_start", loop1_start)
- CALL extract_psy_data%PreDeclareVariable("loop1_stop", loop1_stop)
- CALL extract_psy_data%PreDeclareVariable("df", df)
- CALL extract_psy_data%PreDeclareVariable("df_post", df)
- CALL extract_psy_data%PreDeclareVariable("f1_data_post", f1_data)
- CALL extract_psy_data%PreEndDeclaration
- CALL extract_psy_data%ProvideVariable("f1_data", f1_data)
- CALL extract_psy_data%ProvideVariable("f2_data", f2_data)
- CALL extract_psy_data%ProvideVariable("loop1_start", loop1_start)
- CALL extract_psy_data%ProvideVariable("loop1_stop", loop1_stop)
- CALL extract_psy_data%ProvideVariable("df", df)
- CALL extract_psy_data%PreEnd
- !$omp parallel do default(shared), private(df), schedule(static)
- DO df = loop1_start, loop1_stop, 1
- ! Built-in: inc_aX_plus_Y (real-valued fields)
- f1_data(df) = 0.5_r_def * f1_data(df) + f2_data(df)
- END DO
- !$omp end parallel do
- CALL extract_psy_data%PostStart
- CALL extract_psy_data%ProvideVariable("df_post", df)
- CALL extract_psy_data%ProvideVariable("f1_data_post", f1_data)
- CALL extract_psy_data%PostEnd
- !
- ! ExtractEnd"""
+ output = """\
+ CALL extract_psy_data % PreStart("single_invoke_psy", \
+"invoke_0-inc_ax_plus_y-r0", 5, 2)
+ CALL extract_psy_data % PreDeclareVariable("f1_data", f1_data)
+ CALL extract_psy_data % PreDeclareVariable("f2_data", f2_data)
+ CALL extract_psy_data % PreDeclareVariable("loop1_start", loop1_start)
+ CALL extract_psy_data % PreDeclareVariable("loop1_stop", loop1_stop)
+ CALL extract_psy_data % PreDeclareVariable("df", df)
+ CALL extract_psy_data % PreDeclareVariable("df_post", df)
+ CALL extract_psy_data % PreDeclareVariable("f1_data_post", f1_data)
+ CALL extract_psy_data % PreEndDeclaration
+ CALL extract_psy_data % ProvideVariable("f1_data", f1_data)
+ CALL extract_psy_data % ProvideVariable("f2_data", f2_data)
+ CALL extract_psy_data % ProvideVariable("loop1_start", loop1_start)
+ CALL extract_psy_data % ProvideVariable("loop1_stop", loop1_stop)
+ CALL extract_psy_data % ProvideVariable("df", df)
+ CALL extract_psy_data % PreEnd
+ !$omp parallel do default(shared), private(df), schedule(static)
+ do df = loop1_start, loop1_stop, 1
+ ! Built-in: inc_aX_plus_Y (real-valued fields)
+ f1_data(df) = 0.5_r_def * f1_data(df) + f2_data(df)
+ enddo
+ !$omp end parallel do
+ CALL extract_psy_data % PostStart
+ CALL extract_psy_data % ProvideVariable("df_post", df)
+ CALL extract_psy_data % ProvideVariable("f1_data_post", f1_data)
+ CALL extract_psy_data % PostEnd"""
assert output in code_omp
@@ -609,57 +600,53 @@ def test_extract_kernel_and_builtin_dynamo0p3():
etrans.apply(schedule.children[1:3])
code = str(psy.gen)
- output = """
- ! ExtractStart
- !
- CALL extract_psy_data%PreStart("single_invoke_builtin_then_kernel_psy", """ \
- """"invoke_0-r0", 12, 4)
- CALL extract_psy_data%PreDeclareVariable("f3_data", f3_data)
- CALL extract_psy_data%PreDeclareVariable("loop1_start", loop1_start)
- CALL extract_psy_data%PreDeclareVariable("loop1_stop", loop1_stop)
- CALL extract_psy_data%PreDeclareVariable("loop2_start", loop2_start)
- CALL extract_psy_data%PreDeclareVariable("loop2_stop", loop2_stop)
- CALL extract_psy_data%PreDeclareVariable("map_w2", map_w2)
- CALL extract_psy_data%PreDeclareVariable("ndf_w2", ndf_w2)
- CALL extract_psy_data%PreDeclareVariable("nlayers_f3", nlayers_f3)
- CALL extract_psy_data%PreDeclareVariable("undf_w2", undf_w2)
- CALL extract_psy_data%PreDeclareVariable("cell", cell)
- CALL extract_psy_data%PreDeclareVariable("df", df)
- CALL extract_psy_data%PreDeclareVariable("f2_data", f2_data)
- CALL extract_psy_data%PreDeclareVariable("cell_post", cell)
- CALL extract_psy_data%PreDeclareVariable("df_post", df)
- CALL extract_psy_data%PreDeclareVariable("f2_data_post", f2_data)
- CALL extract_psy_data%PreDeclareVariable("f3_data_post", f3_data)
- CALL extract_psy_data%PreEndDeclaration
- CALL extract_psy_data%ProvideVariable("f3_data", f3_data)
- CALL extract_psy_data%ProvideVariable("loop1_start", loop1_start)
- CALL extract_psy_data%ProvideVariable("loop1_stop", loop1_stop)
- CALL extract_psy_data%ProvideVariable("loop2_start", loop2_start)
- CALL extract_psy_data%ProvideVariable("loop2_stop", loop2_stop)
- CALL extract_psy_data%ProvideVariable("map_w2", map_w2)
- CALL extract_psy_data%ProvideVariable("ndf_w2", ndf_w2)
- CALL extract_psy_data%ProvideVariable("nlayers_f3", nlayers_f3)
- CALL extract_psy_data%ProvideVariable("undf_w2", undf_w2)
- CALL extract_psy_data%ProvideVariable("cell", cell)
- CALL extract_psy_data%ProvideVariable("df", df)
- CALL extract_psy_data%ProvideVariable("f2_data", f2_data)
- CALL extract_psy_data%PreEnd
- DO df = loop1_start, loop1_stop, 1
- ! Built-in: setval_c (set a real-valued field to a real scalar value)
- f2_data(df) = 0.0
- END DO
- DO cell = loop2_start, loop2_stop, 1
- CALL testkern_w2_only_code(nlayers_f3, f3_data, """ + \
- """f2_data, ndf_w2, undf_w2, map_w2(:,cell))
- END DO
- CALL extract_psy_data%PostStart
- CALL extract_psy_data%ProvideVariable("cell_post", cell)
- CALL extract_psy_data%ProvideVariable("df_post", df)
- CALL extract_psy_data%ProvideVariable("f2_data_post", f2_data)
- CALL extract_psy_data%ProvideVariable("f3_data_post", f3_data)
- CALL extract_psy_data%PostEnd
- !
- ! ExtractEnd"""
+ output = """\
+ CALL extract_psy_data % PreStart("single_invoke_builtin_then_kernel_psy", \
+"invoke_0-r0", 12, 4)
+ CALL extract_psy_data % PreDeclareVariable("f3_data", f3_data)
+ CALL extract_psy_data % PreDeclareVariable("loop1_start", loop1_start)
+ CALL extract_psy_data % PreDeclareVariable("loop1_stop", loop1_stop)
+ CALL extract_psy_data % PreDeclareVariable("loop2_start", loop2_start)
+ CALL extract_psy_data % PreDeclareVariable("loop2_stop", loop2_stop)
+ CALL extract_psy_data % PreDeclareVariable("map_w2", map_w2)
+ CALL extract_psy_data % PreDeclareVariable("ndf_w2", ndf_w2)
+ CALL extract_psy_data % PreDeclareVariable("nlayers_f3", nlayers_f3)
+ CALL extract_psy_data % PreDeclareVariable("undf_w2", undf_w2)
+ CALL extract_psy_data % PreDeclareVariable("cell", cell)
+ CALL extract_psy_data % PreDeclareVariable("df", df)
+ CALL extract_psy_data % PreDeclareVariable("f2_data", f2_data)
+ CALL extract_psy_data % PreDeclareVariable("cell_post", cell)
+ CALL extract_psy_data % PreDeclareVariable("df_post", df)
+ CALL extract_psy_data % PreDeclareVariable("f2_data_post", f2_data)
+ CALL extract_psy_data % PreDeclareVariable("f3_data_post", f3_data)
+ CALL extract_psy_data % PreEndDeclaration
+ CALL extract_psy_data % ProvideVariable("f3_data", f3_data)
+ CALL extract_psy_data % ProvideVariable("loop1_start", loop1_start)
+ CALL extract_psy_data % ProvideVariable("loop1_stop", loop1_stop)
+ CALL extract_psy_data % ProvideVariable("loop2_start", loop2_start)
+ CALL extract_psy_data % ProvideVariable("loop2_stop", loop2_stop)
+ CALL extract_psy_data % ProvideVariable("map_w2", map_w2)
+ CALL extract_psy_data % ProvideVariable("ndf_w2", ndf_w2)
+ CALL extract_psy_data % ProvideVariable("nlayers_f3", nlayers_f3)
+ CALL extract_psy_data % ProvideVariable("undf_w2", undf_w2)
+ CALL extract_psy_data % ProvideVariable("cell", cell)
+ CALL extract_psy_data % ProvideVariable("df", df)
+ CALL extract_psy_data % ProvideVariable("f2_data", f2_data)
+ CALL extract_psy_data % PreEnd
+ do df = loop1_start, loop1_stop, 1
+ ! Built-in: setval_c (set a real-valued field to a real scalar value)
+ f2_data(df) = 0.0
+ enddo
+ do cell = loop2_start, loop2_stop, 1
+ call testkern_w2_only_code(nlayers_f3, f3_data, f2_data, ndf_w2, \
+undf_w2, map_w2(:,cell))
+ enddo
+ CALL extract_psy_data % PostStart
+ CALL extract_psy_data % ProvideVariable("cell_post", cell)
+ CALL extract_psy_data % ProvideVariable("df_post", df)
+ CALL extract_psy_data % ProvideVariable("f2_data_post", f2_data)
+ CALL extract_psy_data % ProvideVariable("f3_data_post", f3_data)
+ CALL extract_psy_data % PostEnd"""
assert output in code
@@ -667,7 +654,7 @@ def test_extract_kernel_and_builtin_dynamo0p3():
# assert LFRicBuild(tmpdir).code_compiles(psy)
-def test_extract_colouring_omp_dynamo0p3():
+def test_extract_colouring_omp_dynamo0p3(fortran_writer):
''' Test that extraction of a Kernel in an Invoke after applying
colouring and OpenMP optimisations produces the correct result
in Dynamo0.3 API. '''
@@ -703,89 +690,81 @@ def test_extract_colouring_omp_dynamo0p3():
code = str(psy.gen)
output = ("""
- ! ExtractStart
- !
- CALL extract_psy_data%PreStart("multikernel_invokes_7_psy", """
- """"invoke_0-ru_code-r0", 32, 3)
- CALL extract_psy_data%PreDeclareVariable("a_data", a_data)
- CALL extract_psy_data%PreDeclareVariable("b_data", b_data)
- CALL extract_psy_data%PreDeclareVariable("basis_w0_qr", basis_w0_qr)
- CALL extract_psy_data%PreDeclareVariable("basis_w2_qr", basis_w2_qr)
- CALL extract_psy_data%PreDeclareVariable("basis_w3_qr", basis_w3_qr)
- CALL extract_psy_data%PreDeclareVariable("c_data", c_data)
- CALL extract_psy_data%PreDeclareVariable("cmap", cmap)
- CALL extract_psy_data%PreDeclareVariable("diff_basis_w0_qr", """
- """diff_basis_w0_qr)
- CALL extract_psy_data%PreDeclareVariable("diff_basis_w2_qr", """
- """diff_basis_w2_qr)
- CALL extract_psy_data%PreDeclareVariable("e", e)
- CALL extract_psy_data%PreDeclareVariable("istp", istp)
- CALL extract_psy_data%PreDeclareVariable("last_edge_cell_all_colours", \
-last_edge_cell_all_colours)
- CALL extract_psy_data%PreDeclareVariable("loop4_start", loop4_start)
- CALL extract_psy_data%PreDeclareVariable("loop4_stop", loop4_stop)
- CALL extract_psy_data%PreDeclareVariable("loop5_start", loop5_start)
- CALL extract_psy_data%PreDeclareVariable("map_w0", map_w0)
- CALL extract_psy_data%PreDeclareVariable("map_w2", map_w2)
- CALL extract_psy_data%PreDeclareVariable("map_w3", map_w3)
- CALL extract_psy_data%PreDeclareVariable("ndf_w0", ndf_w0)
- CALL extract_psy_data%PreDeclareVariable("ndf_w2", ndf_w2)
- CALL extract_psy_data%PreDeclareVariable("ndf_w3", ndf_w3)
- CALL extract_psy_data%PreDeclareVariable("nlayers_b", nlayers_b)
- CALL extract_psy_data%PreDeclareVariable("np_xy_qr", np_xy_qr)
- CALL extract_psy_data%PreDeclareVariable("np_z_qr", np_z_qr)
- CALL extract_psy_data%PreDeclareVariable("rdt", rdt)
- CALL extract_psy_data%PreDeclareVariable("undf_w0", undf_w0)
- CALL extract_psy_data%PreDeclareVariable("undf_w2", undf_w2)
- CALL extract_psy_data%PreDeclareVariable("undf_w3", undf_w3)
- CALL extract_psy_data%PreDeclareVariable("weights_xy_qr", weights_xy_qr)
- CALL extract_psy_data%PreDeclareVariable("weights_z_qr", weights_z_qr)
- CALL extract_psy_data%PreDeclareVariable("cell", cell)
- CALL extract_psy_data%PreDeclareVariable("colour", colour)
- CALL extract_psy_data%PreDeclareVariable("b_data_post", b_data)
- CALL extract_psy_data%PreDeclareVariable("cell_post", cell)
- CALL extract_psy_data%PreDeclareVariable("colour_post", colour)
- CALL extract_psy_data%PreEndDeclaration
- CALL extract_psy_data%ProvideVariable("a_data", a_data)
- CALL extract_psy_data%ProvideVariable("b_data", b_data)
- CALL extract_psy_data%ProvideVariable("basis_w0_qr", basis_w0_qr)
- CALL extract_psy_data%ProvideVariable("basis_w2_qr", basis_w2_qr)
- CALL extract_psy_data%ProvideVariable("basis_w3_qr", basis_w3_qr)
- CALL extract_psy_data%ProvideVariable("c_data", c_data)
- CALL extract_psy_data%ProvideVariable("cmap", cmap)
- CALL extract_psy_data%ProvideVariable("diff_basis_w0_qr", """
- """diff_basis_w0_qr)
- CALL extract_psy_data%ProvideVariable("diff_basis_w2_qr", """
- """diff_basis_w2_qr)
- CALL extract_psy_data%ProvideVariable("e", e)
- CALL extract_psy_data%ProvideVariable("istp", istp)
- CALL extract_psy_data%ProvideVariable("last_edge_cell_all_colours", \
-last_edge_cell_all_colours)
- CALL extract_psy_data%ProvideVariable("loop4_start", loop4_start)
- CALL extract_psy_data%ProvideVariable("loop4_stop", loop4_stop)
- CALL extract_psy_data%ProvideVariable("loop5_start", loop5_start)
- CALL extract_psy_data%ProvideVariable("map_w0", map_w0)
- CALL extract_psy_data%ProvideVariable("map_w2", map_w2)
- CALL extract_psy_data%ProvideVariable("map_w3", map_w3)
- CALL extract_psy_data%ProvideVariable("ndf_w0", ndf_w0)
- CALL extract_psy_data%ProvideVariable("ndf_w2", ndf_w2)
- CALL extract_psy_data%ProvideVariable("ndf_w3", ndf_w3)
- CALL extract_psy_data%ProvideVariable("nlayers_b", nlayers_b)
- CALL extract_psy_data%ProvideVariable("np_xy_qr", np_xy_qr)
- CALL extract_psy_data%ProvideVariable("np_z_qr", np_z_qr)
- CALL extract_psy_data%ProvideVariable("rdt", rdt)
- CALL extract_psy_data%ProvideVariable("undf_w0", undf_w0)
- CALL extract_psy_data%ProvideVariable("undf_w2", undf_w2)
- CALL extract_psy_data%ProvideVariable("undf_w3", undf_w3)
- CALL extract_psy_data%ProvideVariable("weights_xy_qr", weights_xy_qr)
- CALL extract_psy_data%ProvideVariable("weights_z_qr", weights_z_qr)
- CALL extract_psy_data%ProvideVariable("cell", cell)
- CALL extract_psy_data%ProvideVariable("colour", colour)
- CALL extract_psy_data%PreEnd
- DO colour = loop4_start, loop4_stop, 1
- !$omp parallel do default(shared), private(cell), schedule(static)
- DO cell = loop5_start, last_edge_cell_all_colours(colour), 1
- CALL ru_code(nlayers_b, b_data, a_data, istp, rdt, """
+ CALL extract_psy_data % PreStart("multikernel_invokes_7_psy", \
+"invoke_0-ru_code-r0", 30, 3)
+ CALL extract_psy_data % PreDeclareVariable("a_data", a_data)
+ CALL extract_psy_data % PreDeclareVariable("b_data", b_data)
+ CALL extract_psy_data % PreDeclareVariable("basis_w0_qr", basis_w0_qr)
+ CALL extract_psy_data % PreDeclareVariable("basis_w2_qr", basis_w2_qr)
+ CALL extract_psy_data % PreDeclareVariable("basis_w3_qr", basis_w3_qr)
+ CALL extract_psy_data % PreDeclareVariable("c_data", c_data)
+ CALL extract_psy_data % PreDeclareVariable("cmap", cmap)
+ CALL extract_psy_data % PreDeclareVariable("diff_basis_w0_qr", \
+diff_basis_w0_qr)
+ CALL extract_psy_data % PreDeclareVariable("diff_basis_w2_qr", \
+diff_basis_w2_qr)
+ CALL extract_psy_data % PreDeclareVariable("e", e)
+ CALL extract_psy_data % PreDeclareVariable("istp", istp)
+ CALL extract_psy_data % PreDeclareVariable("loop7_start", loop7_start)
+ CALL extract_psy_data % PreDeclareVariable("loop7_stop", loop7_stop)
+ CALL extract_psy_data % PreDeclareVariable("map_w0", map_w0)
+ CALL extract_psy_data % PreDeclareVariable("map_w2", map_w2)
+ CALL extract_psy_data % PreDeclareVariable("map_w3", map_w3)
+ CALL extract_psy_data % PreDeclareVariable("ndf_w0", ndf_w0)
+ CALL extract_psy_data % PreDeclareVariable("ndf_w2", ndf_w2)
+ CALL extract_psy_data % PreDeclareVariable("ndf_w3", ndf_w3)
+ CALL extract_psy_data % PreDeclareVariable("nlayers_b", nlayers_b)
+ CALL extract_psy_data % PreDeclareVariable("np_xy_qr", np_xy_qr)
+ CALL extract_psy_data % PreDeclareVariable("np_z_qr", np_z_qr)
+ CALL extract_psy_data % PreDeclareVariable("rdt", rdt)
+ CALL extract_psy_data % PreDeclareVariable("undf_w0", undf_w0)
+ CALL extract_psy_data % PreDeclareVariable("undf_w2", undf_w2)
+ CALL extract_psy_data % PreDeclareVariable("undf_w3", undf_w3)
+ CALL extract_psy_data % PreDeclareVariable("weights_xy_qr", weights_xy_qr)
+ CALL extract_psy_data % PreDeclareVariable("weights_z_qr", weights_z_qr)
+ CALL extract_psy_data % PreDeclareVariable("cell", cell)
+ CALL extract_psy_data % PreDeclareVariable("colour", colour)
+ CALL extract_psy_data % PreDeclareVariable("b_data_post", b_data)
+ CALL extract_psy_data % PreDeclareVariable("cell_post", cell)
+ CALL extract_psy_data % PreDeclareVariable("colour_post", colour)
+ CALL extract_psy_data % PreEndDeclaration
+ CALL extract_psy_data % ProvideVariable("a_data", a_data)
+ CALL extract_psy_data % ProvideVariable("b_data", b_data)
+ CALL extract_psy_data % ProvideVariable("basis_w0_qr", basis_w0_qr)
+ CALL extract_psy_data % ProvideVariable("basis_w2_qr", basis_w2_qr)
+ CALL extract_psy_data % ProvideVariable("basis_w3_qr", basis_w3_qr)
+ CALL extract_psy_data % ProvideVariable("c_data", c_data)
+ CALL extract_psy_data % ProvideVariable("cmap", cmap)
+ CALL extract_psy_data % ProvideVariable("diff_basis_w0_qr", \
+diff_basis_w0_qr)
+ CALL extract_psy_data % ProvideVariable("diff_basis_w2_qr", \
+diff_basis_w2_qr)
+ CALL extract_psy_data % ProvideVariable("e", e)
+ CALL extract_psy_data % ProvideVariable("istp", istp)
+ CALL extract_psy_data % ProvideVariable("loop7_start", loop7_start)
+ CALL extract_psy_data % ProvideVariable("loop7_stop", loop7_stop)
+ CALL extract_psy_data % ProvideVariable("map_w0", map_w0)
+ CALL extract_psy_data % ProvideVariable("map_w2", map_w2)
+ CALL extract_psy_data % ProvideVariable("map_w3", map_w3)
+ CALL extract_psy_data % ProvideVariable("ndf_w0", ndf_w0)
+ CALL extract_psy_data % ProvideVariable("ndf_w2", ndf_w2)
+ CALL extract_psy_data % ProvideVariable("ndf_w3", ndf_w3)
+ CALL extract_psy_data % ProvideVariable("nlayers_b", nlayers_b)
+ CALL extract_psy_data % ProvideVariable("np_xy_qr", np_xy_qr)
+ CALL extract_psy_data % ProvideVariable("np_z_qr", np_z_qr)
+ CALL extract_psy_data % ProvideVariable("rdt", rdt)
+ CALL extract_psy_data % ProvideVariable("undf_w0", undf_w0)
+ CALL extract_psy_data % ProvideVariable("undf_w2", undf_w2)
+ CALL extract_psy_data % ProvideVariable("undf_w3", undf_w3)
+ CALL extract_psy_data % ProvideVariable("weights_xy_qr", weights_xy_qr)
+ CALL extract_psy_data % ProvideVariable("weights_z_qr", weights_z_qr)
+ CALL extract_psy_data % ProvideVariable("cell", cell)
+ CALL extract_psy_data % ProvideVariable("colour", colour)
+ CALL extract_psy_data % PreEnd
+ do colour = loop4_start, loop4_stop, 1
+ !$omp parallel do default(shared), private(cell), schedule(static)
+ do cell = loop5_start, last_edge_cell_all_colours(colour), 1
+ call ru_code(nlayers_b, b_data, a_data, istp, rdt, """
"c_data, e_1_data, e_2_data, "
"e_3_data, ndf_w2, undf_w2, "
"map_w2(:,cmap(colour,cell)), "
@@ -793,16 +772,15 @@ def test_extract_colouring_omp_dynamo0p3():
"map_w3(:,cmap(colour,cell)), basis_w3_qr, ndf_w0, undf_w0, "
"map_w0(:,cmap(colour,cell)), basis_w0_qr, diff_basis_w0_qr, "
"""np_xy_qr, np_z_qr, weights_xy_qr, weights_z_qr)
- END DO
- !$omp end parallel do
- END DO
- CALL extract_psy_data%PostStart
- CALL extract_psy_data%ProvideVariable("b_data_post", b_data)
- CALL extract_psy_data%ProvideVariable("cell_post", cell)
- CALL extract_psy_data%ProvideVariable("colour_post", colour)
- CALL extract_psy_data%PostEnd
- !
- ! ExtractEnd""")
+ enddo
+ !$omp end parallel do
+ enddo
+ CALL extract_psy_data % PostStart
+ CALL extract_psy_data % ProvideVariable("b_data_post", b_data)
+ CALL extract_psy_data % ProvideVariable("cell_post", cell)
+ CALL extract_psy_data % ProvideVariable("colour_post", colour)
+ CALL extract_psy_data % PostEnd
+ """)
assert output in code
# TODO #706: Compilation for LFRic extraction not supported yet.
diff --git a/src/psyclone/tests/domain/lfric/transformations/lfric_haloex_test.py b/src/psyclone/tests/domain/lfric/transformations/lfric_haloex_test.py
index 2e5094d1d6..65b583fb6f 100644
--- a/src/psyclone/tests/domain/lfric/transformations/lfric_haloex_test.py
+++ b/src/psyclone/tests/domain/lfric/transformations/lfric_haloex_test.py
@@ -446,7 +446,7 @@ def test_write_cont_dirty(tmpdir, monkeypatch, annexed):
assert len(hexchs) == 0
# The field that is written to should be marked as dirty.
code = str(psy.gen)
- assert "CALL f1_proxy%set_dirty()\n" in code
+ assert "call f1_proxy%set_dirty()\n" in code
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -633,9 +633,9 @@ def test_stencil_then_w3_read(tmpdir):
assert f4_hex.field.name == "f4"
result = str(psy.gen)
- assert (" IF (f4_proxy%is_dirty(depth=extent)) THEN\n"
- " CALL f4_proxy%halo_exchange(depth=extent)\n"
- " END IF" in result)
+ assert (" if (f4_proxy%is_dirty(depth=extent)) then\n"
+ " call f4_proxy%halo_exchange(depth=extent)\n"
+ " end if" in result)
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -667,4 +667,4 @@ def test_stencil_with_redundant_comp_trans(monkeypatch, tmpdir, annexed):
# redundant computation.
for fidx in range(2, 5):
assert f'''if (f{fidx}_proxy%is_dirty(depth=f{fidx}_extent + 2)) then
- call f{fidx}_proxy%halo_exchange(depth=f{fidx}_extent + 2)''' in result
+ call f{fidx}_proxy%halo_exchange(depth=f{fidx}_extent + 2)''' in result
diff --git a/src/psyclone/tests/domain/lfric/transformations/raise_psyir_2_lfric_alg_trans_test.py b/src/psyclone/tests/domain/lfric/transformations/raise_psyir_2_lfric_alg_trans_test.py
index d409601b67..63c2700646 100644
--- a/src/psyclone/tests/domain/lfric/transformations/raise_psyir_2_lfric_alg_trans_test.py
+++ b/src/psyclone/tests/domain/lfric/transformations/raise_psyir_2_lfric_alg_trans_test.py
@@ -217,7 +217,7 @@ def test_arg_declaration_error(fortran_reader):
code = (
"subroutine setval_c()\n"
" use builtins\n"
- " use constants_mod, only: r_def\n"
+ " use constants_mod\n"
" use field_mod, only : field_type\n"
" type(field_type) :: field\n"
" real(kind=r_def) :: value\n"
diff --git a/src/psyclone/tests/domain/nemo/transformations/acc_update_test.py b/src/psyclone/tests/domain/nemo/transformations/acc_update_test.py
index 1eb25dabf0..bc6f6cc843 100644
--- a/src/psyclone/tests/domain/nemo/transformations/acc_update_test.py
+++ b/src/psyclone/tests/domain/nemo/transformations/acc_update_test.py
@@ -320,7 +320,7 @@ def test_codeblock(fortran_reader, fortran_writer):
acc_update.apply(schedule)
assert isinstance(schedule[1], CodeBlock)
code = fortran_writer(schedule)
- assert (''' !$acc update if_present host(jpi,jpj,jpk,tmask)\n'''
+ assert (''' !$acc update if_present host(jpi,jpj,jpk,tmask)\n\n'''
''' ! PSyclone CodeBlock (unsupported code) reason:\n'''
''' ! - Unsupported statement: Open_Stmt\n'''
''' ! - Unsupported statement: Read_Stmt\n'''
diff --git a/src/psyclone/tests/dynamo0p3_basis_test.py b/src/psyclone/tests/dynamo0p3_basis_test.py
index 8d47e37323..204ffce3a3 100644
--- a/src/psyclone/tests/dynamo0p3_basis_test.py
+++ b/src/psyclone/tests/dynamo0p3_basis_test.py
@@ -45,13 +45,11 @@
from psyclone.configuration import Config
from psyclone.domain.lfric import LFRicConstants, LFRicKern, LFRicKernMetadata
from psyclone.dynamo0p3 import DynBasisFunctions
-from psyclone.f2pygen import ModuleGen
from psyclone.parse.algorithm import parse
from psyclone.parse.utils import ParseError
from psyclone.psyGen import PSyFactory
from psyclone.errors import GenerationError, InternalError
from psyclone.tests.lfric_build import LFRicBuild
-from psyclone.tests.utilities import print_diffs
# constants
@@ -188,115 +186,114 @@ def test_single_kern_eval(tmpdir):
_, invoke_info = parse(os.path.join(BASE_PATH, "6.1_eval_invoke.f90"),
api=API)
psy = PSyFactory(API, distributed_memory=False).create(invoke_info)
- gen_code = str(psy.gen)
-
- assert LFRicBuild(tmpdir).code_compiles(psy)
+ code = str(psy.gen)
# Check module declarations
- expected_module_declns = (
- " USE constants_mod, ONLY: r_def, i_def\n"
- " USE field_mod, ONLY: field_type, field_proxy_type\n")
- assert expected_module_declns in gen_code
+ assert "use constants_mod\n" in code
+ assert "use field_mod, only : field_proxy_type, field_type" in code
# Check subroutine declarations
- expected_decl = (
- " SUBROUTINE invoke_0_testkern_eval_type(f0, cmap)\n"
- " USE testkern_eval_mod, ONLY: testkern_eval_code\n"
- " USE function_space_mod, ONLY: BASIS, DIFF_BASIS\n"
- " TYPE(field_type), intent(in) :: f0, cmap\n"
- " INTEGER(KIND=i_def) cell\n"
- " INTEGER(KIND=i_def) loop0_start, loop0_stop\n"
- " INTEGER(KIND=i_def) df_nodal, df_w0, df_w1\n"
- " REAL(KIND=r_def), allocatable :: basis_w0_on_w0(:,:,:), "
- "diff_basis_w1_on_w0(:,:,:)\n"
- " INTEGER(KIND=i_def) dim_w0, diff_dim_w1\n"
- " REAL(KIND=r_def), pointer :: nodes_w0(:,:) => null()\n"
- " INTEGER(KIND=i_def) nlayers_f0\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: "
- "cmap_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f0_data => null()\n"
- " TYPE(field_proxy_type) f0_proxy, cmap_proxy\n"
- " INTEGER(KIND=i_def), pointer :: map_w0(:,:) => null(), "
- "map_w1(:,:) => null()\n"
- " INTEGER(KIND=i_def) ndf_w0, undf_w0, ndf_w1, undf_w1\n")
- assert expected_decl in gen_code
+ assert " subroutine invoke_0_testkern_eval_type(f0, cmap)" in code
+ assert " use testkern_eval_mod, only : testkern_eval_code" in code
+ assert " use function_space_mod, only : BASIS, DIFF_BASIS" in code
+ assert " type(field_type), intent(in) :: f0" in code
+ assert " type(field_type), intent(in) :: cmap" in code
+ assert " integer(kind=i_def) :: cell" in code
+ assert " integer(kind=i_def) :: loop0_start" in code
+ assert " integer(kind=i_def) :: loop0_stop" in code
+ assert " integer(kind=i_def) :: df_nodal" in code
+ assert " integer(kind=i_def) :: df_w0" in code
+ assert " integer(kind=i_def) :: df_w1" in code
+ assert (" real(kind=r_def), allocatable :: basis_w0_on_w0(:,:,:)"
+ in code)
+ assert (" real(kind=r_def), allocatable :: diff_basis_w1_on_w0(:,:,:)"
+ in code)
+ assert " integer(kind=i_def) :: dim_w0" in code
+ assert " integer(kind=i_def) :: diff_dim_w1" in code
+ assert (" real(kind=r_def), pointer :: nodes_w0(:,:) => null()"
+ in code)
+ assert " integer(kind=i_def) :: nlayers_f0" in code
+ assert ("real(kind=r_def), pointer, dimension(:) :: cmap_data => null()"
+ in code)
+ assert (" real(kind=r_def), pointer, dimension(:) :: f0_data => null()"
+ in code)
+ assert " type(field_proxy_type) :: f0_proxy" in code
+ assert " type(field_proxy_type) :: cmap_proxy" in code
+ assert (" integer(kind=i_def), pointer :: map_w0(:,:) => null()"
+ in code)
+ assert (" integer(kind=i_def), pointer :: map_w1(:,:) => null()"
+ in code)
+ assert " integer(kind=i_def) :: ndf_w0" in code
+ assert " integer(kind=i_def) :: undf_w0" in code
+ assert " integer(kind=i_def) :: ndf_w1" in code
+ assert " integer(kind=i_def) :: undf_w1" in code
# Second, check the executable statements
expected_code = (
- " !\n"
- " ! Initialise field and/or operator proxies\n"
- " !\n"
- " f0_proxy = f0%get_proxy()\n"
- " f0_data => f0_proxy%data\n"
- " cmap_proxy = cmap%get_proxy()\n"
- " cmap_data => cmap_proxy%data\n"
- " !\n"
- " ! Initialise number of layers\n"
- " !\n"
- " nlayers_f0 = f0_proxy%vspace%get_nlayers()\n"
- " !\n"
- " ! Look-up dofmaps for each function space\n"
- " !\n"
- " map_w0 => f0_proxy%vspace%get_whole_dofmap()\n"
- " map_w1 => cmap_proxy%vspace%get_whole_dofmap()\n"
- " !\n"
- " ! Initialise number of DoFs for w0\n"
- " !\n"
- " ndf_w0 = f0_proxy%vspace%get_ndf()\n"
- " undf_w0 = f0_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w1\n"
- " !\n"
- " ndf_w1 = cmap_proxy%vspace%get_ndf()\n"
- " undf_w1 = cmap_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise evaluator-related quantities for the target "
+ "\n"
+ " ! Initialise field and/or operator proxies\n"
+ " f0_proxy = f0%get_proxy()\n"
+ " f0_data => f0_proxy%data\n"
+ " cmap_proxy = cmap%get_proxy()\n"
+ " cmap_data => cmap_proxy%data\n"
+ "\n"
+ " ! Initialise number of layers\n"
+ " nlayers_f0 = f0_proxy%vspace%get_nlayers()\n"
+ "\n"
+ " ! Look-up dofmaps for each function space\n"
+ " map_w0 => f0_proxy%vspace%get_whole_dofmap()\n"
+ " map_w1 => cmap_proxy%vspace%get_whole_dofmap()\n"
+ "\n"
+ " ! Initialise number of DoFs for w0\n"
+ " ndf_w0 = f0_proxy%vspace%get_ndf()\n"
+ " undf_w0 = f0_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w1\n"
+ " ndf_w1 = cmap_proxy%vspace%get_ndf()\n"
+ " undf_w1 = cmap_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise evaluator-related quantities for the target "
"function spaces\n"
- " !\n"
- " nodes_w0 => f0_proxy%vspace%get_nodes()\n"
- " !\n"
- " ! Allocate basis/diff-basis arrays\n"
- " !\n"
- " dim_w0 = f0_proxy%vspace%get_dim_space()\n"
- " diff_dim_w1 = cmap_proxy%vspace%get_dim_space_diff()\n"
- " ALLOCATE (basis_w0_on_w0(dim_w0, ndf_w0, ndf_w0))\n"
- " ALLOCATE (diff_basis_w1_on_w0(diff_dim_w1, ndf_w1, ndf_w0))\n"
- " !\n"
- " ! Compute basis/diff-basis arrays\n"
- " !\n"
- " DO df_nodal=1,ndf_w0\n"
- " DO df_w0=1,ndf_w0\n"
- " basis_w0_on_w0(:,df_w0,df_nodal) = "
- "f0_proxy%vspace%call_function(BASIS,df_w0,nodes_w0(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
- " DO df_nodal=1,ndf_w0\n"
- " DO df_w1=1,ndf_w1\n"
- " diff_basis_w1_on_w0(:,df_w1,df_nodal) = cmap_proxy%vspace%"
- "call_function(DIFF_BASIS,df_w1,nodes_w0(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
- " !\n"
- " ! Set-up all of the loop bounds\n"
- " !\n"
- " loop0_start = 1\n"
- " loop0_stop = f0_proxy%vspace%get_ncell()\n"
- " !\n"
- " ! Call our kernels\n"
- " !\n"
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_eval_code(nlayers_f0, f0_data, "
+ " nodes_w0 => f0_proxy%vspace%get_nodes()\n"
+ "\n"
+ " ! Allocate basis/diff-basis arrays\n"
+ " dim_w0 = f0_proxy%vspace%get_dim_space()\n"
+ " diff_dim_w1 = cmap_proxy%vspace%get_dim_space_diff()\n"
+ " ALLOCATE(basis_w0_on_w0(dim_w0,ndf_w0,ndf_w0))\n"
+ " ALLOCATE(diff_basis_w1_on_w0(diff_dim_w1,ndf_w1,ndf_w0))\n"
+ "\n"
+ " ! Compute basis/diff-basis arrays\n"
+ " do df_nodal = 1, ndf_w0, 1\n"
+ " do df_w0 = 1, ndf_w0, 1\n"
+ " basis_w0_on_w0(:,df_w0,df_nodal) = "
+ "f0_proxy%vspace%call_function(BASIS, df_w0, nodes_w0(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
+ " do df_nodal = 1, ndf_w0, 1\n"
+ " do df_w1 = 1, ndf_w1, 1\n"
+ " diff_basis_w1_on_w0(:,df_w1,df_nodal) = cmap_proxy%vspace%"
+ "call_function(DIFF_BASIS, df_w1, nodes_w0(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
+ "\n"
+ " ! Set-up all of the loop bounds\n"
+ " loop0_start = 1\n"
+ " loop0_stop = f0_proxy%vspace%get_ncell()\n"
+ "\n"
+ " ! Call kernels\n"
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_eval_code(nlayers_f0, f0_data, "
"cmap_data, ndf_w0, undf_w0, map_w0(:,cell), basis_w0_on_w0, "
"ndf_w1, undf_w1, map_w1(:,cell), diff_basis_w1_on_w0)\n"
- " END DO\n"
- " !\n"
+ " enddo\n"
)
- assert expected_code in gen_code
+ assert expected_code in code
dealloc_code = (
- " DEALLOCATE (basis_w0_on_w0, diff_basis_w1_on_w0)\n"
- " !\n"
- " END SUBROUTINE invoke_0_testkern_eval_type\n"
+ " DEALLOCATE(basis_w0_on_w0, diff_basis_w1_on_w0)\n"
+ "\n"
+ " end subroutine invoke_0_testkern_eval_type\n"
)
- assert dealloc_code in gen_code
+ assert dealloc_code in code
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_single_kern_eval_op(tmpdir):
@@ -306,70 +303,73 @@ def test_single_kern_eval_op(tmpdir):
_, invoke_info = parse(os.path.join(BASE_PATH, "6.1.1_eval_op_invoke.f90"),
api=API)
psy = PSyFactory(API, distributed_memory=False).create(invoke_info)
- gen_code = str(psy.gen)
-
- assert LFRicBuild(tmpdir).code_compiles(psy)
+ code = str(psy.gen)
# Kernel writes to an operator, the 'to' space of which is W0. Kernel
# requires basis on W2 ('from'-space of operator) and diff-basis on
# W3 (space of the field).
- decln_output = (
- " USE function_space_mod, ONLY: BASIS, DIFF_BASIS\n"
- " TYPE(field_type), intent(in) :: f1\n"
- " TYPE(operator_type), intent(in) :: op1\n"
- " INTEGER(KIND=i_def) cell\n"
- " INTEGER(KIND=i_def) loop0_start, loop0_stop\n"
- " INTEGER(KIND=i_def) df_nodal, df_w2, df_w3\n"
- " REAL(KIND=r_def), allocatable :: basis_w2_on_w0(:,:,:), "
- "diff_basis_w3_on_w0(:,:,:)\n"
- " INTEGER(KIND=i_def) dim_w2, diff_dim_w3\n"
- " REAL(KIND=r_def), pointer :: nodes_w0(:,:) => null()\n"
- " INTEGER(KIND=i_def) nlayers_op1\n"
- " REAL(KIND=r_def), pointer, dimension(:,:,:) :: "
- "op1_local_stencil => null()\n"
- " TYPE(operator_proxy_type) op1_proxy\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f1_data => null()\n"
- " TYPE(field_proxy_type) f1_proxy\n"
- " INTEGER(KIND=i_def), pointer :: map_w3(:,:) => null()\n"
- " INTEGER(KIND=i_def) ndf_w0, ndf_w2, ndf_w3, undf_w3\n")
- assert decln_output in gen_code
+ assert "use function_space_mod, only : BASIS, DIFF_BASIS" in code
+ assert "type(field_type), intent(in) :: f1" in code
+ assert "type(operator_type), intent(in) :: op1" in code
+ assert "integer(kind=i_def) :: cell" in code
+ assert "integer(kind=i_def) :: loop0_start" in code
+ assert "integer(kind=i_def) :: loop0_stop" in code
+ assert "integer(kind=i_def) :: df_nodal" in code
+ assert "integer(kind=i_def) :: df_w2" in code
+ assert "integer(kind=i_def) :: df_w3" in code
+ assert "real(kind=r_def), allocatable :: basis_w2_on_w0(:,:,:)" in code
+ assert ("real(kind=r_def), allocatable :: diff_basis_w3_on_w0(:,:,:)"
+ in code)
+ assert "integer(kind=i_def) :: dim_w2" in code
+ assert "integer(kind=i_def) :: diff_dim_w3" in code
+ assert "real(kind=r_def), pointer :: nodes_w0(:,:) => null()" in code
+ assert "integer(kind=i_def) :: nlayers_op1" in code
+ assert ("real(kind=r_def), pointer, dimension(:,:,:) :: "
+ "op1_local_stencil => null()" in code)
+ assert "type(operator_proxy_type) :: op1_proxy" in code
+ assert ("real(kind=r_def), pointer, dimension(:) :: f1_data => null()"
+ in code)
+ assert "type(field_proxy_type) :: f1_proxy" in code
+ assert "integer(kind=i_def), pointer :: map_w3(:,:) => null()" in code
+ assert "integer(kind=i_def) :: ndf_w0" in code
+ assert "integer(kind=i_def) :: ndf_w2" in code
+ assert "integer(kind=i_def) :: ndf_w3" in code
+ assert "integer(kind=i_def) :: undf_w3" in code
init_output = (
- " nodes_w0 => op1_proxy%fs_to%get_nodes()\n"
- " !\n"
- " ! Allocate basis/diff-basis arrays\n"
- " !\n"
- " dim_w2 = op1_proxy%fs_from%get_dim_space()\n"
- " diff_dim_w3 = f1_proxy%vspace%get_dim_space_diff()\n"
- " ALLOCATE (basis_w2_on_w0(dim_w2, ndf_w2, ndf_w0))\n"
- " ALLOCATE (diff_basis_w3_on_w0(diff_dim_w3, ndf_w3, ndf_w0))\n"
- " !\n"
- " ! Compute basis/diff-basis arrays\n"
- " !\n"
- " DO df_nodal=1,ndf_w0\n"
- " DO df_w2=1,ndf_w2\n"
- " basis_w2_on_w0(:,df_w2,df_nodal) = op1_proxy%fs_from%"
- "call_function(BASIS,df_w2,nodes_w0(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
- " DO df_nodal=1,ndf_w0\n"
- " DO df_w3=1,ndf_w3\n"
- " diff_basis_w3_on_w0(:,df_w3,df_nodal) = f1_proxy%vspace%"
- "call_function(DIFF_BASIS,df_w3,nodes_w0(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
+ " nodes_w0 => op1_proxy%fs_to%get_nodes()\n"
+ "\n"
+ " ! Allocate basis/diff-basis arrays\n"
+ " dim_w2 = op1_proxy%fs_from%get_dim_space()\n"
+ " diff_dim_w3 = f1_proxy%vspace%get_dim_space_diff()\n"
+ " ALLOCATE(basis_w2_on_w0(dim_w2,ndf_w2,ndf_w0))\n"
+ " ALLOCATE(diff_basis_w3_on_w0(diff_dim_w3,ndf_w3,ndf_w0))\n"
+ "\n"
+ " ! Compute basis/diff-basis arrays\n"
+ " do df_nodal = 1, ndf_w0, 1\n"
+ " do df_w2 = 1, ndf_w2, 1\n"
+ " basis_w2_on_w0(:,df_w2,df_nodal) = op1_proxy%fs_from%"
+ "call_function(BASIS, df_w2, nodes_w0(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
+ " do df_nodal = 1, ndf_w0, 1\n"
+ " do df_w3 = 1, ndf_w3, 1\n"
+ " diff_basis_w3_on_w0(:,df_w3,df_nodal) = f1_proxy%vspace%"
+ "call_function(DIFF_BASIS, df_w3, nodes_w0(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
)
- assert init_output in gen_code
- assert "loop0_stop = op1_proxy%fs_from%get_ncell()\n" in gen_code
+ assert init_output in code
+ assert "loop0_stop = op1_proxy%fs_from%get_ncell()\n" in code
kern_call = (
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_eval_op_code(cell, nlayers_op1, "
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_eval_op_code(cell, nlayers_op1, "
"op1_proxy%ncell_3d, op1_local_stencil, f1_data, ndf_w0, ndf_w2, "
"basis_w2_on_w0, ndf_w3, undf_w3, map_w3(:,cell), "
"diff_basis_w3_on_w0)\n"
- " END DO\n")
- assert kern_call in gen_code
- dealloc = (" DEALLOCATE (basis_w2_on_w0, diff_basis_w3_on_w0)\n")
- assert dealloc in gen_code
+ " enddo\n")
+ assert kern_call in code
+ assert " DEALLOCATE(basis_w2_on_w0, diff_basis_w3_on_w0)\n" in code
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_two_qr_same_shape(tmpdir):
@@ -379,148 +379,181 @@ def test_two_qr_same_shape(tmpdir):
"1.1.2_single_invoke_2qr.f90"),
api=API)
psy = PSyFactory(API, distributed_memory=False).create(invoke_info)
- gen_code = str(psy.gen)
-
- assert LFRicBuild(tmpdir).code_compiles(psy)
-
- expected_module_declns = (
- " USE constants_mod, ONLY: r_def, i_def\n"
- " USE field_mod, ONLY: field_type, field_proxy_type\n")
- assert expected_module_declns in gen_code
-
- expected_declns = (
- " SUBROUTINE invoke_0(f1, f2, m1, a, m2, istp, g1, g2, n1, b, "
- "n2, qr, qr2)\n"
- " USE testkern_qr_mod, ONLY: testkern_qr_code\n"
- " USE quadrature_xyoz_mod, ONLY: quadrature_xyoz_type, "
- "quadrature_xyoz_proxy_type\n"
- " USE function_space_mod, ONLY: BASIS, DIFF_BASIS\n"
- " REAL(KIND=r_def), intent(in) :: a, b\n"
- " INTEGER(KIND=i_def), intent(in) :: istp\n"
- " TYPE(field_type), intent(in) :: f1, f2, m1, m2, g1, g2, "
- "n1, n2\n"
- " TYPE(quadrature_xyoz_type), intent(in) :: qr, qr2\n"
- " INTEGER(KIND=i_def) cell\n"
- " INTEGER(KIND=i_def) loop1_start, loop1_stop\n"
- " INTEGER(KIND=i_def) loop0_start, loop0_stop\n"
- " REAL(KIND=r_def), allocatable :: basis_w1_qr(:,:,:,:), "
- "diff_basis_w2_qr(:,:,:,:), basis_w3_qr(:,:,:,:), "
- "diff_basis_w3_qr(:,:,:,:), basis_w1_qr2(:,:,:,:), "
- "diff_basis_w2_qr2(:,:,:,:), basis_w3_qr2(:,:,:,:), "
- "diff_basis_w3_qr2(:,:,:,:)\n"
- " INTEGER(KIND=i_def) dim_w1, diff_dim_w2, dim_w3, diff_dim_w3\n"
- " REAL(KIND=r_def), pointer :: weights_xy_qr2(:) => null(), "
- "weights_z_qr2(:) => null()\n"
- " INTEGER(KIND=i_def) np_xy_qr2, np_z_qr2\n"
- " REAL(KIND=r_def), pointer :: weights_xy_qr(:) => null(), "
- "weights_z_qr(:) => null()\n"
- " INTEGER(KIND=i_def) np_xy_qr, np_z_qr\n"
- " INTEGER(KIND=i_def) nlayers_f1, nlayers_g1\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: n2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: n1_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: g2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: g1_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m1_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f1_data => null()\n"
- " TYPE(field_proxy_type) f1_proxy, f2_proxy, m1_proxy, "
- "m2_proxy, g1_proxy, g2_proxy, n1_proxy, n2_proxy\n"
- " TYPE(quadrature_xyoz_proxy_type) qr_proxy, qr2_proxy\n"
- " INTEGER(KIND=i_def), pointer :: map_w1(:,:) => null(), "
- "map_w2(:,:) => null(), map_w3(:,:) => null()\n"
- " INTEGER(KIND=i_def) ndf_w1, undf_w1, ndf_w2, undf_w2, "
- "ndf_w3, undf_w3\n"
- )
- assert expected_declns in gen_code
+ code = str(psy.gen)
+
+ assert "use constants_mod\n" in code
+ assert "use field_mod, only : field_proxy_type, field_type" in code
+
+ assert ("subroutine invoke_0(f1, f2, m1, a, m2, istp, g1, g2, n1, b, "
+ "n2, qr, qr2)" in code)
+ assert "use testkern_qr_mod, only : testkern_qr_code" in code
+ assert ("use quadrature_xyoz_mod, only : quadrature_xyoz_proxy_type, "
+ "quadrature_xyoz_type" in code)
+ assert "use function_space_mod, only : BASIS, DIFF_BASIS" in code
+ assert "real(kind=r_def), intent(in) :: a" in code
+ assert "real(kind=r_def), intent(in) :: b" in code
+ assert "integer(kind=i_def), intent(in) :: istp" in code
+ assert "type(field_type), intent(in) :: f1" in code
+ assert "type(field_type), intent(in) :: f2" in code
+ assert "type(field_type), intent(in) :: m1" in code
+ assert "type(field_type), intent(in) :: m2" in code
+ assert "type(field_type), intent(in) :: g1" in code
+ assert "type(field_type), intent(in) :: g2" in code
+ assert "type(field_type), intent(in) :: n1" in code
+ assert "type(field_type), intent(in) :: n2" in code
+ assert "type(quadrature_xyoz_type), intent(in) :: qr" in code
+ assert "type(quadrature_xyoz_type), intent(in) :: qr2" in code
+ assert "integer(kind=i_def) :: cell" in code
+ assert "integer(kind=i_def) :: loop1_start" in code
+ assert "integer(kind=i_def) :: loop1_stop" in code
+ assert "integer(kind=i_def) :: loop0_start" in code
+ assert "integer(kind=i_def) :: loop0_stop" in code
+ assert "real(kind=r_def), allocatable :: basis_w1_qr(:,:,:,:)" in code
+ assert ("real(kind=r_def), allocatable :: diff_basis_w2_qr(:,:,:,:)"
+ in code)
+ assert "real(kind=r_def), allocatable :: basis_w3_qr(:,:,:,:)" in code
+ assert ("real(kind=r_def), allocatable :: diff_basis_w3_qr(:,:,:,:)"
+ in code)
+ assert "real(kind=r_def), allocatable :: basis_w1_qr2(:,:,:,:)" in code
+ assert ("real(kind=r_def), allocatable :: diff_basis_w2_qr2(:,:,:,:)"
+ in code)
+ assert "real(kind=r_def), allocatable :: basis_w3_qr2(:,:,:,:)" in code
+ assert ("real(kind=r_def), allocatable :: diff_basis_w3_qr2(:,:,:,:)"
+ in code)
+ assert "integer(kind=i_def) :: dim_w1" in code
+ assert "integer(kind=i_def) :: diff_dim_w2" in code
+ assert "integer(kind=i_def) :: dim_w3" in code
+ assert "integer(kind=i_def) :: diff_dim_w3" in code
+ assert ("real(kind=r_def), pointer :: weights_xy_qr2(:) => null()"
+ in code)
+ assert ("real(kind=r_def), pointer :: weights_z_qr2(:) => null()"
+ in code)
+ assert "integer(kind=i_def) :: np_xy_qr2" in code
+ assert "integer(kind=i_def) :: np_z_qr2" in code
+ assert ("real(kind=r_def), pointer :: weights_xy_qr(:) => null()"
+ in code)
+ assert "real(kind=r_def), pointer :: weights_z_qr(:) => null()" in code
+ assert "integer(kind=i_def) :: np_xy_qr" in code
+ assert "integer(kind=i_def) :: np_z_qr" in code
+ assert "integer(kind=i_def) :: nlayers_f1" in code
+ assert "integer(kind=i_def) :: nlayers_g1" in code
+ assert ("real(kind=r_def), pointer, dimension(:) :: n2_data => null()"
+ in code)
+ assert ("real(kind=r_def), pointer, dimension(:) :: n1_data => null()"
+ in code)
+ assert ("real(kind=r_def), pointer, dimension(:) :: g2_data => null()"
+ in code)
+ assert ("real(kind=r_def), pointer, dimension(:) :: g1_data => null()"
+ in code)
+ assert ("real(kind=r_def), pointer, dimension(:) :: m2_data => null()"
+ in code)
+ assert ("real(kind=r_def), pointer, dimension(:) :: m1_data => null()"
+ in code)
+ assert ("real(kind=r_def), pointer, dimension(:) :: f2_data => null()"
+ in code)
+ assert ("real(kind=r_def), pointer, dimension(:) :: f1_data => null()"
+ in code)
+ assert "type(field_proxy_type) :: f1_proxy" in code
+ assert "type(field_proxy_type) :: f2_proxy" in code
+ assert "type(field_proxy_type) :: m1_proxy" in code
+ assert "type(field_proxy_type) :: m2_proxy" in code
+ assert "type(field_proxy_type) :: g1_proxy" in code
+ assert "type(field_proxy_type) :: g2_proxy" in code
+ assert "type(field_proxy_type) :: n1_proxy" in code
+ assert "type(field_proxy_type) :: n2_proxy" in code
+ assert "type(quadrature_xyoz_proxy_type) :: qr_proxy" in code
+ assert "type(quadrature_xyoz_proxy_type) :: qr2_proxy" in code
+ assert "integer(kind=i_def), pointer :: map_w1(:,:) => null()" in code
+ assert "integer(kind=i_def), pointer :: map_w2(:,:) => null()" in code
+ assert "integer(kind=i_def), pointer :: map_w3(:,:) => null()" in code
+ assert "integer(kind=i_def) :: ndf_w1" in code
+ assert "integer(kind=i_def) :: undf_w1" in code
+ assert "integer(kind=i_def) :: ndf_w2" in code
+ assert "integer(kind=i_def) :: undf_w2" in code
+ assert "integer(kind=i_def) :: ndf_w3" in code
+ assert "integer(kind=i_def) :: undf_w3" in code
+ assert "integer(kind=i_def), pointer :: map_w1(:,:) => null()" in code
+ assert "integer(kind=i_def), pointer :: map_w2(:,:) => null()" in code
+ assert "integer(kind=i_def), pointer :: map_w3(:,:) => null()" in code
expected_code = (
- " !\n"
- " ! Look-up quadrature variables\n"
- " !\n"
- " qr_proxy = qr%get_quadrature_proxy()\n"
- " np_xy_qr = qr_proxy%np_xy\n"
- " np_z_qr = qr_proxy%np_z\n"
- " weights_xy_qr => qr_proxy%weights_xy\n"
- " weights_z_qr => qr_proxy%weights_z\n"
- " qr2_proxy = qr2%get_quadrature_proxy()\n"
- " np_xy_qr2 = qr2_proxy%np_xy\n"
- " np_z_qr2 = qr2_proxy%np_z\n"
- " weights_xy_qr2 => qr2_proxy%weights_xy\n"
- " weights_z_qr2 => qr2_proxy%weights_z\n"
- " !\n"
- " ! Allocate basis/diff-basis arrays\n"
- " !\n"
- " dim_w1 = f1_proxy%vspace%get_dim_space()\n"
- " diff_dim_w2 = f2_proxy%vspace%get_dim_space_diff()\n"
- " dim_w3 = m2_proxy%vspace%get_dim_space()\n"
- " diff_dim_w3 = m2_proxy%vspace%get_dim_space_diff()\n"
- " ALLOCATE (basis_w1_qr(dim_w1, ndf_w1, np_xy_qr, np_z_qr))\n"
- " ALLOCATE (diff_basis_w2_qr(diff_dim_w2, ndf_w2, np_xy_qr, "
+ " ! Look-up quadrature variables\n"
+ " qr_proxy = qr%get_quadrature_proxy()\n"
+ " np_xy_qr = qr_proxy%np_xy\n"
+ " np_z_qr = qr_proxy%np_z\n"
+ " weights_xy_qr => qr_proxy%weights_xy\n"
+ " weights_z_qr => qr_proxy%weights_z\n"
+ " qr2_proxy = qr2%get_quadrature_proxy()\n"
+ " np_xy_qr2 = qr2_proxy%np_xy\n"
+ " np_z_qr2 = qr2_proxy%np_z\n"
+ " weights_xy_qr2 => qr2_proxy%weights_xy\n"
+ " weights_z_qr2 => qr2_proxy%weights_z\n"
+ "\n"
+ " ! Allocate basis/diff-basis arrays\n"
+ " dim_w1 = f1_proxy%vspace%get_dim_space()\n"
+ " diff_dim_w2 = f2_proxy%vspace%get_dim_space_diff()\n"
+ " dim_w3 = m2_proxy%vspace%get_dim_space()\n"
+ " diff_dim_w3 = m2_proxy%vspace%get_dim_space_diff()\n"
+ " ALLOCATE(basis_w1_qr(dim_w1,ndf_w1,np_xy_qr,np_z_qr))\n"
+ " ALLOCATE(diff_basis_w2_qr(diff_dim_w2,ndf_w2,np_xy_qr,"
"np_z_qr))\n"
- " ALLOCATE (basis_w3_qr(dim_w3, ndf_w3, np_xy_qr, np_z_qr))\n"
- " ALLOCATE (diff_basis_w3_qr(diff_dim_w3, ndf_w3, np_xy_qr, "
+ " ALLOCATE(basis_w3_qr(dim_w3,ndf_w3,np_xy_qr,np_z_qr))\n"
+ " ALLOCATE(diff_basis_w3_qr(diff_dim_w3,ndf_w3,np_xy_qr,"
"np_z_qr))\n"
- " ALLOCATE (basis_w1_qr2(dim_w1, ndf_w1, np_xy_qr2, np_z_qr2))\n"
- " ALLOCATE (diff_basis_w2_qr2(diff_dim_w2, ndf_w2, np_xy_qr2, "
+ " ALLOCATE(basis_w1_qr2(dim_w1,ndf_w1,np_xy_qr2,np_z_qr2))\n"
+ " ALLOCATE(diff_basis_w2_qr2(diff_dim_w2,ndf_w2,np_xy_qr2,"
"np_z_qr2))\n"
- " ALLOCATE (basis_w3_qr2(dim_w3, ndf_w3, np_xy_qr2, np_z_qr2))\n"
- " ALLOCATE (diff_basis_w3_qr2(diff_dim_w3, ndf_w3, np_xy_qr2, "
+ " ALLOCATE(basis_w3_qr2(dim_w3,ndf_w3,np_xy_qr2,np_z_qr2))\n"
+ " ALLOCATE(diff_basis_w3_qr2(diff_dim_w3,ndf_w3,np_xy_qr2,"
"np_z_qr2))\n"
- " !\n"
- " ! Compute basis/diff-basis arrays\n"
- " !\n"
- " CALL qr%compute_function("
+ "\n"
+ " ! Compute basis/diff-basis arrays\n"
+ " call qr%compute_function("
"BASIS, f1_proxy%vspace, dim_w1, ndf_w1, basis_w1_qr)\n"
- " CALL qr%compute_function(DIFF_BASIS, "
+ " call qr%compute_function(DIFF_BASIS, "
"f2_proxy%vspace, diff_dim_w2, ndf_w2, diff_basis_w2_qr)\n"
- " CALL qr%compute_function("
+ " call qr%compute_function("
"BASIS, m2_proxy%vspace, dim_w3, ndf_w3, basis_w3_qr)\n"
- " CALL qr%compute_function(DIFF_BASIS, "
+ " call qr%compute_function(DIFF_BASIS, "
"m2_proxy%vspace, diff_dim_w3, ndf_w3, diff_basis_w3_qr)\n"
- " CALL qr2%compute_function("
+ " call qr2%compute_function("
"BASIS, g1_proxy%vspace, dim_w1, ndf_w1, basis_w1_qr2)\n"
- " CALL qr2%compute_function(DIFF_BASIS, "
+ " call qr2%compute_function(DIFF_BASIS, "
"g2_proxy%vspace, diff_dim_w2, ndf_w2, diff_basis_w2_qr2)\n"
- " CALL qr2%compute_function("
+ " call qr2%compute_function("
"BASIS, n2_proxy%vspace, dim_w3, ndf_w3, basis_w3_qr2)\n"
- " CALL qr2%compute_function(DIFF_BASIS, "
+ " call qr2%compute_function(DIFF_BASIS, "
"n2_proxy%vspace, diff_dim_w3, ndf_w3, diff_basis_w3_qr2)\n"
- " !\n")
- if expected_code not in gen_code:
- print_diffs(expected_code, gen_code)
- assert 0
- assert (" loop0_stop = f1_proxy%vspace%get_ncell()\n"
- " loop1_start = 1\n"
- " loop1_stop = g1_proxy%vspace%get_ncell()\n" in gen_code)
+ "\n")
+ assert expected_code in code
+ assert (" loop0_stop = f1_proxy%vspace%get_ncell()\n"
+ " loop1_start = 1\n"
+ " loop1_stop = g1_proxy%vspace%get_ncell()\n" in code)
expected_kern_call = (
- " ! Call our kernels\n"
- " !\n"
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_qr_code(nlayers_f1, f1_data, f2_data, "
+ " ! Call kernels\n"
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_qr_code(nlayers_f1, f1_data, f2_data, "
"m1_data, a, m2_data, istp, "
"ndf_w1, undf_w1, map_w1(:,cell), basis_w1_qr, "
"ndf_w2, undf_w2, map_w2(:,cell), diff_basis_w2_qr, "
"ndf_w3, undf_w3, map_w3(:,cell), basis_w3_qr, diff_basis_w3_qr, "
"np_xy_qr, np_z_qr, weights_xy_qr, weights_z_qr)\n"
- " END DO\n"
- " DO cell = loop1_start, loop1_stop, 1\n"
- " CALL testkern_qr_code(nlayers_g1, g1_data, g2_data, "
+ " enddo\n"
+ " do cell = loop1_start, loop1_stop, 1\n"
+ " call testkern_qr_code(nlayers_g1, g1_data, g2_data, "
"n1_data, b, n2_data, istp, "
"ndf_w1, undf_w1, map_w1(:,cell), basis_w1_qr2, "
"ndf_w2, undf_w2, map_w2(:,cell), diff_basis_w2_qr2, "
"ndf_w3, undf_w3, map_w3(:,cell), basis_w3_qr2, diff_basis_w3_qr2, "
"np_xy_qr2, np_z_qr2, weights_xy_qr2, weights_z_qr2)\n"
- " END DO\n"
- " !\n"
- " ! Deallocate basis arrays\n"
- " !\n"
- " DEALLOCATE (basis_w1_qr, basis_w1_qr2, basis_w3_qr, "
+ " enddo\n"
+ "\n"
+ " ! Deallocate basis arrays\n"
+ " DEALLOCATE(basis_w1_qr, basis_w1_qr2, basis_w3_qr, "
"basis_w3_qr2, diff_basis_w2_qr, diff_basis_w2_qr2, diff_basis_w3_qr, "
"diff_basis_w3_qr2)\n"
)
- if expected_kern_call not in gen_code:
- print_diffs(expected_kern_call, gen_code)
- assert 0
+ assert expected_kern_call in code
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_two_identical_qr(tmpdir):
@@ -530,69 +563,67 @@ def test_two_identical_qr(tmpdir):
os.path.join(BASE_PATH, "1.1.3_single_invoke_2_identical_qr.f90"),
api=API)
psy = PSyFactory(API, distributed_memory=False).create(invoke_info)
- gen_code = str(psy.gen)
-
- assert LFRicBuild(tmpdir).code_compiles(psy)
+ code = str(psy.gen)
expected_init = (
- " ! Look-up quadrature variables\n"
- " !\n"
- " qr_proxy = qr%get_quadrature_proxy()\n"
- " np_xy_qr = qr_proxy%np_xy\n"
- " np_z_qr = qr_proxy%np_z\n"
- " weights_xy_qr => qr_proxy%weights_xy\n"
- " weights_z_qr => qr_proxy%weights_z\n"
- " !\n")
- assert expected_init in gen_code
+ " ! Look-up quadrature variables\n"
+ " qr_proxy = qr%get_quadrature_proxy()\n"
+ " np_xy_qr = qr_proxy%np_xy\n"
+ " np_z_qr = qr_proxy%np_z\n"
+ " weights_xy_qr => qr_proxy%weights_xy\n"
+ " weights_z_qr => qr_proxy%weights_z\n"
+ "\n")
+ assert expected_init in code
expected_alloc = (
- " !\n"
- " dim_w1 = f1_proxy%vspace%get_dim_space()\n"
- " diff_dim_w2 = f2_proxy%vspace%get_dim_space_diff()\n"
- " dim_w3 = m2_proxy%vspace%get_dim_space()\n"
- " diff_dim_w3 = m2_proxy%vspace%get_dim_space_diff()\n"
- " ALLOCATE (basis_w1_qr(dim_w1, ndf_w1, np_xy_qr, np_z_qr))\n"
- " ALLOCATE (diff_basis_w2_qr(diff_dim_w2, ndf_w2, np_xy_qr, "
+ "\n"
+ " dim_w1 = f1_proxy%vspace%get_dim_space()\n"
+ " diff_dim_w2 = f2_proxy%vspace%get_dim_space_diff()\n"
+ " dim_w3 = m2_proxy%vspace%get_dim_space()\n"
+ " diff_dim_w3 = m2_proxy%vspace%get_dim_space_diff()\n"
+ " ALLOCATE(basis_w1_qr(dim_w1,ndf_w1,np_xy_qr,np_z_qr))\n"
+ " ALLOCATE(diff_basis_w2_qr(diff_dim_w2,ndf_w2,np_xy_qr,"
"np_z_qr))\n"
- " ALLOCATE (basis_w3_qr(dim_w3, ndf_w3, np_xy_qr, np_z_qr))\n"
- " ALLOCATE (diff_basis_w3_qr(diff_dim_w3, ndf_w3, np_xy_qr, "
+ " ALLOCATE(basis_w3_qr(dim_w3,ndf_w3,np_xy_qr,np_z_qr))\n"
+ " ALLOCATE(diff_basis_w3_qr(diff_dim_w3,ndf_w3,np_xy_qr,"
"np_z_qr))\n"
- " !\n")
- assert expected_alloc in gen_code
+ "\n")
+ assert expected_alloc in code
expected_basis_init = (
- " !\n"
- " CALL qr%compute_function(BASIS, f1_proxy%vspace, "
+ "\n"
+ " call qr%compute_function(BASIS, f1_proxy%vspace, "
"dim_w1, ndf_w1, basis_w1_qr)\n"
- " CALL qr%compute_function(DIFF_BASIS, f2_proxy%vspace, "
+ " call qr%compute_function(DIFF_BASIS, f2_proxy%vspace, "
"diff_dim_w2, ndf_w2, diff_basis_w2_qr)\n"
- " CALL qr%compute_function(BASIS, m2_proxy%vspace, "
+ " call qr%compute_function(BASIS, m2_proxy%vspace, "
"dim_w3, ndf_w3, basis_w3_qr)\n"
- " CALL qr%compute_function(DIFF_BASIS, m2_proxy%vspace, "
+ " call qr%compute_function(DIFF_BASIS, m2_proxy%vspace, "
"diff_dim_w3, ndf_w3, diff_basis_w3_qr)\n"
- " !\n")
- assert expected_basis_init in gen_code
- assert (" loop0_stop = f1_proxy%vspace%get_ncell()\n"
- " loop1_start = 1\n"
- " loop1_stop = g1_proxy%vspace%get_ncell()\n" in gen_code)
+ "\n")
+ assert expected_basis_init in code
+ assert (" loop0_stop = f1_proxy%vspace%get_ncell()\n"
+ " loop1_start = 1\n"
+ " loop1_stop = g1_proxy%vspace%get_ncell()\n" in code)
expected_kern_call = (
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_qr_code(nlayers_f1, f1_data, f2_data,"
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_qr_code(nlayers_f1, f1_data, f2_data,"
" m1_data, a, m2_data, istp, ndf_w1, undf_w1, "
"map_w1(:,cell), basis_w1_qr, ndf_w2, undf_w2, map_w2(:,cell), "
"diff_basis_w2_qr, ndf_w3, undf_w3, map_w3(:,cell), basis_w3_qr, "
"diff_basis_w3_qr, np_xy_qr, np_z_qr, weights_xy_qr, weights_z_qr)\n"
- " END DO\n"
- " DO cell = loop1_start, loop1_stop, 1\n"
- " CALL testkern_qr_code(nlayers_g1, g1_data, g2_data, "
+ " enddo\n"
+ " do cell = loop1_start, loop1_stop, 1\n"
+ " call testkern_qr_code(nlayers_g1, g1_data, g2_data, "
"n1_data, b, n2_data, istp, ndf_w1, undf_w1, "
"map_w1(:,cell), basis_w1_qr, ndf_w2, undf_w2, map_w2(:,cell), "
"diff_basis_w2_qr, ndf_w3, undf_w3, map_w3(:,cell), basis_w3_qr, "
"diff_basis_w3_qr, np_xy_qr, np_z_qr, weights_xy_qr, weights_z_qr)\n"
- " END DO\n")
- assert expected_kern_call in gen_code
+ " enddo\n")
+ assert expected_kern_call in code
expected_dealloc = (
- "DEALLOCATE (basis_w1_qr, basis_w3_qr, diff_basis_w2_qr, "
+ "DEALLOCATE(basis_w1_qr, basis_w3_qr, diff_basis_w2_qr, "
"diff_basis_w3_qr)")
- assert expected_dealloc in gen_code
+ assert expected_dealloc in code
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_two_qr_different_shapes(tmpdir):
@@ -602,36 +633,35 @@ def test_two_qr_different_shapes(tmpdir):
"1.1.8_single_invoke_2qr_shapes.f90"),
api=API)
psy = PSyFactory(API, distributed_memory=False).create(invoke_info)
- gen_code = str(psy.gen)
+ code = str(psy.gen)
- assert LFRicBuild(tmpdir).code_compiles(psy)
-
- assert "TYPE(quadrature_face_proxy_type) qrf_proxy" in gen_code
- assert "TYPE(quadrature_xyoz_proxy_type) qr_proxy" in gen_code
+ assert "type(quadrature_face_proxy_type) :: qrf_proxy" in code
+ assert "type(quadrature_xyoz_proxy_type) :: qr_proxy" in code
- assert "qr_proxy = qr%get_quadrature_proxy()" in gen_code
- assert "np_xy_qr = qr_proxy%np_xy" in gen_code
- assert "np_z_qr = qr_proxy%np_z" in gen_code
- assert "weights_xy_qr => qr_proxy%weights_xy" in gen_code
- assert "weights_z_qr => qr_proxy%weights_z" in gen_code
+ assert "qr_proxy = qr%get_quadrature_proxy()" in code
+ assert "np_xy_qr = qr_proxy%np_xy" in code
+ assert "np_z_qr = qr_proxy%np_z" in code
+ assert "weights_xy_qr => qr_proxy%weights_xy" in code
+ assert "weights_z_qr => qr_proxy%weights_z" in code
- assert "qrf_proxy = qrf%get_quadrature_proxy()" in gen_code
- assert "np_xyz_qrf = qrf_proxy%np_xyz" in gen_code
- assert "nfaces_qrf = qrf_proxy%nfaces" in gen_code
- assert "weights_xyz_qrf => qrf_proxy%weights_xyz" in gen_code
+ assert "qrf_proxy = qrf%get_quadrature_proxy()" in code
+ assert "np_xyz_qrf = qrf_proxy%np_xyz" in code
+ assert "nfaces_qrf = qrf_proxy%nfaces" in code
+ assert "weights_xyz_qrf => qrf_proxy%weights_xyz" in code
- assert ("CALL testkern_qr_code(nlayers_f1, f1_data, f2_data, "
+ assert ("call testkern_qr_code(nlayers_f1, f1_data, f2_data, "
"m1_data, a, m2_data, istp, ndf_w1, undf_w1, "
"map_w1(:,cell), basis_w1_qr, ndf_w2, undf_w2, map_w2(:,cell), "
"diff_basis_w2_qr, ndf_w3, undf_w3, map_w3(:,cell), basis_w3_qr, "
"diff_basis_w3_qr, np_xy_qr, np_z_qr, weights_xy_qr, weights_z_qr)"
- in gen_code)
- assert ("CALL testkern_qr_faces_code(nlayers_f1, f1_data, "
+ in code)
+ assert ("call testkern_qr_faces_code(nlayers_f1, f1_data, "
"f2_data, m1_data, m2_data, ndf_w1, undf_w1, "
"map_w1(:,cell), basis_w1_qrf, ndf_w2, undf_w2, map_w2(:,cell), "
"diff_basis_w2_qrf, ndf_w3, undf_w3, map_w3(:,cell), basis_w3_qrf,"
" diff_basis_w3_qrf, nfaces_qrf, np_xyz_qrf, weights_xyz_qrf)"
- in gen_code)
+ in code)
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_anyw2(tmpdir, dist_mem):
@@ -644,39 +674,34 @@ def test_anyw2(tmpdir, dist_mem):
distributed_memory=dist_mem).create(invoke_info)
generated_code = str(psy.gen)
- assert LFRicBuild(tmpdir).code_compiles(psy)
-
output = (
- " ! Initialise number of DoFs for any_w2\n"
- " !\n"
- " ndf_any_w2 = f1_proxy%vspace%get_ndf()\n"
- " undf_any_w2 = f1_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Look-up quadrature variables\n"
- " !\n"
- " qr_proxy = qr%get_quadrature_proxy()\n"
- " np_xy_qr = qr_proxy%np_xy\n"
- " np_z_qr = qr_proxy%np_z\n"
- " weights_xy_qr => qr_proxy%weights_xy\n"
- " weights_z_qr => qr_proxy%weights_z\n"
- " !\n"
- " ! Allocate basis/diff-basis arrays\n"
- " !\n"
- " dim_any_w2 = f1_proxy%vspace%get_dim_space()\n"
- " diff_dim_any_w2 = f1_proxy%vspace%"
+ " ! Initialise number of DoFs for any_w2\n"
+ " ndf_any_w2 = f1_proxy%vspace%get_ndf()\n"
+ " undf_any_w2 = f1_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Look-up quadrature variables\n"
+ " qr_proxy = qr%get_quadrature_proxy()\n"
+ " np_xy_qr = qr_proxy%np_xy\n"
+ " np_z_qr = qr_proxy%np_z\n"
+ " weights_xy_qr => qr_proxy%weights_xy\n"
+ " weights_z_qr => qr_proxy%weights_z\n"
+ "\n"
+ " ! Allocate basis/diff-basis arrays\n"
+ " dim_any_w2 = f1_proxy%vspace%get_dim_space()\n"
+ " diff_dim_any_w2 = f1_proxy%vspace%"
"get_dim_space_diff()\n"
- " ALLOCATE (basis_any_w2_qr(dim_any_w2, ndf_any_w2, "
- "np_xy_qr, np_z_qr))\n"
- " ALLOCATE (diff_basis_any_w2_qr(diff_dim_any_w2, "
- "ndf_any_w2, np_xy_qr, np_z_qr))\n"
- " !\n"
- " ! Compute basis/diff-basis arrays\n"
- " !\n"
- " CALL qr%compute_function(BASIS, f1_proxy%vspace, "
+ " ALLOCATE(basis_any_w2_qr(dim_any_w2,ndf_any_w2,"
+ "np_xy_qr,np_z_qr))\n"
+ " ALLOCATE(diff_basis_any_w2_qr(diff_dim_any_w2,"
+ "ndf_any_w2,np_xy_qr,np_z_qr))\n"
+ "\n"
+ " ! Compute basis/diff-basis arrays\n"
+ " call qr%compute_function(BASIS, f1_proxy%vspace, "
"dim_any_w2, ndf_any_w2, basis_any_w2_qr)\n"
- " CALL qr%compute_function(DIFF_BASIS, f1_proxy%vspace, "
+ " call qr%compute_function(DIFF_BASIS, f1_proxy%vspace, "
"diff_dim_any_w2, ndf_any_w2, diff_basis_any_w2_qr)")
assert output in generated_code
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_qr_plus_eval(tmpdir):
@@ -685,135 +710,162 @@ def test_qr_plus_eval(tmpdir):
_, invoke_info = parse(os.path.join(BASE_PATH, "6.2_qr_eval_invoke.f90"),
api=API)
psy = PSyFactory(API, distributed_memory=False).create(invoke_info)
- gen_code = str(psy.gen)
-
- assert LFRicBuild(tmpdir).code_compiles(psy)
+ code = str(psy.gen)
+
+ assert "use constants_mod\n" in code
+ assert "use field_mod, only : field_proxy_type, field_type" in code
+
+ assert "subroutine invoke_0(f0, f1, f2, m1, a, m2, istp, qr)" in code
+ assert "use testkern_qr_mod, only : testkern_qr_code" in code
+ assert "use testkern_eval_mod, only : testkern_eval_code" in code
+ assert ("use quadrature_xyoz_mod, only : quadrature_xyoz_proxy_type, "
+ "quadrature_xyoz_type") in code
+ assert "use function_space_mod, only : BASIS, DIFF_BASIS" in code
+ assert "real(kind=r_def), intent(in) :: a" in code
+ assert "integer(kind=i_def), intent(in) :: istp" in code
+ assert "type(field_type), intent(in) :: f0" in code
+ assert "type(field_type), intent(in) :: f1" in code
+ assert "type(field_type), intent(in) :: f2" in code
+ assert "type(field_type), intent(in) :: m1" in code
+ assert "type(field_type), intent(in) :: m2" in code
+ assert "type(quadrature_xyoz_type), intent(in) :: qr" in code
+ assert "integer(kind=i_def) :: cell" in code
+ assert "integer(kind=i_def) :: loop0_start" in code
+ assert "integer(kind=i_def) :: loop0_stop" in code
+ assert "integer(kind=i_def) :: loop1_start" in code
+ assert "integer(kind=i_def) :: loop1_stop" in code
+ assert "integer(kind=i_def) :: df_nodal" in code
+ assert "integer(kind=i_def) :: df_w0" in code
+ assert "integer(kind=i_def) :: df_w1" in code
+ assert "real(kind=r_def), allocatable :: basis_w0_on_w0(:,:,:)" in code
+ assert ("real(kind=r_def), allocatable :: diff_basis_w1_on_w0(:,:,:)"
+ in code)
+ assert "real(kind=r_def), allocatable :: basis_w1_qr(:,:,:,:)" in code
+ assert "real(kind=r_def), allocatable :: basis_w3_qr(:,:,:,:)" in code
+ assert ("real(kind=r_def), allocatable :: diff_basis_w2_qr(:,:,:,:)"
+ in code)
+ assert ("real(kind=r_def), allocatable :: diff_basis_w3_qr(:,:,:,:)"
+ in code)
+ assert "integer(kind=i_def) :: dim_w0" in code
+ assert "integer(kind=i_def) :: diff_dim_w1" in code
+ assert "integer(kind=i_def) :: dim_w1" in code
+ assert "integer(kind=i_def) :: diff_dim_w2" in code
+ assert "integer(kind=i_def) :: dim_w3" in code
+ assert "integer(kind=i_def) :: diff_dim_w3" in code
+ assert "real(kind=r_def), pointer :: nodes_w0(:,:) => null()" in code
+ assert ("real(kind=r_def), pointer :: weights_xy_qr(:) => null()"
+ in code)
+ assert "real(kind=r_def), pointer :: weights_z_qr(:) => null()" in code
+ assert "integer(kind=i_def) :: np_xy_qr" in code
+ assert "integer(kind=i_def) :: np_z_qr" in code
+ assert "integer(kind=i_def) :: nlayers_f0" in code
+ assert "integer(kind=i_def) :: nlayers_f1" in code
+ assert ("real(kind=r_def), pointer, dimension(:) :: m2_data => null()"
+ in code)
+ assert ("real(kind=r_def), pointer, dimension(:) :: m1_data => null()"
+ in code)
+ assert ("real(kind=r_def), pointer, dimension(:) :: f2_data => null()"
+ in code)
+ assert ("real(kind=r_def), pointer, dimension(:) :: f1_data => null()"
+ in code)
+ assert ("real(kind=r_def), pointer, dimension(:) :: f0_data => null()"
+ in code)
+
+ assert "type(field_proxy_type) :: f0_proxy" in code
+ assert "type(field_proxy_type) :: f1_proxy" in code
+ assert "type(field_proxy_type) :: f2_proxy" in code
+ assert "type(field_proxy_type) :: m1_proxy" in code
+ assert "type(field_proxy_type) :: m2_proxy" in code
+ assert "type(quadrature_xyoz_proxy_type) :: qr_proxy" in code
+ assert "integer(kind=i_def), pointer :: map_w0(:,:) => null()" in code
+ assert "integer(kind=i_def), pointer :: map_w1(:,:) => null()" in code
+ assert "integer(kind=i_def), pointer :: map_w2(:,:) => null()" in code
+ assert "integer(kind=i_def), pointer :: map_w3(:,:) => null()" in code
+ assert "integer(kind=i_def) :: ndf_w0" in code
+ assert "integer(kind=i_def) :: undf_w0" in code
+ assert "integer(kind=i_def) :: ndf_w1" in code
+ assert "integer(kind=i_def) :: undf_w1" in code
+ assert "integer(kind=i_def) :: ndf_w2" in code
+ assert "integer(kind=i_def) :: undf_w2" in code
+ assert "integer(kind=i_def) :: ndf_w3" in code
+ assert "integer(kind=i_def) :: undf_w3" in code
- expected_module_declns = (
- " USE constants_mod, ONLY: r_def, i_def\n"
- " USE field_mod, ONLY: field_type, field_proxy_type\n")
- assert expected_module_declns in gen_code
-
- output_decls = (
- " SUBROUTINE invoke_0(f0, f1, f2, m1, a, m2, istp, qr)\n"
- " USE testkern_qr_mod, ONLY: testkern_qr_code\n"
- " USE testkern_eval_mod, ONLY: testkern_eval_code\n"
- " USE quadrature_xyoz_mod, ONLY: quadrature_xyoz_type, "
- "quadrature_xyoz_proxy_type\n"
- " USE function_space_mod, ONLY: BASIS, DIFF_BASIS\n"
- " REAL(KIND=r_def), intent(in) :: a\n"
- " INTEGER(KIND=i_def), intent(in) :: istp\n"
- " TYPE(field_type), intent(in) :: f0, f1, f2, m1, m2\n"
- " TYPE(quadrature_xyoz_type), intent(in) :: qr\n"
- " INTEGER(KIND=i_def) cell\n"
- " INTEGER(KIND=i_def) loop1_start, loop1_stop\n"
- " INTEGER(KIND=i_def) loop0_start, loop0_stop\n"
- " INTEGER(KIND=i_def) df_nodal, df_w0, df_w1\n"
- " REAL(KIND=r_def), allocatable :: basis_w0_on_w0(:,:,:), "
- "diff_basis_w1_on_w0(:,:,:), basis_w1_qr(:,:,:,:), "
- "diff_basis_w2_qr(:,:,:,:), basis_w3_qr(:,:,:,:), "
- "diff_basis_w3_qr(:,:,:,:)\n"
- " INTEGER(KIND=i_def) dim_w0, diff_dim_w1, dim_w1, "
- "diff_dim_w2, dim_w3, diff_dim_w3\n"
- " REAL(KIND=r_def), pointer :: nodes_w0(:,:) => null()\n"
- " REAL(KIND=r_def), pointer :: weights_xy_qr(:) => null(), "
- "weights_z_qr(:) => null()\n"
- " INTEGER(KIND=i_def) np_xy_qr, np_z_qr\n"
- " INTEGER(KIND=i_def) nlayers_f0, nlayers_f1\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m1_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f1_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f0_data => null()\n"
-
- " TYPE(field_proxy_type) f0_proxy, f1_proxy, f2_proxy, "
- "m1_proxy, m2_proxy\n"
- " TYPE(quadrature_xyoz_proxy_type) qr_proxy\n"
- " INTEGER(KIND=i_def), pointer :: map_w0(:,:) => null(), "
- "map_w1(:,:) => null(), map_w2(:,:) => null(), map_w3(:,:) => "
- "null()\n"
- " INTEGER(KIND=i_def) ndf_w0, undf_w0, ndf_w1, undf_w1, "
- "ndf_w2, undf_w2, ndf_w3, undf_w3\n")
- assert output_decls in gen_code
output_setup = (
- " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
- " undf_w3 = m2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Look-up quadrature variables\n"
- " !\n"
- " qr_proxy = qr%get_quadrature_proxy()\n"
- " np_xy_qr = qr_proxy%np_xy\n"
- " np_z_qr = qr_proxy%np_z\n"
- " weights_xy_qr => qr_proxy%weights_xy\n"
- " weights_z_qr => qr_proxy%weights_z\n"
- " !\n"
- " ! Initialise evaluator-related quantities for the target "
+ " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
+ " undf_w3 = m2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Look-up quadrature variables\n"
+ " qr_proxy = qr%get_quadrature_proxy()\n"
+ " np_xy_qr = qr_proxy%np_xy\n"
+ " np_z_qr = qr_proxy%np_z\n"
+ " weights_xy_qr => qr_proxy%weights_xy\n"
+ " weights_z_qr => qr_proxy%weights_z\n"
+ "\n"
+ " ! Initialise evaluator-related quantities for the target "
"function spaces\n"
- " !\n"
- " nodes_w0 => f0_proxy%vspace%get_nodes()\n"
- " !\n"
- " ! Allocate basis/diff-basis arrays\n"
- " !\n"
- " dim_w0 = f0_proxy%vspace%get_dim_space()\n"
- " diff_dim_w1 = f1_proxy%vspace%get_dim_space_diff()\n"
- " dim_w1 = f1_proxy%vspace%get_dim_space()\n"
- " diff_dim_w2 = f2_proxy%vspace%get_dim_space_diff()\n"
- " dim_w3 = m2_proxy%vspace%get_dim_space()\n"
- " diff_dim_w3 = m2_proxy%vspace%get_dim_space_diff()\n"
- " ALLOCATE (basis_w0_on_w0(dim_w0, ndf_w0, ndf_w0))\n"
- " ALLOCATE (diff_basis_w1_on_w0(diff_dim_w1, ndf_w1, "
+ " nodes_w0 => f0_proxy%vspace%get_nodes()\n"
+ "\n"
+ " ! Allocate basis/diff-basis arrays\n"
+ " dim_w0 = f0_proxy%vspace%get_dim_space()\n"
+ " diff_dim_w1 = f1_proxy%vspace%get_dim_space_diff()\n"
+ " dim_w1 = f1_proxy%vspace%get_dim_space()\n"
+ " diff_dim_w2 = f2_proxy%vspace%get_dim_space_diff()\n"
+ " dim_w3 = m2_proxy%vspace%get_dim_space()\n"
+ " diff_dim_w3 = m2_proxy%vspace%get_dim_space_diff()\n"
+ " ALLOCATE(basis_w0_on_w0(dim_w0,ndf_w0,ndf_w0))\n"
+ " ALLOCATE(diff_basis_w1_on_w0(diff_dim_w1,ndf_w1,"
"ndf_w0))\n"
- " ALLOCATE (basis_w1_qr(dim_w1, ndf_w1, np_xy_qr, np_z_qr))\n"
- " ALLOCATE (diff_basis_w2_qr(diff_dim_w2, ndf_w2, np_xy_qr, "
+ " ALLOCATE(basis_w1_qr(dim_w1,ndf_w1,np_xy_qr,np_z_qr))\n"
+ " ALLOCATE(diff_basis_w2_qr(diff_dim_w2,ndf_w2,np_xy_qr,"
"np_z_qr))\n"
- " ALLOCATE (basis_w3_qr(dim_w3, ndf_w3, np_xy_qr, np_z_qr))\n"
- " ALLOCATE (diff_basis_w3_qr(diff_dim_w3, ndf_w3, np_xy_qr, "
+ " ALLOCATE(basis_w3_qr(dim_w3,ndf_w3,np_xy_qr,np_z_qr))\n"
+ " ALLOCATE(diff_basis_w3_qr(diff_dim_w3,ndf_w3,np_xy_qr,"
"np_z_qr))\n"
- " !\n"
- " ! Compute basis/diff-basis arrays\n"
- " !\n"
- " DO df_nodal=1,ndf_w0\n"
- " DO df_w0=1,ndf_w0\n"
- " basis_w0_on_w0(:,df_w0,df_nodal) = f0_proxy%vspace%"
- "call_function(BASIS,df_w0,nodes_w0(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
- " DO df_nodal=1,ndf_w0\n"
- " DO df_w1=1,ndf_w1\n"
- " diff_basis_w1_on_w0(:,df_w1,df_nodal) = f1_proxy%vspace%"
- "call_function(DIFF_BASIS,df_w1,nodes_w0(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
- " CALL qr%compute_function(BASIS, f1_proxy%vspace, "
+ "\n"
+ " ! Compute basis/diff-basis arrays\n"
+ " do df_nodal = 1, ndf_w0, 1\n"
+ " do df_w0 = 1, ndf_w0, 1\n"
+ " basis_w0_on_w0(:,df_w0,df_nodal) = f0_proxy%vspace%"
+ "call_function(BASIS, df_w0, nodes_w0(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
+ " do df_nodal = 1, ndf_w0, 1\n"
+ " do df_w1 = 1, ndf_w1, 1\n"
+ " diff_basis_w1_on_w0(:,df_w1,df_nodal) = f1_proxy%vspace%"
+ "call_function(DIFF_BASIS, df_w1, nodes_w0(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
+ " call qr%compute_function(BASIS, f1_proxy%vspace, "
"dim_w1, ndf_w1, basis_w1_qr)\n"
- " CALL qr%compute_function(DIFF_BASIS, f2_proxy%vspace, "
+ " call qr%compute_function(DIFF_BASIS, f2_proxy%vspace, "
"diff_dim_w2, ndf_w2, diff_basis_w2_qr)\n"
- " CALL qr%compute_function(BASIS, m2_proxy%vspace, "
+ " call qr%compute_function(BASIS, m2_proxy%vspace, "
"dim_w3, ndf_w3, basis_w3_qr)\n"
- " CALL qr%compute_function(DIFF_BASIS, m2_proxy%vspace, "
+ " call qr%compute_function(DIFF_BASIS, m2_proxy%vspace, "
"diff_dim_w3, ndf_w3, diff_basis_w3_qr)\n")
- assert output_setup in gen_code
- assert (" loop0_stop = f0_proxy%vspace%get_ncell()\n"
- " loop1_start = 1\n"
- " loop1_stop = f1_proxy%vspace%get_ncell()\n" in gen_code)
+ assert output_setup in code
+ assert (" loop0_stop = f0_proxy%vspace%get_ncell()\n"
+ " loop1_start = 1\n"
+ " loop1_stop = f1_proxy%vspace%get_ncell()\n" in code)
output_kern_call = (
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_eval_code(nlayers_f0, f0_data, "
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_eval_code(nlayers_f0, f0_data, "
"f1_data, ndf_w0, undf_w0, map_w0(:,cell), basis_w0_on_w0, "
"ndf_w1, undf_w1, map_w1(:,cell), diff_basis_w1_on_w0)\n"
- " END DO\n"
- " DO cell = loop1_start, loop1_stop, 1\n"
- " CALL testkern_qr_code(nlayers_f1, f1_data, f2_data, "
+ " enddo\n"
+ " do cell = loop1_start, loop1_stop, 1\n"
+ " call testkern_qr_code(nlayers_f1, f1_data, f2_data, "
"m1_data, a, m2_data, istp, ndf_w1, undf_w1, "
"map_w1(:,cell), basis_w1_qr, ndf_w2, undf_w2, map_w2(:,cell), "
"diff_basis_w2_qr, ndf_w3, undf_w3, map_w3(:,cell), basis_w3_qr, "
"diff_basis_w3_qr, np_xy_qr, np_z_qr, weights_xy_qr, weights_z_qr)\n"
- " END DO\n")
- assert output_kern_call in gen_code
+ " enddo\n")
+ assert output_kern_call in code
output_dealloc = (
- " DEALLOCATE (basis_w0_on_w0, basis_w1_qr, basis_w3_qr, "
+ " DEALLOCATE(basis_w0_on_w0, basis_w1_qr, basis_w3_qr, "
"diff_basis_w1_on_w0, diff_basis_w2_qr, diff_basis_w3_qr)\n")
- assert output_dealloc in gen_code
+ assert output_dealloc in code
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_two_eval_same_space(tmpdir):
@@ -823,62 +875,56 @@ def test_two_eval_same_space(tmpdir):
_, invoke_info = parse(os.path.join(BASE_PATH, "6.3_2eval_invoke.f90"),
api=API)
psy = PSyFactory(API, distributed_memory=False).create(invoke_info)
- gen_code = str(psy.gen)
-
- assert LFRicBuild(tmpdir).code_compiles(psy)
+ code = str(psy.gen)
output_init = (
- " !\n"
- " ! Initialise evaluator-related quantities for the target "
+ "\n"
+ " ! Initialise evaluator-related quantities for the target "
"function spaces\n"
- " !\n"
- " nodes_w0 => f0_proxy%vspace%get_nodes()\n"
- " !\n"
- " ! Allocate basis/diff-basis arrays\n"
- " !\n"
- " dim_w0 = f0_proxy%vspace%get_dim_space()\n"
- " diff_dim_w1 = f1_proxy%vspace%get_dim_space_diff()\n"
- " ALLOCATE (basis_w0_on_w0(dim_w0, ndf_w0, ndf_w0))\n"
- " ALLOCATE (diff_basis_w1_on_w0(diff_dim_w1, ndf_w1, ndf_w0))\n")
- assert output_init in gen_code
+ " nodes_w0 => f0_proxy%vspace%get_nodes()\n"
+ "\n"
+ " ! Allocate basis/diff-basis arrays\n"
+ " dim_w0 = f0_proxy%vspace%get_dim_space()\n"
+ " diff_dim_w1 = f1_proxy%vspace%get_dim_space_diff()\n"
+ " ALLOCATE(basis_w0_on_w0(dim_w0,ndf_w0,ndf_w0))\n"
+ " ALLOCATE(diff_basis_w1_on_w0(diff_dim_w1,ndf_w1,ndf_w0))\n")
+ assert output_init in code
output_code = (
- " !\n"
- " ! Compute basis/diff-basis arrays\n"
- " !\n"
- " DO df_nodal=1,ndf_w0\n"
- " DO df_w0=1,ndf_w0\n"
- " basis_w0_on_w0(:,df_w0,df_nodal) = f0_proxy%vspace%"
- "call_function(BASIS,df_w0,nodes_w0(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
- " DO df_nodal=1,ndf_w0\n"
- " DO df_w1=1,ndf_w1\n"
- " diff_basis_w1_on_w0(:,df_w1,df_nodal) = f1_proxy%vspace%"
- "call_function(DIFF_BASIS,df_w1,nodes_w0(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
- " !\n"
- " ! Set-up all of the loop bounds\n"
- " !\n"
- " loop0_start = 1\n"
- " loop0_stop = f0_proxy%vspace%get_ncell()\n"
- " loop1_start = 1\n"
- " loop1_stop = f2_proxy%vspace%get_ncell()\n"
- " !\n"
- " ! Call our kernels\n"
- " !\n"
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_eval_code(nlayers_f0, f0_data, "
+ "\n"
+ " ! Compute basis/diff-basis arrays\n"
+ " do df_nodal = 1, ndf_w0, 1\n"
+ " do df_w0 = 1, ndf_w0, 1\n"
+ " basis_w0_on_w0(:,df_w0,df_nodal) = f0_proxy%vspace%"
+ "call_function(BASIS, df_w0, nodes_w0(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
+ " do df_nodal = 1, ndf_w0, 1\n"
+ " do df_w1 = 1, ndf_w1, 1\n"
+ " diff_basis_w1_on_w0(:,df_w1,df_nodal) = f1_proxy%vspace%"
+ "call_function(DIFF_BASIS, df_w1, nodes_w0(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
+ "\n"
+ " ! Set-up all of the loop bounds\n"
+ " loop0_start = 1\n"
+ " loop0_stop = f0_proxy%vspace%get_ncell()\n"
+ " loop1_start = 1\n"
+ " loop1_stop = f2_proxy%vspace%get_ncell()\n"
+ "\n"
+ " ! Call kernels\n"
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_eval_code(nlayers_f0, f0_data, "
"f1_data, ndf_w0, undf_w0, map_w0(:,cell), basis_w0_on_w0, "
"ndf_w1, undf_w1, map_w1(:,cell), diff_basis_w1_on_w0)\n"
- " END DO\n"
- " DO cell = loop1_start, loop1_stop, 1\n"
- " CALL testkern_eval_code(nlayers_f2, f2_data, "
+ " enddo\n"
+ " do cell = loop1_start, loop1_stop, 1\n"
+ " call testkern_eval_code(nlayers_f2, f2_data, "
"f3_data, ndf_w0, undf_w0, map_w0(:,cell), basis_w0_on_w0, "
"ndf_w1, undf_w1, map_w1(:,cell), diff_basis_w1_on_w0)\n"
- " END DO\n"
+ " enddo\n"
)
- assert output_code in gen_code
+ assert output_code in code
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_two_eval_diff_space(tmpdir):
@@ -888,9 +934,7 @@ def test_two_eval_diff_space(tmpdir):
_, invoke_info = parse(os.path.join(BASE_PATH, "6.4_2eval_op_invoke.f90"),
api=API)
psy = PSyFactory(API, distributed_memory=False).create(invoke_info)
- gen_code = str(psy.gen)
-
- assert LFRicBuild(tmpdir).code_compiles(psy)
+ code = str(psy.gen)
# The first kernel in the invoke (testkern_eval_type) requires basis and
# diff basis functions for the spaces of the first and second field
@@ -902,71 +946,67 @@ def test_two_eval_diff_space(tmpdir):
# arg we require basis functions on the nodal points of the 'to' space
# of that operator (W0 in this case).
expected_init = (
- " ! Initialise evaluator-related quantities for the target "
+ " ! Initialise evaluator-related quantities for the target "
"function spaces\n"
- " !\n"
- " nodes_w0 => f0_proxy%vspace%get_nodes()\n"
- " !\n"
- " ! Allocate basis/diff-basis arrays\n"
- " !\n"
- " dim_w0 = f0_proxy%vspace%get_dim_space()\n"
- " diff_dim_w1 = f1_proxy%vspace%get_dim_space_diff()\n"
- " dim_w2 = op1_proxy%fs_from%get_dim_space()\n"
- " diff_dim_w3 = f2_proxy%vspace%get_dim_space_diff()\n"
- " ALLOCATE (basis_w0_on_w0(dim_w0, ndf_w0, ndf_w0))\n"
- " ALLOCATE (diff_basis_w1_on_w0(diff_dim_w1, ndf_w1, ndf_w0))\n"
- " ALLOCATE (basis_w2_on_w0(dim_w2, ndf_w2, ndf_w0))\n"
- " ALLOCATE (diff_basis_w3_on_w0(diff_dim_w3, ndf_w3, ndf_w0))\n")
- assert expected_init in gen_code
+ " nodes_w0 => f0_proxy%vspace%get_nodes()\n"
+ "\n"
+ " ! Allocate basis/diff-basis arrays\n"
+ " dim_w0 = f0_proxy%vspace%get_dim_space()\n"
+ " diff_dim_w1 = f1_proxy%vspace%get_dim_space_diff()\n"
+ " dim_w2 = op1_proxy%fs_from%get_dim_space()\n"
+ " diff_dim_w3 = f2_proxy%vspace%get_dim_space_diff()\n"
+ " ALLOCATE(basis_w0_on_w0(dim_w0,ndf_w0,ndf_w0))\n"
+ " ALLOCATE(diff_basis_w1_on_w0(diff_dim_w1,ndf_w1,ndf_w0))\n"
+ " ALLOCATE(basis_w2_on_w0(dim_w2,ndf_w2,ndf_w0))\n"
+ " ALLOCATE(diff_basis_w3_on_w0(diff_dim_w3,ndf_w3,ndf_w0))\n")
+ assert expected_init in code
expected_code = (
- " ! Compute basis/diff-basis arrays\n"
- " !\n"
- " DO df_nodal=1,ndf_w0\n"
- " DO df_w0=1,ndf_w0\n"
- " basis_w0_on_w0(:,df_w0,df_nodal) = f0_proxy%vspace%"
- "call_function(BASIS,df_w0,nodes_w0(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
- " DO df_nodal=1,ndf_w0\n"
- " DO df_w1=1,ndf_w1\n"
- " diff_basis_w1_on_w0(:,df_w1,df_nodal) = f1_proxy%vspace%"
- "call_function(DIFF_BASIS,df_w1,nodes_w0(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
- " DO df_nodal=1,ndf_w0\n"
- " DO df_w2=1,ndf_w2\n"
- " basis_w2_on_w0(:,df_w2,df_nodal) = op1_proxy%fs_from%"
- "call_function(BASIS,df_w2,nodes_w0(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
- " DO df_nodal=1,ndf_w0\n"
- " DO df_w3=1,ndf_w3\n"
- " diff_basis_w3_on_w0(:,df_w3,df_nodal) = f2_proxy%vspace%"
- "call_function(DIFF_BASIS,df_w3,nodes_w0(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
- " !\n"
- " ! Set-up all of the loop bounds\n"
- " !\n"
- " loop0_start = 1\n"
- " loop0_stop = f0_proxy%vspace%get_ncell()\n"
- " loop1_start = 1\n"
- " loop1_stop = op1_proxy%fs_from%get_ncell()\n"
- " !\n"
- " ! Call our kernels\n"
- " !\n"
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_eval_code(nlayers_f0, f0_data, "
+ " ! Compute basis/diff-basis arrays\n"
+ " do df_nodal = 1, ndf_w0, 1\n"
+ " do df_w0 = 1, ndf_w0, 1\n"
+ " basis_w0_on_w0(:,df_w0,df_nodal) = f0_proxy%vspace%"
+ "call_function(BASIS, df_w0, nodes_w0(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
+ " do df_nodal = 1, ndf_w0, 1\n"
+ " do df_w1 = 1, ndf_w1, 1\n"
+ " diff_basis_w1_on_w0(:,df_w1,df_nodal) = f1_proxy%vspace%"
+ "call_function(DIFF_BASIS, df_w1, nodes_w0(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
+ " do df_nodal = 1, ndf_w0, 1\n"
+ " do df_w2 = 1, ndf_w2, 1\n"
+ " basis_w2_on_w0(:,df_w2,df_nodal) = op1_proxy%fs_from%"
+ "call_function(BASIS, df_w2, nodes_w0(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
+ " do df_nodal = 1, ndf_w0, 1\n"
+ " do df_w3 = 1, ndf_w3, 1\n"
+ " diff_basis_w3_on_w0(:,df_w3,df_nodal) = f2_proxy%vspace%"
+ "call_function(DIFF_BASIS, df_w3, nodes_w0(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
+ "\n"
+ " ! Set-up all of the loop bounds\n"
+ " loop0_start = 1\n"
+ " loop0_stop = f0_proxy%vspace%get_ncell()\n"
+ " loop1_start = 1\n"
+ " loop1_stop = op1_proxy%fs_from%get_ncell()\n"
+ "\n"
+ " ! Call kernels\n"
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_eval_code(nlayers_f0, f0_data, "
"f1_data, ndf_w0, undf_w0, map_w0(:,cell), basis_w0_on_w0, "
"ndf_w1, undf_w1, map_w1(:,cell), diff_basis_w1_on_w0)\n"
- " END DO\n"
- " DO cell = loop1_start, loop1_stop, 1\n"
- " CALL testkern_eval_op_code(cell, nlayers_op1, "
+ " enddo\n"
+ " do cell = loop1_start, loop1_stop, 1\n"
+ " call testkern_eval_op_code(cell, nlayers_op1, "
"op1_proxy%ncell_3d, op1_local_stencil, f2_data, ndf_w0, ndf_w2, "
"basis_w2_on_w0, ndf_w3, undf_w3, map_w3(:,cell), "
"diff_basis_w3_on_w0)\n"
- " END DO\n")
- assert expected_code in gen_code
+ " enddo\n")
+ assert expected_code in code
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_two_eval_same_var_same_space(tmpdir):
@@ -977,30 +1017,30 @@ def test_two_eval_same_var_same_space(tmpdir):
"6.7_2eval_same_var_invoke.f90"),
api=API)
psy = PSyFactory(API, distributed_memory=False).create(invoke_info)
- gen_code = str(psy.gen)
-
- assert LFRicBuild(tmpdir).code_compiles(psy)
+ code = str(psy.gen)
# We should only get one set of basis and diff-basis functions in the
# generated code
- assert gen_code.count(
+ assert code.count(
"ndf_adspc1_f0 = f0_proxy%vspace%get_ndf()") == 1
- assert gen_code.count(
- " DO df_nodal=1,ndf_adspc1_f0\n"
- " DO df_w0=1,ndf_w0\n"
- " basis_w0_on_adspc1_f0(:,df_w0,df_nodal) = f1_proxy%vspace"
- "%call_function(BASIS,df_w0,nodes_adspc1_f0(:,df_nodal))\n"
- " END DO\n"
- " END DO\n") == 1
- assert gen_code.count(
- " DO df_nodal=1,ndf_adspc1_f0\n"
- " DO df_w1=1,ndf_w1\n"
- " diff_basis_w1_on_adspc1_f0(:,df_w1,df_nodal) = f2_proxy"
- "%vspace%call_function(DIFF_BASIS,df_w1,nodes_adspc1_f0(:,df_nodal))\n"
- " END DO\n"
- " END DO\n") == 1
- assert gen_code.count(
- "DEALLOCATE (basis_w0_on_adspc1_f0, diff_basis_w1_on_adspc1_f0)") == 1
+ assert code.count(
+ " do df_nodal = 1, ndf_adspc1_f0, 1\n"
+ " do df_w0 = 1, ndf_w0, 1\n"
+ " basis_w0_on_adspc1_f0(:,df_w0,df_nodal) = f1_proxy%vspace"
+ "%call_function(BASIS, df_w0, nodes_adspc1_f0(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n") == 1
+ assert code.count(
+ " do df_nodal = 1, ndf_adspc1_f0, 1\n"
+ " do df_w1 = 1, ndf_w1, 1\n"
+ " diff_basis_w1_on_adspc1_f0(:,df_w1,df_nodal) = f2_proxy"
+ "%vspace%call_function(DIFF_BASIS, df_w1, nodes_adspc1_f0(:,"
+ "df_nodal))\n"
+ " enddo\n"
+ " enddo\n") == 1
+ assert code.count(
+ "DEALLOCATE(basis_w0_on_adspc1_f0, diff_basis_w1_on_adspc1_f0)") == 1
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_two_eval_op_to_space(tmpdir):
@@ -1013,7 +1053,7 @@ def test_two_eval_op_to_space(tmpdir):
"6.5_2eval_op_to_invoke.f90"),
api=API)
psy = PSyFactory(API, distributed_memory=False).create(invoke_info)
- gen_code = str(psy.gen)
+ code = str(psy.gen)
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -1021,88 +1061,86 @@ def test_two_eval_op_to_space(tmpdir):
# testkern_eval requires basis fns on W0 and eval_op_to requires basis
# fns on W2 which is the 'to' space of the operator arg
init_code = (
- " ndf_w2 = op1_proxy%fs_to%get_ndf()\n"
- " !\n"
- " ! Initialise number of DoFs for w3\n"
- " !\n"
- " ndf_w3 = f2_proxy%vspace%get_ndf()\n"
- " undf_w3 = f2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise evaluator-related quantities for the target"
+ " ndf_w2 = op1_proxy%fs_to%get_ndf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w3\n"
+ " ndf_w3 = f2_proxy%vspace%get_ndf()\n"
+ " undf_w3 = f2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise evaluator-related quantities for the target"
" function spaces\n"
- " !\n"
- " nodes_w0 => f0_proxy%vspace%get_nodes()\n"
- " nodes_w3 => f2_proxy%vspace%get_nodes()\n"
+ " nodes_w0 => f0_proxy%vspace%get_nodes()\n"
+ " nodes_w3 => f2_proxy%vspace%get_nodes()\n"
)
- assert init_code in gen_code
+ assert init_code in code
alloc_code = (
- " dim_w0 = f0_proxy%vspace%get_dim_space()\n"
- " diff_dim_w1 = f1_proxy%vspace%get_dim_space_diff()\n"
- " dim_w2 = op1_proxy%fs_to%get_dim_space()\n"
- " diff_dim_w2 = op1_proxy%fs_to%get_dim_space_diff()\n"
- " diff_dim_w3 = f2_proxy%vspace%get_dim_space_diff()\n"
- " ALLOCATE (basis_w0_on_w0(dim_w0, ndf_w0, ndf_w0))\n"
- " ALLOCATE (diff_basis_w1_on_w0(diff_dim_w1, ndf_w1, "
+ " dim_w0 = f0_proxy%vspace%get_dim_space()\n"
+ " diff_dim_w1 = f1_proxy%vspace%get_dim_space_diff()\n"
+ " dim_w2 = op1_proxy%fs_to%get_dim_space()\n"
+ " diff_dim_w2 = op1_proxy%fs_to%get_dim_space_diff()\n"
+ " diff_dim_w3 = f2_proxy%vspace%get_dim_space_diff()\n"
+ " ALLOCATE(basis_w0_on_w0(dim_w0,ndf_w0,ndf_w0))\n"
+ " ALLOCATE(diff_basis_w1_on_w0(diff_dim_w1,ndf_w1,"
"ndf_w0))\n"
- " ALLOCATE (basis_w2_on_w3(dim_w2, ndf_w2, ndf_w3))\n"
- " ALLOCATE (diff_basis_w2_on_w3(diff_dim_w2, ndf_w2, "
+ " ALLOCATE(basis_w2_on_w3(dim_w2,ndf_w2,ndf_w3))\n"
+ " ALLOCATE(diff_basis_w2_on_w3(diff_dim_w2,ndf_w2,"
"ndf_w3))\n"
- " ALLOCATE (diff_basis_w3_on_w3(diff_dim_w3, ndf_w3, "
+ " ALLOCATE(diff_basis_w3_on_w3(diff_dim_w3,ndf_w3,"
"ndf_w3))\n"
)
- assert alloc_code in gen_code
+ assert alloc_code in code
# testkern_eval requires diff-basis fns on W1 and testkern_eval_op_to
# requires them on W2 and W3.
basis_comp = (
- " DO df_nodal=1,ndf_w0\n"
- " DO df_w0=1,ndf_w0\n"
- " basis_w0_on_w0(:,df_w0,df_nodal) = f0_proxy%vspace%"
- "call_function(BASIS,df_w0,nodes_w0(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
- " DO df_nodal=1,ndf_w0\n"
- " DO df_w1=1,ndf_w1\n"
- " diff_basis_w1_on_w0(:,df_w1,df_nodal) = f1_proxy%vspace%"
- "call_function(DIFF_BASIS,df_w1,nodes_w0(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
- " DO df_nodal=1,ndf_w3\n"
- " DO df_w2=1,ndf_w2\n"
- " basis_w2_on_w3(:,df_w2,df_nodal) = op1_proxy%fs_to%"
- "call_function(BASIS,df_w2,nodes_w3(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
- " DO df_nodal=1,ndf_w3\n"
- " DO df_w2=1,ndf_w2\n"
- " diff_basis_w2_on_w3(:,df_w2,df_nodal) = op1_proxy%fs_to%"
- "call_function(DIFF_BASIS,df_w2,nodes_w3(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
- " DO df_nodal=1,ndf_w3\n"
- " DO df_w3=1,ndf_w3\n"
- " diff_basis_w3_on_w3(:,df_w3,df_nodal) = f2_proxy%vspace%"
- "call_function(DIFF_BASIS,df_w3,nodes_w3(:,df_nodal))\n"
- " END DO\n"
- " END DO\n")
- assert basis_comp in gen_code
- assert (" loop0_start = 1\n"
- " loop0_stop = f0_proxy%vspace%get_ncell()\n"
- " loop1_start = 1\n"
- " loop1_stop = f2_proxy%vspace%get_ncell()\n" in gen_code)
+ " do df_nodal = 1, ndf_w0, 1\n"
+ " do df_w0 = 1, ndf_w0, 1\n"
+ " basis_w0_on_w0(:,df_w0,df_nodal) = f0_proxy%vspace%"
+ "call_function(BASIS, df_w0, nodes_w0(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
+ " do df_nodal = 1, ndf_w0, 1\n"
+ " do df_w1 = 1, ndf_w1, 1\n"
+ " diff_basis_w1_on_w0(:,df_w1,df_nodal) = f1_proxy%vspace%"
+ "call_function(DIFF_BASIS, df_w1, nodes_w0(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
+ " do df_nodal = 1, ndf_w3, 1\n"
+ " do df_w2 = 1, ndf_w2, 1\n"
+ " basis_w2_on_w3(:,df_w2,df_nodal) = op1_proxy%fs_to%"
+ "call_function(BASIS, df_w2, nodes_w3(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
+ " do df_nodal = 1, ndf_w3, 1\n"
+ " do df_w2 = 1, ndf_w2, 1\n"
+ " diff_basis_w2_on_w3(:,df_w2,df_nodal) = op1_proxy%fs_to%"
+ "call_function(DIFF_BASIS, df_w2, nodes_w3(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
+ " do df_nodal = 1, ndf_w3, 1\n"
+ " do df_w3 = 1, ndf_w3, 1\n"
+ " diff_basis_w3_on_w3(:,df_w3,df_nodal) = f2_proxy%vspace%"
+ "call_function(DIFF_BASIS, df_w3, nodes_w3(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n")
+ assert basis_comp in code
+ assert (" loop0_start = 1\n"
+ " loop0_stop = f0_proxy%vspace%get_ncell()\n"
+ " loop1_start = 1\n"
+ " loop1_stop = f2_proxy%vspace%get_ncell()\n" in code)
kernel_calls = (
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_eval_code(nlayers_f0, f0_data, "
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_eval_code(nlayers_f0, f0_data, "
"f1_data, ndf_w0, undf_w0, map_w0(:,cell), basis_w0_on_w0, "
"ndf_w1, undf_w1, map_w1(:,cell), diff_basis_w1_on_w0)\n"
- " END DO\n"
- " DO cell = loop1_start, loop1_stop, 1\n"
- " CALL testkern_eval_op_to_code(cell, nlayers_op1, "
+ " enddo\n"
+ " do cell = loop1_start, loop1_stop, 1\n"
+ " call testkern_eval_op_to_code(cell, nlayers_op1, "
"op1_proxy%ncell_3d, op1_local_stencil, f2_data, "
"ndf_w2, basis_w2_on_w3, diff_basis_w2_on_w3, ndf_w0, ndf_w3, "
"undf_w3, map_w3(:,cell), diff_basis_w3_on_w3)\n"
- " END DO\n"
+ " enddo\n"
)
- assert kernel_calls in gen_code
+ assert kernel_calls in code
def test_eval_diff_nodal_space(tmpdir):
@@ -1121,96 +1159,94 @@ def test_eval_diff_nodal_space(tmpdir):
"6.6_2eval_diff_nodal_space_invoke.f90"),
api=API)
psy = PSyFactory(API, distributed_memory=False).create(invoke_info)
- gen_code = str(psy.gen)
+ code = str(psy.gen)
assert LFRicBuild(tmpdir).code_compiles(psy)
expected_alloc = (
- " nodes_w3 => f1_proxy%vspace%get_nodes()\n"
- " nodes_w0 => op1_proxy%fs_from%get_nodes()\n"
- " !\n"
- " ! Allocate basis/diff-basis arrays\n"
- " !\n"
- " dim_w2 = op2_proxy%fs_to%get_dim_space()\n"
- " diff_dim_w2 = op2_proxy%fs_to%get_dim_space_diff()\n"
- " diff_dim_w3 = f1_proxy%vspace%get_dim_space_diff()\n"
- " ALLOCATE (basis_w2_on_w3(dim_w2, ndf_w2, ndf_w3))\n"
- " ALLOCATE (diff_basis_w2_on_w3(diff_dim_w2, ndf_w2, ndf_w3))\n"
- " ALLOCATE (diff_basis_w3_on_w3(diff_dim_w3, ndf_w3, ndf_w3))\n"
- " ALLOCATE (basis_w2_on_w0(dim_w2, ndf_w2, ndf_w0))\n"
- " ALLOCATE (diff_basis_w2_on_w0(diff_dim_w2, ndf_w2, ndf_w0))\n"
- " ALLOCATE (diff_basis_w3_on_w0(diff_dim_w3, ndf_w3, ndf_w0))\n"
+ " nodes_w3 => f1_proxy%vspace%get_nodes()\n"
+ " nodes_w0 => op1_proxy%fs_from%get_nodes()\n"
+ "\n"
+ " ! Allocate basis/diff-basis arrays\n"
+ " dim_w2 = op2_proxy%fs_to%get_dim_space()\n"
+ " diff_dim_w2 = op2_proxy%fs_to%get_dim_space_diff()\n"
+ " diff_dim_w3 = f1_proxy%vspace%get_dim_space_diff()\n"
+ " ALLOCATE(basis_w2_on_w3(dim_w2,ndf_w2,ndf_w3))\n"
+ " ALLOCATE(diff_basis_w2_on_w3(diff_dim_w2,ndf_w2,ndf_w3))\n"
+ " ALLOCATE(diff_basis_w3_on_w3(diff_dim_w3,ndf_w3,ndf_w3))\n"
+ " ALLOCATE(basis_w2_on_w0(dim_w2,ndf_w2,ndf_w0))\n"
+ " ALLOCATE(diff_basis_w2_on_w0(diff_dim_w2,ndf_w2,ndf_w0))\n"
+ " ALLOCATE(diff_basis_w3_on_w0(diff_dim_w3,ndf_w3,ndf_w0))\n"
)
- assert expected_alloc in gen_code
+ assert expected_alloc in code
expected_compute = (
- " DO df_nodal=1,ndf_w3\n"
- " DO df_w2=1,ndf_w2\n"
- " basis_w2_on_w3(:,df_w2,df_nodal) = op2_proxy%fs_to%"
- "call_function(BASIS,df_w2,nodes_w3(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
- " DO df_nodal=1,ndf_w3\n"
- " DO df_w2=1,ndf_w2\n"
- " diff_basis_w2_on_w3(:,df_w2,df_nodal) = op2_proxy%fs_to%"
- "call_function(DIFF_BASIS,df_w2,nodes_w3(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
- " DO df_nodal=1,ndf_w3\n"
- " DO df_w3=1,ndf_w3\n"
- " diff_basis_w3_on_w3(:,df_w3,df_nodal) = f1_proxy%vspace%"
- "call_function(DIFF_BASIS,df_w3,nodes_w3(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
- " DO df_nodal=1,ndf_w0\n"
- " DO df_w2=1,ndf_w2\n"
- " basis_w2_on_w0(:,df_w2,df_nodal) = op1_proxy%fs_to%"
- "call_function(BASIS,df_w2,nodes_w0(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
- " DO df_nodal=1,ndf_w0\n"
- " DO df_w2=1,ndf_w2\n"
- " diff_basis_w2_on_w0(:,df_w2,df_nodal) = op1_proxy%fs_to%"
- "call_function(DIFF_BASIS,df_w2,nodes_w0(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
- " DO df_nodal=1,ndf_w0\n"
- " DO df_w3=1,ndf_w3\n"
- " diff_basis_w3_on_w0(:,df_w3,df_nodal) = f0_proxy%vspace%"
- "call_function(DIFF_BASIS,df_w3,nodes_w0(:,df_nodal))\n"
- " END DO\n"
- " END DO\n"
+ " do df_nodal = 1, ndf_w3, 1\n"
+ " do df_w2 = 1, ndf_w2, 1\n"
+ " basis_w2_on_w3(:,df_w2,df_nodal) = op2_proxy%fs_to%"
+ "call_function(BASIS, df_w2, nodes_w3(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
+ " do df_nodal = 1, ndf_w3, 1\n"
+ " do df_w2 = 1, ndf_w2, 1\n"
+ " diff_basis_w2_on_w3(:,df_w2,df_nodal) = op2_proxy%fs_to%"
+ "call_function(DIFF_BASIS, df_w2, nodes_w3(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
+ " do df_nodal = 1, ndf_w3, 1\n"
+ " do df_w3 = 1, ndf_w3, 1\n"
+ " diff_basis_w3_on_w3(:,df_w3,df_nodal) = f1_proxy%vspace%"
+ "call_function(DIFF_BASIS, df_w3, nodes_w3(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
+ " do df_nodal = 1, ndf_w0, 1\n"
+ " do df_w2 = 1, ndf_w2, 1\n"
+ " basis_w2_on_w0(:,df_w2,df_nodal) = op1_proxy%fs_to%"
+ "call_function(BASIS, df_w2, nodes_w0(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
+ " do df_nodal = 1, ndf_w0, 1\n"
+ " do df_w2 = 1, ndf_w2, 1\n"
+ " diff_basis_w2_on_w0(:,df_w2,df_nodal) = op1_proxy%fs_to%"
+ "call_function(DIFF_BASIS, df_w2, nodes_w0(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
+ " do df_nodal = 1, ndf_w0, 1\n"
+ " do df_w3 = 1, ndf_w3, 1\n"
+ " diff_basis_w3_on_w0(:,df_w3,df_nodal) = f0_proxy%vspace%"
+ "call_function(DIFF_BASIS, df_w3, nodes_w0(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n"
)
- assert expected_compute in gen_code
+ assert expected_compute in code
- assert (" loop0_start = 1\n"
- " loop0_stop = f1_proxy%vspace%get_ncell()\n"
- " loop1_start = 1\n"
- " loop1_stop = f2_proxy%vspace%get_ncell()\n" in gen_code)
+ assert (" loop0_start = 1\n"
+ " loop0_stop = f1_proxy%vspace%get_ncell()\n"
+ " loop1_start = 1\n"
+ " loop1_stop = f2_proxy%vspace%get_ncell()\n" in code)
expected_kern_call = (
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_eval_op_to_code(cell, nlayers_op2, "
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_eval_op_to_code(cell, nlayers_op2, "
"op2_proxy%ncell_3d, op2_local_stencil, f1_data, "
"ndf_w2, basis_w2_on_w3, diff_basis_w2_on_w3, ndf_w0, ndf_w3, "
"undf_w3, map_w3(:,cell), diff_basis_w3_on_w3)\n"
- " END DO\n"
- " DO cell = loop1_start, loop1_stop, 1\n"
- " CALL testkern_eval_op_to_w0_code(cell, nlayers_op1, "
+ " enddo\n"
+ " do cell = loop1_start, loop1_stop, 1\n"
+ " call testkern_eval_op_to_w0_code(cell, nlayers_op1, "
"op1_proxy%ncell_3d, op1_local_stencil, f0_data, "
"f2_data, ndf_w2, basis_w2_on_w0, diff_basis_w2_on_w0, "
"ndf_w0, undf_w0, map_w0(:,cell), ndf_w3, undf_w3, map_w3(:,cell), "
"diff_basis_w3_on_w0)\n"
- " END DO\n"
+ " enddo\n"
)
- assert expected_kern_call in gen_code
+ assert expected_kern_call in code
expected_dealloc = (
- " ! Deallocate basis arrays\n"
- " !\n"
- " DEALLOCATE ("
+ " ! Deallocate basis arrays\n"
+ " DEALLOCATE("
"basis_w2_on_w0, basis_w2_on_w3, diff_basis_w2_on_w0, "
"diff_basis_w2_on_w3, diff_basis_w3_on_w0, diff_basis_w3_on_w3)\n"
)
- assert expected_dealloc in gen_code
+ assert expected_dealloc in code
def test_eval_2fs(tmpdir):
@@ -1220,21 +1256,22 @@ def test_eval_2fs(tmpdir):
os.path.join(BASE_PATH,
"6.8_eval_2fs_invoke.f90"), api=API)
psy = PSyFactory(API, distributed_memory=False).create(invoke_info)
- gen_code = str(psy.gen)
-
- assert (" REAL(KIND=r_def), allocatable :: "
- "diff_basis_w1_on_w0(:,:,:), diff_basis_w1_on_w1(:,:,:)\n"
- " INTEGER(KIND=i_def) diff_dim_w1\n" in
- gen_code)
- assert (" diff_dim_w1 = f1_proxy%vspace%get_dim_space_diff()\n"
- " ALLOCATE (diff_basis_w1_on_w0(diff_dim_w1, ndf_w1, "
+ code = str(psy.gen)
+
+ assert ("real(kind=r_def), allocatable :: diff_basis_w1_on_w0(:,:,:)"
+ in code)
+ assert ("real(kind=r_def), allocatable :: diff_basis_w1_on_w1(:,:,:)"
+ in code)
+ assert "integer(kind=i_def) :: diff_dim_w1" in code
+ assert (" diff_dim_w1 = f1_proxy%vspace%get_dim_space_diff()\n"
+ " ALLOCATE(diff_basis_w1_on_w0(diff_dim_w1,ndf_w1,"
"ndf_w0))\n"
- " ALLOCATE (diff_basis_w1_on_w1(diff_dim_w1, ndf_w1, "
- "ndf_w1))\n" in gen_code)
- assert ("CALL testkern_eval_2fs_code(nlayers_f0, f0_data, "
+ " ALLOCATE(diff_basis_w1_on_w1(diff_dim_w1,ndf_w1,"
+ "ndf_w1))\n" in code)
+ assert ("call testkern_eval_2fs_code(nlayers_f0, f0_data, "
"f1_data, ndf_w0, undf_w0, map_w0(:,cell), ndf_w1, undf_w1, "
"map_w1(:,cell), diff_basis_w1_on_w0, diff_basis_w1_on_w1)" in
- gen_code)
+ code)
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -1246,23 +1283,25 @@ def test_2eval_2fs(tmpdir):
os.path.join(BASE_PATH,
"6.9_2eval_2fs_invoke.f90"), api=API)
psy = PSyFactory(API, distributed_memory=False).create(invoke_info)
- gen_code = str(psy.gen)
+ code = str(psy.gen)
- assert ("REAL(KIND=r_def), allocatable :: diff_basis_w1_on_w0(:,:,:), "
- "diff_basis_w1_on_w1(:,:,:)\n" in gen_code)
+ assert ("real(kind=r_def), allocatable :: diff_basis_w1_on_w0(:,:,:)"
+ in code)
+ assert ("real(kind=r_def), allocatable :: diff_basis_w1_on_w1(:,:,:)"
+ in code)
# Check for duplication
for idx in range(2):
- assert gen_code.count(f"REAL(KIND=r_def), pointer :: nodes_w{idx}(:,:)"
- f" => null()") == 1
- assert gen_code.count(
- f" nodes_w{idx} => f{idx}_proxy%vspace%get_nodes()\n") == 1
+ assert code.count(f"real(kind=r_def), pointer :: nodes_w{idx}(:,:)"
+ f" => null()") == 1
+ assert code.count(
+ f" nodes_w{idx} => f{idx}_proxy%vspace%get_nodes()\n") == 1
- assert gen_code.count(f"ALLOCATE (diff_basis_w1_on_w{idx}(diff_dim_w1,"
- f" ndf_w1, ndf_w{idx}))") == 1
+ assert code.count(f"ALLOCATE(diff_basis_w1_on_w{idx}(diff_dim_w1,"
+ f"ndf_w1,ndf_w{idx}))") == 1
- assert gen_code.count(
+ assert code.count(
f"diff_basis_w1_on_w{idx}(:,df_w1,df_nodal) = f1_proxy%vspace%"
- f"call_function(DIFF_BASIS,df_w1,nodes_w{idx}(:,df_nodal))") == 1
+ f"call_function(DIFF_BASIS, df_w1, nodes_w{idx}(:,df_nodal))") == 1
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -1273,105 +1312,112 @@ def test_2eval_1qr_2fs(tmpdir):
os.path.join(BASE_PATH,
"6.10_2eval_2fs_qr_invoke.f90"), api=API)
psy = PSyFactory(API, distributed_memory=False).create(invoke_info)
- gen_code = str(psy.gen)
-
- assert gen_code.count(
- "REAL(KIND=r_def), allocatable :: diff_basis_w1_on_w0(:,:,:), "
- "diff_basis_w1_on_w1(:,:,:), basis_w2_on_w0(:,:,:), "
- "diff_basis_w3_on_w0(:,:,:), basis_w1_qr(:,:,:,:), "
- "diff_basis_w2_qr(:,:,:,:), basis_w3_qr(:,:,:,:), "
- "diff_basis_w3_qr(:,:,:,:)\n") == 1
+ code = str(psy.gen)
+
+ assert ("real(kind=r_def), allocatable :: diff_basis_w1_on_w0(:,:,:)"
+ in code)
+ assert ("real(kind=r_def), allocatable :: diff_basis_w1_on_w1(:,:,:)"
+ in code)
+ assert "real(kind=r_def), allocatable :: basis_w2_on_w0(:,:,:)" in code
+ assert ("real(kind=r_def), allocatable :: diff_basis_w3_on_w0(:,:,:)"
+ in code)
+ assert "real(kind=r_def), allocatable :: basis_w1_qr(:,:,:,:)" in code
+ assert ("real(kind=r_def), allocatable :: diff_basis_w2_qr(:,:,:,:)"
+ in code)
+ assert "real(kind=r_def), allocatable :: basis_w3_qr(:,:,:,:)" in code
+ assert ("real(kind=r_def), allocatable :: diff_basis_w3_qr(:,:,:,:)"
+ in code)
# 1st kernel requires diff basis on W1, evaluated at W0 and W1
# 2nd kernel requires diff basis on W3, evaluated at W0
- assert gen_code.count(
- " diff_dim_w1 = f1_proxy%vspace%get_dim_space_diff()\n") == 1
- assert gen_code.count(
- " ALLOCATE (diff_basis_w1_on_w0(diff_dim_w1, ndf_w1, ndf_w0))\n"
- " ALLOCATE (diff_basis_w1_on_w1(diff_dim_w1, ndf_w1, "
+ assert code.count(
+ " diff_dim_w1 = f1_proxy%vspace%get_dim_space_diff()\n") == 1
+ assert code.count(
+ " ALLOCATE(diff_basis_w1_on_w0(diff_dim_w1,ndf_w1,ndf_w0))\n"
+ " ALLOCATE(diff_basis_w1_on_w1(diff_dim_w1,ndf_w1,"
"ndf_w1))\n") == 1
- assert gen_code.count(
- " diff_dim_w3 = m2_proxy%vspace%get_dim_space_diff()\n") == 1
- assert gen_code.count(
- " ALLOCATE (diff_basis_w3_on_w0(diff_dim_w3, ndf_w3, "
+ assert code.count(
+ " diff_dim_w3 = m2_proxy%vspace%get_dim_space_diff()\n") == 1
+ assert code.count(
+ " ALLOCATE(diff_basis_w3_on_w0(diff_dim_w3,ndf_w3,"
"ndf_w0))\n") == 1
- assert gen_code.count(
- " DO df_nodal=1,ndf_w0\n"
- " DO df_w1=1,ndf_w1\n"
- " diff_basis_w1_on_w0(:,df_w1,df_nodal) = "
- "f1_proxy%vspace%call_function(DIFF_BASIS,df_w1,nodes_w0(:,"
+ assert code.count(
+ " do df_nodal = 1, ndf_w0, 1\n"
+ " do df_w1 = 1, ndf_w1, 1\n"
+ " diff_basis_w1_on_w0(:,df_w1,df_nodal) = "
+ "f1_proxy%vspace%call_function(DIFF_BASIS, df_w1, nodes_w0(:,"
"df_nodal))\n"
- " END DO\n"
- " END DO\n") == 1
- assert gen_code.count(
- " DO df_nodal=1,ndf_w1\n"
- " DO df_w1=1,ndf_w1\n"
- " diff_basis_w1_on_w1(:,df_w1,df_nodal) = f1_proxy%vspace%"
- "call_function(DIFF_BASIS,df_w1,nodes_w1(:,df_nodal))\n"
- " END DO\n"
- " END DO\n") == 1
- assert gen_code.count(
- " DO df_nodal=1,ndf_w0\n"
- " DO df_w3=1,ndf_w3\n"
- " diff_basis_w3_on_w0(:,df_w3,df_nodal) = m2_proxy%vspace%"
- "call_function(DIFF_BASIS,df_w3,nodes_w0(:,df_nodal))\n"
- " END DO\n"
- " END DO\n") == 1
+ " enddo\n"
+ " enddo\n") == 1
+ assert code.count(
+ " do df_nodal = 1, ndf_w1, 1\n"
+ " do df_w1 = 1, ndf_w1, 1\n"
+ " diff_basis_w1_on_w1(:,df_w1,df_nodal) = f1_proxy%vspace%"
+ "call_function(DIFF_BASIS, df_w1, nodes_w1(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n") == 1
+ assert code.count(
+ " do df_nodal = 1, ndf_w0, 1\n"
+ " do df_w3 = 1, ndf_w3, 1\n"
+ " diff_basis_w3_on_w0(:,df_w3,df_nodal) = m2_proxy%vspace%"
+ "call_function(DIFF_BASIS, df_w3, nodes_w0(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n") == 1
# 2nd kernel requires basis on W2 and diff-basis on W3, both evaluated
# on W0 (the to-space of the operator that is written to)
- assert gen_code.count(
- " dim_w2 = op1_proxy%fs_from%get_dim_space()\n") == 1
- assert gen_code.count(
- " ALLOCATE (basis_w2_on_w0(dim_w2, ndf_w2, ndf_w0))\n") == 1
-
- assert gen_code.count(
- " DO df_nodal=1,ndf_w0\n"
- " DO df_w2=1,ndf_w2\n"
- " basis_w2_on_w0(:,df_w2,df_nodal) = op1_proxy%fs_from%"
- "call_function(BASIS,df_w2,nodes_w0(:,df_nodal))\n"
- " END DO\n"
- " END DO\n") == 1
+ assert code.count(
+ " dim_w2 = op1_proxy%fs_from%get_dim_space()\n") == 1
+ assert code.count(
+ " ALLOCATE(basis_w2_on_w0(dim_w2,ndf_w2,ndf_w0))\n") == 1
+
+ assert code.count(
+ " do df_nodal = 1, ndf_w0, 1\n"
+ " do df_w2 = 1, ndf_w2, 1\n"
+ " basis_w2_on_w0(:,df_w2,df_nodal) = op1_proxy%fs_from%"
+ "call_function(BASIS, df_w2, nodes_w0(:,df_nodal))\n"
+ " enddo\n"
+ " enddo\n") == 1
# 3rd kernel requires XYoZ quadrature: basis on W1, diff basis on W2 and
# basis+diff basis on W3.
- assert gen_code.count(
- " CALL qr%compute_function(DIFF_BASIS, f2_proxy%vspace, "
+ assert code.count(
+ " call qr%compute_function(DIFF_BASIS, f2_proxy%vspace, "
"diff_dim_w2, ndf_w2, diff_basis_w2_qr)\n") == 1
- assert gen_code.count(
- " CALL qr%compute_function(DIFF_BASIS, m2_proxy%vspace, "
+ assert code.count(
+ " call qr%compute_function(DIFF_BASIS, m2_proxy%vspace, "
"diff_dim_w3, ndf_w3, diff_basis_w3_qr)\n") == 1
- assert (" loop0_start = 1\n"
- " loop0_stop = f0_proxy%vspace%get_ncell()\n"
- " loop1_start = 1\n"
- " loop1_stop = op1_proxy%fs_from%get_ncell()\n"
- " loop2_start = 1\n"
- " loop2_stop = f1_proxy%vspace%get_ncell()\n" in gen_code)
+ assert (" loop0_start = 1\n"
+ " loop0_stop = f0_proxy%vspace%get_ncell()\n"
+ " loop1_start = 1\n"
+ " loop1_stop = op1_proxy%fs_from%get_ncell()\n"
+ " loop2_start = 1\n"
+ " loop2_stop = f1_proxy%vspace%get_ncell()\n" in code)
- assert (" DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_eval_2fs_code(nlayers_f0, f0_data, "
+ assert (" do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_eval_2fs_code(nlayers_f0, f0_data, "
"f1_data, ndf_w0, undf_w0, map_w0(:,cell), ndf_w1, undf_w1,"
" map_w1(:,cell), diff_basis_w1_on_w0, diff_basis_w1_on_w1)\n"
- " END DO\n"
- " DO cell = loop1_start, loop1_stop, 1\n"
- " CALL testkern_eval_op_code(cell, nlayers_op1, "
+ " enddo\n"
+ " do cell = loop1_start, loop1_stop, 1\n"
+ " call testkern_eval_op_code(cell, nlayers_op1, "
"op1_proxy%ncell_3d, op1_local_stencil, m2_data, "
"ndf_w0, ndf_w2, basis_w2_on_w0, ndf_w3, undf_w3, map_w3(:,cell),"
" diff_basis_w3_on_w0)\n"
- " END DO\n"
- " DO cell = loop2_start, loop2_stop, 1\n"
- " CALL testkern_qr_code(nlayers_f1, f1_data, "
+ " enddo\n"
+ " do cell = loop2_start, loop2_stop, 1\n"
+ " call testkern_qr_code(nlayers_f1, f1_data, "
"f2_data, m1_data, a, m2_data, istp, ndf_w1, "
"undf_w1, map_w1(:,cell), basis_w1_qr, ndf_w2, undf_w2, "
"map_w2(:,cell), diff_basis_w2_qr, ndf_w3, undf_w3, "
"map_w3(:,cell), basis_w3_qr, diff_basis_w3_qr, np_xy_qr, "
"np_z_qr, weights_xy_qr, weights_z_qr)\n"
- " END DO\n" in gen_code)
+ " enddo\n" in code)
- assert gen_code.count(
- "DEALLOCATE (basis_w1_qr, basis_w2_on_w0, basis_w3_qr, "
+ assert code.count(
+ "DEALLOCATE(basis_w1_qr, basis_w2_on_w0, basis_w3_qr, "
"diff_basis_w1_on_w0, diff_basis_w1_on_w1, diff_basis_w2_qr, "
"diff_basis_w3_on_w0, diff_basis_w3_qr)\n") == 1
@@ -1385,11 +1431,11 @@ def test_eval_agglomerate(tmpdir):
os.path.join(BASE_PATH,
"6.11_2eval_2kern_invoke.f90"), api=API)
psy = PSyFactory(API, distributed_memory=False).create(invoke_info)
- gen_code = str(psy.gen)
+ code = str(psy.gen)
# We should compute differential basis functions for W1 evaluated on both
# W0 and W1.
- assert gen_code.count("diff_basis_w1_on_w0(:,df_w1,df_nodal) = ") == 1
- assert gen_code.count("diff_basis_w1_on_w1(:,df_w1,df_nodal) = ") == 1
+ assert code.count("diff_basis_w1_on_w0(:,df_w1,df_nodal) = ") == 1
+ assert code.count("diff_basis_w1_on_w1(:,df_w1,df_nodal) = ") == 1
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -1436,7 +1482,7 @@ def test_eval_agglomerate(tmpdir):
'''
-def test_basis_evaluator():
+def test_basis_evaluator(fortran_writer):
''' Check that basis functions for an evaluator are handled correctly for
kernel stubs.
@@ -1445,10 +1491,10 @@ def test_basis_evaluator():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- generated_code = str(kernel.gen_stub)
+ code = fortran_writer(kernel.gen_stub)
- output_arg_list = (
- " SUBROUTINE dummy_code(cell, nlayers, field_1_w0, op_2_ncell_3d, "
+ assert (
+ "subroutine dummy_code(cell, nlayers, field_1_w0, op_2_ncell_3d, "
"op_2, field_3_w2, op_4_ncell_3d, op_4, field_5_wtheta, "
"op_6_ncell_3d, op_6, field_7_w2v, op_8_ncell_3d, op_8, field_9_wchi, "
"op_10_ncell_3d, op_10, field_11_w2vtrace, op_12_ncell_3d, op_12, "
@@ -1459,86 +1505,93 @@ def test_basis_evaluator():
"ndf_w2broken, basis_w2broken_on_w0, ndf_wchi, undf_wchi, map_wchi, "
"basis_wchi_on_w0, ndf_w2trace, basis_w2trace_on_w0, ndf_w2vtrace, "
"undf_w2vtrace, map_w2vtrace, basis_w2vtrace_on_w0, ndf_w2htrace, "
- "basis_w2htrace_on_w0)\n")
- assert output_arg_list in generated_code
- output_declns = (
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w0\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w0) :: map_w0\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2) :: map_w2\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2v\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2v) "
- ":: map_w2v\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2vtrace\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2vtrace) "
- ":: map_w2vtrace\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_wchi\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_wchi) "
- ":: map_wchi\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_wtheta\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_wtheta) "
- ":: map_wtheta\n"
- " INTEGER(KIND=i_def), intent(in) :: undf_w0, ndf_w1, undf_w2, "
- "ndf_w3, undf_wtheta, ndf_w2h, undf_w2v, ndf_w2broken, undf_wchi, "
- "ndf_w2trace, undf_w2vtrace, ndf_w2htrace\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w0) "
- ":: field_1_w0\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_w2) "
- ":: field_3_w2\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_wtheta) "
- ":: field_5_wtheta\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_w2v) "
- ":: field_7_w2v\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_wchi) "
- ":: field_9_wchi\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_w2vtrace) "
- ":: field_11_w2vtrace\n"
- " INTEGER(KIND=i_def), intent(in) :: cell\n"
- " INTEGER(KIND=i_def), intent(in) :: op_2_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension(op_2_ncell_3d,"
- "ndf_w1,ndf_w1) :: op_2\n"
- " INTEGER(KIND=i_def), intent(in) :: op_4_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension(op_4_ncell_3d,"
- "ndf_w3,ndf_w3) :: op_4\n"
- " INTEGER(KIND=i_def), intent(in) :: op_6_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension(op_6_ncell_3d,"
- "ndf_w2h,ndf_w2h) :: op_6\n"
- " INTEGER(KIND=i_def), intent(in) :: op_8_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension(op_8_ncell_3d,"
- "ndf_w2broken,ndf_w2broken) :: op_8\n"
- " INTEGER(KIND=i_def), intent(in) :: op_10_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension(op_10_ncell_3d,"
- "ndf_w2trace,ndf_w2trace) :: op_10\n"
- " INTEGER(KIND=i_def), intent(in) :: op_12_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension(op_12_ncell_3d,"
- "ndf_w2htrace,ndf_w2htrace) :: op_12\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w0,ndf_w0) "
- ":: basis_w0_on_w0\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w1,ndf_w0) "
- ":: basis_w1_on_w0\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w2,ndf_w0) "
- ":: basis_w2_on_w0\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w3,ndf_w0) "
- ":: basis_w3_on_w0\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_wtheta,ndf_w0) "
- ":: basis_wtheta_on_w0\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w2h,ndf_w0) "
- ":: basis_w2h_on_w0\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w2v,ndf_w0) "
- ":: basis_w2v_on_w0\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w2broken,"
- "ndf_w0) :: basis_w2broken_on_w0\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_wchi,ndf_w0) "
- ":: basis_wchi_on_w0\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w2trace,"
- "ndf_w0) :: basis_w2trace_on_w0\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w2vtrace,"
- "ndf_w0) :: basis_w2vtrace_on_w0\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w2htrace,"
- "ndf_w0) :: basis_w2htrace_on_w0\n"
- )
- assert output_declns in generated_code
+ "basis_w2htrace_on_w0)" in code)
+ assert "integer(kind=i_def), intent(in) :: nlayers" in code
+ assert "integer(kind=i_def), intent(in) :: ndf_w0" in code
+ assert ("integer(kind=i_def), dimension(ndf_w0), intent(in) "
+ ":: map_w0" in code)
+ assert "integer(kind=i_def), intent(in) :: ndf_w2" in code
+ assert ("integer(kind=i_def), dimension(ndf_w2), intent(in) :: map_w2"
+ in code)
+ assert "integer(kind=i_def), intent(in) :: ndf_w2v" in code
+ assert ("integer(kind=i_def), dimension(ndf_w2v), intent(in) :: map_w2v"
+ in code)
+ assert "integer(kind=i_def), intent(in) :: ndf_w2vtrace" in code
+ assert ("integer(kind=i_def), dimension(ndf_w2vtrace), intent(in) "
+ ":: map_w2vtrace" in code)
+ assert "integer(kind=i_def), intent(in) :: ndf_wchi" in code
+ assert ("integer(kind=i_def), dimension(ndf_wchi), intent(in) "
+ ":: map_wchi") in code
+ assert "integer(kind=i_def), intent(in) :: ndf_wtheta" in code
+ assert ("integer(kind=i_def), dimension(ndf_wtheta), intent(in) "
+ ":: map_wtheta" in code)
+ assert "integer(kind=i_def), intent(in) :: undf_w0" in code
+ assert "integer(kind=i_def), intent(in) :: ndf_w1" in code
+ assert "integer(kind=i_def), intent(in) :: undf_w2" in code
+ assert "integer(kind=i_def), intent(in) :: ndf_w3" in code
+ assert "integer(kind=i_def), intent(in) :: undf_wtheta" in code
+ assert "integer(kind=i_def), intent(in) :: ndf_w2h" in code
+ assert "integer(kind=i_def), intent(in) :: undf_w2v" in code
+ assert "integer(kind=i_def), intent(in) :: ndf_w2broken" in code
+ assert "integer(kind=i_def), intent(in) :: undf_wchi" in code
+ assert "integer(kind=i_def), intent(in) :: ndf_w2trace" in code
+ assert "integer(kind=i_def), intent(in) :: undf_w2vtrace" in code
+ assert "integer(kind=i_def), intent(in) :: ndf_w2htrace" in code
+ assert ("real(kind=r_def), dimension(undf_w0), intent(inout) "
+ ":: field_1_w0" in code)
+ assert ("real(kind=r_def), dimension(undf_w2), intent(in) "
+ ":: field_3_w2" in code)
+ assert ("real(kind=r_def), dimension(undf_wtheta), intent(in) "
+ ":: field_5_wtheta" in code)
+ assert ("real(kind=r_def), dimension(undf_w2v), intent(in) "
+ ":: field_7_w2v" in code)
+ assert ("real(kind=r_def), dimension(undf_wchi), intent(in) "
+ ":: field_9_wchi" in code)
+ assert ("real(kind=r_def), dimension(undf_w2vtrace), intent(in) "
+ ":: field_11_w2vtrace" in code)
+ assert "integer(kind=i_def), intent(in) :: cell" in code
+ assert "integer(kind=i_def), intent(in) :: op_2_ncell_3d" in code
+ assert ("real(kind=r_def), dimension(op_2_ncell_3d,ndf_w1,ndf_w1), "
+ "intent(in) :: op_2" in code)
+ assert "integer(kind=i_def), intent(in) :: op_4_ncell_3d" in code
+ assert ("real(kind=r_def), dimension(op_4_ncell_3d,ndf_w3,ndf_w3), "
+ "intent(in) :: op_4" in code)
+ assert "integer(kind=i_def), intent(in) :: op_6_ncell_3d" in code
+ assert ("real(kind=r_def), dimension(op_6_ncell_3d,ndf_w2h,ndf_w2h), "
+ "intent(in) :: op_6" in code)
+ assert "integer(kind=i_def), intent(in) :: op_8_ncell_3d" in code
+ assert ("real(kind=r_def), dimension(op_8_ncell_3d,ndf_w2broken,"
+ "ndf_w2broken), intent(in) :: op_8" in code)
+ assert "integer(kind=i_def), intent(in) :: op_10_ncell_3d" in code
+ assert ("real(kind=r_def), dimension(op_10_ncell_3d,ndf_w2trace,"
+ "ndf_w2trace), intent(in) :: op_10" in code)
+ assert "integer(kind=i_def), intent(in) :: op_12_ncell_3d" in code
+ assert ("real(kind=r_def), dimension(op_12_ncell_3d,ndf_w2htrace,"
+ "ndf_w2htrace), intent(in) :: op_12" in code)
+ assert ("real(kind=r_def), dimension(1,ndf_w0,ndf_w0), intent(in) "
+ ":: basis_w0_on_w0" in code)
+ assert ("real(kind=r_def), dimension(3,ndf_w1,ndf_w0), intent(in) "
+ ":: basis_w1_on_w0" in code)
+ assert ("real(kind=r_def), dimension(3,ndf_w2,ndf_w0), intent(in) "
+ ":: basis_w2_on_w0" in code)
+ assert ("real(kind=r_def), dimension(1,ndf_w3,ndf_w0), intent(in) "
+ ":: basis_w3_on_w0" in code)
+ assert ("real(kind=r_def), dimension(1,ndf_wtheta,ndf_w0), intent(in) "
+ ":: basis_wtheta_on_w0" in code)
+ assert ("real(kind=r_def), dimension(3,ndf_w2h,ndf_w0), intent(in) "
+ ":: basis_w2h_on_w0" in code)
+ assert ("real(kind=r_def), dimension(3,ndf_w2v,ndf_w0), intent(in) "
+ ":: basis_w2v_on_w0" in code)
+ assert ("real(kind=r_def), dimension(3,ndf_w2broken,ndf_w0), intent(in) "
+ ":: basis_w2broken_on_w0\n" in code)
+ assert ("real(kind=r_def), dimension(1,ndf_wchi,ndf_w0), intent(in) "
+ ":: basis_wchi_on_w0\n" in code)
+ assert ("real(kind=r_def), dimension(1,ndf_w2trace,ndf_w0), intent(in) "
+ ":: basis_w2trace_on_w0\n" in code)
+ assert ("real(kind=r_def), dimension(1,ndf_w2vtrace,ndf_w0), intent(in) "
+ ":: basis_w2vtrace_on_w0\n" in code)
+ assert ("real(kind=r_def), dimension(1,ndf_w2htrace,ndf_w0), intent(in) "
+ ":: basis_w2htrace_on_w0\n" in code)
BASIS_UNSUPPORTED_SPACE = '''
@@ -1639,7 +1692,7 @@ def test_basis_unsupported_space():
'''
-def test_diff_basis():
+def test_diff_basis(fortran_writer):
''' Test that differential basis functions are handled correctly
for kernel stubs with quadrature.
@@ -1648,12 +1701,13 @@ def test_diff_basis():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- generated_code = str(kernel.gen_stub)
- output = (
- " MODULE dummy_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE dummy_code(cell, nlayers, field_1_w0, op_2_ncell_3d, "
+ code = fortran_writer(kernel.gen_stub)
+ assert (
+ "module dummy_mod\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine dummy_code(cell, nlayers, field_1_w0, op_2_ncell_3d, "
"op_2, field_3_w2, op_4_ncell_3d, op_4, field_5_wtheta, "
"op_6_ncell_3d, op_6, field_7_w2v, op_8_ncell_3d, op_8, field_9_wchi, "
"op_10_ncell_3d, op_10, field_11_w2htrace, op_12_ncell_3d, op_12, "
@@ -1668,92 +1722,100 @@ def test_diff_basis():
"map_w2htrace, diff_basis_w2htrace_qr_xyoz, ndf_w2vtrace, "
"diff_basis_w2vtrace_qr_xyoz, np_xy_qr_xyoz, np_z_qr_xyoz, "
"weights_xy_qr_xyoz, weights_z_qr_xyoz)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w0\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w0) :: map_w0\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2) :: map_w2\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2htrace\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2htrace) "
- ":: map_w2htrace\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2v\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2v) "
- ":: map_w2v\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_wchi\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_wchi) "
- ":: map_wchi\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_wtheta\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_wtheta) "
- ":: map_wtheta\n"
- " INTEGER(KIND=i_def), intent(in) :: undf_w0, ndf_w1, undf_w2, "
- "ndf_w3, undf_wtheta, ndf_w2h, undf_w2v, ndf_w2broken, undf_wchi, "
- "ndf_w2trace, undf_w2htrace, ndf_w2vtrace\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w0) "
- ":: field_1_w0\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_w2) "
- ":: field_3_w2\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_wtheta) "
- ":: field_5_wtheta\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_w2v) "
- ":: field_7_w2v\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_wchi) "
- ":: field_9_wchi\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w2htrace) "
- ":: field_11_w2htrace\n"
- " INTEGER(KIND=i_def), intent(in) :: cell\n"
- " INTEGER(KIND=i_def), intent(in) :: op_2_ncell_3d\n"
- " REAL(KIND=r_def), intent(inout), dimension(op_2_ncell_3d,"
- "ndf_w1,ndf_w1) :: op_2\n"
- " INTEGER(KIND=i_def), intent(in) :: op_4_ncell_3d\n"
- " REAL(KIND=r_def), intent(inout), dimension(op_4_ncell_3d,"
- "ndf_w3,ndf_w3) :: op_4\n"
- " INTEGER(KIND=i_def), intent(in) :: op_6_ncell_3d\n"
- " REAL(KIND=r_def), intent(inout), dimension(op_6_ncell_3d,"
- "ndf_w2h,ndf_w2h) :: op_6\n"
- " INTEGER(KIND=i_def), intent(in) :: op_8_ncell_3d\n"
- " REAL(KIND=r_def), intent(inout), dimension(op_8_ncell_3d,"
- "ndf_w2broken,ndf_w2broken) :: op_8\n"
- " INTEGER(KIND=i_def), intent(in) :: op_10_ncell_3d\n"
- " REAL(KIND=r_def), intent(inout), dimension(op_10_ncell_3d,"
- "ndf_w2trace,ndf_w2trace) :: op_10\n"
- " INTEGER(KIND=i_def), intent(in) :: op_12_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension(op_12_ncell_3d,"
- "ndf_w2vtrace,ndf_w2vtrace) :: op_12\n"
- " INTEGER(KIND=i_def), intent(in) :: np_xy_qr_xyoz, "
- "np_z_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w0,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: diff_basis_w0_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w1,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: diff_basis_w1_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w2,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: diff_basis_w2_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w3,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: diff_basis_w3_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_wtheta,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: diff_basis_wtheta_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w2h,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: diff_basis_w2h_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w2v,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: diff_basis_w2v_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w2broken,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: diff_basis_w2broken_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_wchi,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: diff_basis_wchi_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w2trace,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: diff_basis_w2trace_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w2htrace,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: diff_basis_w2htrace_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w2vtrace,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: diff_basis_w2vtrace_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(np_xy_qr_xyoz) "
- ":: weights_xy_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(np_z_qr_xyoz) "
- ":: weights_z_qr_xyoz\n"
- " END SUBROUTINE dummy_code\n"
- " END MODULE dummy_mod")
- assert output in generated_code
+ " use constants_mod\n") in code
+
+ assert "integer(kind=i_def), intent(in) :: nlayers" in code
+ assert "integer(kind=i_def), intent(in) :: ndf_w0" in code
+ assert ("integer(kind=i_def), dimension(ndf_w0), intent(in) :: map_w0"
+ in code)
+ assert "integer(kind=i_def), intent(in) :: ndf_w2" in code
+ assert ("integer(kind=i_def), dimension(ndf_w2), intent(in) :: map_w2"
+ in code)
+ assert "integer(kind=i_def), intent(in) :: ndf_w2htrace" in code
+ assert ("integer(kind=i_def), dimension(ndf_w2htrace), intent(in) "
+ ":: map_w2htrace" in code)
+ assert "integer(kind=i_def), intent(in) :: ndf_w2v" in code
+ assert ("integer(kind=i_def), dimension(ndf_w2v), intent(in) "
+ ":: map_w2v" in code)
+ assert "integer(kind=i_def), intent(in) :: ndf_wchi" in code
+ assert ("integer(kind=i_def), dimension(ndf_wchi), intent(in) "
+ ":: map_wchi" in code)
+ assert "integer(kind=i_def), intent(in) :: ndf_wtheta" in code
+ assert ("integer(kind=i_def), dimension(ndf_wtheta), intent(in) "
+ ":: map_wtheta" in code)
+ assert "integer(kind=i_def), intent(in) :: undf_w0" in code
+ assert "integer(kind=i_def), intent(in) :: ndf_w1" in code
+ assert "integer(kind=i_def), intent(in) :: undf_w2" in code
+ assert "integer(kind=i_def), intent(in) :: ndf_w3" in code
+ assert "integer(kind=i_def), intent(in) :: undf_wtheta" in code
+ assert "integer(kind=i_def), intent(in) :: ndf_w2h" in code
+ assert "integer(kind=i_def), intent(in) :: ndf_w2v" in code
+ assert "integer(kind=i_def), intent(in) :: ndf_w2broken" in code
+ assert "integer(kind=i_def), intent(in) :: undf_wchi" in code
+ assert "integer(kind=i_def), intent(in) :: ndf_w2trace" in code
+ assert "integer(kind=i_def), intent(in) :: undf_w2htrace" in code
+ assert "integer(kind=i_def), intent(in) :: ndf_w2vtrace" in code
+ assert ("real(kind=r_def), dimension(undf_w0), intent(inout) "
+ ":: field_1_w0" in code)
+ assert ("real(kind=r_def), dimension(undf_w2), intent(in) "
+ ":: field_3_w2" in code)
+ assert ("real(kind=r_def), dimension(undf_wtheta), intent(inout) "
+ ":: field_5_wtheta" in code)
+ assert ("real(kind=r_def), dimension(undf_w2v), intent(in) "
+ ":: field_7_w2v" in code)
+ assert ("real(kind=r_def), dimension(undf_wchi), intent(in) "
+ ":: field_9_wchi" in code)
+ assert ("real(kind=r_def), dimension(undf_w2htrace), intent(inout) "
+ ":: field_11_w2htrace" in code)
+ assert "integer(kind=i_def), intent(in) :: cell" in code
+ assert "integer(kind=i_def), intent(in) :: op_2_ncell_3d" in code
+ assert ("real(kind=r_def), dimension(op_2_ncell_3d,ndf_w1,ndf_w1"
+ "), intent(inout) :: op_2" in code)
+ assert "integer(kind=i_def), intent(in) :: op_4_ncell_3d" in code
+ assert ("real(kind=r_def), dimension(op_4_ncell_3d,ndf_w3,ndf_w3), "
+ "intent(inout) :: op_4" in code)
+ assert "integer(kind=i_def), intent(in) :: op_6_ncell_3d" in code
+ assert ("real(kind=r_def), dimension(op_6_ncell_3d,ndf_w2h,ndf_w2h), "
+ "intent(inout) :: op_6" in code)
+ assert "integer(kind=i_def), intent(in) :: op_8_ncell_3d" in code
+ assert ("real(kind=r_def), dimension(op_8_ncell_3d,ndf_w2broken,"
+ "ndf_w2broken), intent(inout) :: op_8" in code)
+ assert "integer(kind=i_def), intent(in) :: op_10_ncell_3d" in code
+ assert ("real(kind=r_def), dimension(op_10_ncell_3d,ndf_w2trace,"
+ "ndf_w2trace), intent(inout) :: op_10" in code)
+ assert "integer(kind=i_def), intent(in) :: op_12_ncell_3d" in code
+ assert ("real(kind=r_def), dimension(op_12_ncell_3d,ndf_w2vtrace,"
+ "ndf_w2vtrace), intent(in) :: op_12" in code)
+ assert "integer(kind=i_def), intent(in) :: np_xy_qr_xyoz" in code
+ assert "integer(kind=i_def), intent(in) :: np_z_qr_xyoz" in code
+ assert ("real(kind=r_def), dimension(3,ndf_w0,np_xy_qr_xyoz,"
+ "np_z_qr_xyoz), intent(in) :: diff_basis_w0_qr_xyoz" in code)
+ assert ("real(kind=r_def), dimension(3,ndf_w1,np_xy_qr_xyoz,"
+ "np_z_qr_xyoz), intent(in) :: diff_basis_w1_qr_xyoz" in code)
+ assert ("real(kind=r_def), dimension(1,ndf_w2,np_xy_qr_xyoz,"
+ "np_z_qr_xyoz), intent(in) :: diff_basis_w2_qr_xyoz" in code)
+ assert ("real(kind=r_def), dimension(3,ndf_w3,np_xy_qr_xyoz,"
+ "np_z_qr_xyoz), intent(in) :: diff_basis_w3_qr_xyoz" in code)
+ assert ("real(kind=r_def), dimension(3,ndf_wtheta,np_xy_qr_xyoz,"
+ "np_z_qr_xyoz), intent(in) :: diff_basis_wtheta_qr_xyoz" in code)
+ assert ("real(kind=r_def), dimension(1,ndf_w2h,np_xy_qr_xyoz,"
+ "np_z_qr_xyoz), intent(in) :: diff_basis_w2h_qr_xyoz" in code)
+ assert ("real(kind=r_def), dimension(1,ndf_w2v,np_xy_qr_xyoz,"
+ "np_z_qr_xyoz), intent(in) :: diff_basis_w2v_qr_xyoz" in code)
+ assert ("real(kind=r_def), dimension(1,ndf_w2broken,np_xy_qr_xyoz,"
+ "np_z_qr_xyoz), intent(in) :: diff_basis_w2broken_qr_xyoz" in code)
+ assert ("real(kind=r_def), dimension(3,ndf_wchi,np_xy_qr_xyoz,"
+ "np_z_qr_xyoz), intent(in) :: diff_basis_wchi_qr_xyoz" in code)
+ assert ("real(kind=r_def), dimension(3,ndf_w2trace,np_xy_qr_xyoz,"
+ "np_z_qr_xyoz), intent(in) :: diff_basis_w2trace_qr_xyoz" in code)
+ assert ("real(kind=r_def), dimension(3,ndf_w2htrace,np_xy_qr_xyoz,"
+ "np_z_qr_xyoz), intent(in) :: diff_basis_w2htrace_qr_xyoz" in code)
+ assert ("real(kind=r_def), dimension(3,ndf_w2vtrace,np_xy_qr_xyoz,"
+ "np_z_qr_xyoz), intent(in) :: diff_basis_w2vtrace_qr_xyoz" in code)
+ assert ("real(kind=r_def), dimension(np_xy_qr_xyoz), intent(in) "
+ ":: weights_xy_qr_xyoz" in code)
+ assert ("real(kind=r_def), dimension(np_z_qr_xyoz), intent(in) "
+ ":: weights_z_qr_xyoz" in code)
# Metadata for a kernel that requires differential basis functions
@@ -1805,7 +1867,7 @@ def test_diff_basis():
'''
-def test_diff_basis_eval():
+def test_diff_basis_eval(fortran_writer):
''' Test that differential basis functions are handled correctly
for kernel stubs with an evaluator.
@@ -1814,13 +1876,15 @@ def test_diff_basis_eval():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- generated_code = str(kernel.gen_stub)
+ generated_code = fortran_writer(kernel.gen_stub)
output_args = (
- " MODULE dummy_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE dummy_code(cell, nlayers, field_1_w0, op_2_ncell_3d, "
+ "module dummy_mod\n"
+ " implicit none\n"
+ " public\n"
+ "\n"
+ " contains\n"
+ " subroutine dummy_code(cell, nlayers, field_1_w0, op_2_ncell_3d, "
"op_2, field_3_w2, op_4_ncell_3d, op_4, field_5_wtheta, "
"op_6_ncell_3d, op_6, field_7_w2v, op_8_ncell_3d, op_8, field_9_wchi, "
"op_10_ncell_3d, op_10, field_11_w2vtrace, op_12_ncell_3d, op_12, "
@@ -1835,88 +1899,86 @@ def test_diff_basis_eval():
"diff_basis_w2vtrace_on_w2, ndf_w2htrace, "
"diff_basis_w2htrace_on_w2)\n")
assert output_args in generated_code
- output_declns = (
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w0\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w0) :: map_w0\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2) :: map_w2\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2v\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2v) "
- ":: map_w2v\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2vtrace\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2vtrace) "
- ":: map_w2vtrace\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_wchi\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_wchi) "
- ":: map_wchi\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_wtheta\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_wtheta) "
- ":: map_wtheta\n"
- " INTEGER(KIND=i_def), intent(in) :: undf_w0, undf_w2, ndf_w1, "
- "ndf_w3, undf_wtheta, ndf_w2h, undf_w2v, ndf_w2broken, undf_wchi, "
- "ndf_w2trace, undf_w2vtrace, ndf_w2htrace\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_w0) "
- ":: field_1_w0\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_w2) "
- ":: field_3_w2\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_wtheta) "
- ":: field_5_wtheta\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_w2v) "
- ":: field_7_w2v\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_wchi) "
- ":: field_9_wchi\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_w2vtrace) "
- ":: field_11_w2vtrace\n"
- " INTEGER(KIND=i_def), intent(in) :: cell\n"
- " INTEGER(KIND=i_def), intent(in) :: op_2_ncell_3d\n"
- " REAL(KIND=r_def), intent(inout), dimension(op_2_ncell_3d,"
- "ndf_w2,ndf_w1) :: op_2\n"
- " INTEGER(KIND=i_def), intent(in) :: op_4_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension(op_4_ncell_3d,"
- "ndf_w3,ndf_w3) :: op_4\n"
- " INTEGER(KIND=i_def), intent(in) :: op_6_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension(op_6_ncell_3d,"
- "ndf_w2h,ndf_w2h) :: op_6\n"
- " INTEGER(KIND=i_def), intent(in) :: op_8_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension(op_8_ncell_3d,"
- "ndf_w2broken,ndf_w2broken) :: op_8\n"
- " INTEGER(KIND=i_def), intent(in) :: op_10_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension(op_10_ncell_3d,"
- "ndf_w2trace,ndf_w2trace) :: op_10\n"
- " INTEGER(KIND=i_def), intent(in) :: op_12_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension(op_12_ncell_3d,"
- "ndf_w2htrace,ndf_w2htrace) :: op_12\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w0,ndf_w2) "
- ":: diff_basis_w0_on_w2\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w1,ndf_w2) "
- ":: diff_basis_w1_on_w2\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w2,ndf_w2) "
- ":: diff_basis_w2_on_w2\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w3,ndf_w2) "
- ":: diff_basis_w3_on_w2\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_wtheta,ndf_w2) "
- ":: diff_basis_wtheta_on_w2\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w2h,ndf_w2) "
- ":: diff_basis_w2h_on_w2\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w2v,ndf_w2) "
- ":: diff_basis_w2v_on_w2\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w2broken,"
- "ndf_w2) :: diff_basis_w2broken_on_w2\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_wchi,ndf_w2) "
- ":: diff_basis_wchi_on_w2\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w2trace,"
- "ndf_w2) :: diff_basis_w2trace_on_w2\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w2vtrace,"
- "ndf_w2) :: diff_basis_w2vtrace_on_w2\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w2htrace,"
- "ndf_w2) :: diff_basis_w2htrace_on_w2\n"
- " END SUBROUTINE dummy_code\n"
- )
- assert output_declns in generated_code
-
-
-def test_2eval_stubgen():
+ assert """\
+ use constants_mod
+ integer(kind=i_def), intent(in) :: nlayers
+ integer(kind=i_def), intent(in) :: ndf_w0
+ integer(kind=i_def), dimension(ndf_w0), intent(in) :: map_w0
+ integer(kind=i_def), intent(in) :: ndf_w2
+ integer(kind=i_def), dimension(ndf_w2), intent(in) :: map_w2
+ integer(kind=i_def), intent(in) :: ndf_w2v
+ integer(kind=i_def), dimension(ndf_w2v), intent(in) :: map_w2v
+ integer(kind=i_def), intent(in) :: ndf_w2vtrace
+ integer(kind=i_def), dimension(ndf_w2vtrace), intent(in) :: map_w2vtrace
+ integer(kind=i_def), intent(in) :: ndf_wchi
+ integer(kind=i_def), dimension(ndf_wchi), intent(in) :: map_wchi
+ integer(kind=i_def), intent(in) :: ndf_wtheta
+ integer(kind=i_def), dimension(ndf_wtheta), intent(in) :: map_wtheta
+ integer(kind=i_def), intent(in) :: undf_w0
+ integer(kind=i_def), intent(in) :: undf_w2
+ integer(kind=i_def), intent(in) :: ndf_w1
+ integer(kind=i_def), intent(in) :: ndf_w3
+ integer(kind=i_def), intent(in) :: undf_wtheta
+ integer(kind=i_def), intent(in) :: ndf_w2h
+ integer(kind=i_def), intent(in) :: undf_w2v
+ integer(kind=i_def), intent(in) :: ndf_w2broken
+ integer(kind=i_def), intent(in) :: undf_wchi
+ integer(kind=i_def), intent(in) :: ndf_w2trace
+ integer(kind=i_def), intent(in) :: undf_w2vtrace
+ integer(kind=i_def), intent(in) :: ndf_w2htrace
+ real(kind=r_def), dimension(undf_w0), intent(in) :: field_1_w0
+ real(kind=r_def), dimension(undf_w2), intent(in) :: field_3_w2
+ real(kind=r_def), dimension(undf_wtheta), intent(in) :: field_5_wtheta
+ real(kind=r_def), dimension(undf_w2v), intent(in) :: field_7_w2v
+ real(kind=r_def), dimension(undf_wchi), intent(in) :: field_9_wchi
+ real(kind=r_def), dimension(undf_w2vtrace), intent(in) :: field_11_w2vtrace
+ integer(kind=i_def), intent(in) :: cell
+ integer(kind=i_def), intent(in) :: op_2_ncell_3d
+ real(kind=r_def), dimension(op_2_ncell_3d,ndf_w2,ndf_w1), intent(inout) \
+:: op_2
+ integer(kind=i_def), intent(in) :: op_4_ncell_3d
+ real(kind=r_def), dimension(op_4_ncell_3d,ndf_w3,ndf_w3), intent(in) \
+:: op_4
+ integer(kind=i_def), intent(in) :: op_6_ncell_3d
+ real(kind=r_def), dimension(op_6_ncell_3d,ndf_w2h,ndf_w2h), intent(in) \
+:: op_6
+ integer(kind=i_def), intent(in) :: op_8_ncell_3d
+ real(kind=r_def), dimension(op_8_ncell_3d,ndf_w2broken,ndf_w2broken), \
+intent(in) :: op_8
+ integer(kind=i_def), intent(in) :: op_10_ncell_3d
+ real(kind=r_def), dimension(op_10_ncell_3d,ndf_w2trace,ndf_w2trace), \
+intent(in) :: op_10
+ integer(kind=i_def), intent(in) :: op_12_ncell_3d
+ real(kind=r_def), dimension(op_12_ncell_3d,ndf_w2htrace,ndf_w2htrace), \
+intent(in) :: op_12
+ real(kind=r_def), dimension(3,ndf_w0,ndf_w2), intent(in) :: \
+diff_basis_w0_on_w2
+ real(kind=r_def), dimension(3,ndf_w1,ndf_w2), intent(in) :: \
+diff_basis_w1_on_w2
+ real(kind=r_def), dimension(1,ndf_w2,ndf_w2), intent(in) :: \
+diff_basis_w2_on_w2
+ real(kind=r_def), dimension(3,ndf_w3,ndf_w2), intent(in) :: \
+diff_basis_w3_on_w2
+ real(kind=r_def), dimension(3,ndf_wtheta,ndf_w2), intent(in) :: \
+diff_basis_wtheta_on_w2
+ real(kind=r_def), dimension(1,ndf_w2h,ndf_w2), intent(in) :: \
+diff_basis_w2h_on_w2
+ real(kind=r_def), dimension(1,ndf_w2v,ndf_w2), intent(in) :: \
+diff_basis_w2v_on_w2
+ real(kind=r_def), dimension(1,ndf_w2broken,ndf_w2), intent(in) :: \
+diff_basis_w2broken_on_w2
+ real(kind=r_def), dimension(3,ndf_wchi,ndf_w2), intent(in) :: \
+diff_basis_wchi_on_w2
+ real(kind=r_def), dimension(3,ndf_w2trace,ndf_w2), intent(in) :: \
+diff_basis_w2trace_on_w2
+ real(kind=r_def), dimension(3,ndf_w2vtrace,ndf_w2), intent(in) :: \
+diff_basis_w2vtrace_on_w2
+ real(kind=r_def), dimension(3,ndf_w2htrace,ndf_w2), intent(in) :: \
+diff_basis_w2htrace_on_w2
+""" in generated_code
+
+
+def test_2eval_stubgen(fortran_writer):
''' Check that we generate the correct kernel stub when an evaluator is
required on more than one space.
@@ -1931,10 +1993,10 @@ def test_2eval_stubgen():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- generated_code = str(kernel.gen_stub)
+ generated_code = fortran_writer(kernel.gen_stub)
assert (
- "SUBROUTINE dummy_code(cell, nlayers, field_1_w0, op_2_ncell_3d, "
+ "subroutine dummy_code(cell, nlayers, field_1_w0, op_2_ncell_3d, "
"op_2, field_3_w2, op_4_ncell_3d, op_4, field_5_wtheta, "
"op_6_ncell_3d, op_6, field_7_w2v, op_8_ncell_3d, op_8, "
"field_9_wchi, op_10_ncell_3d, op_10, field_11_w2vtrace, "
@@ -1955,59 +2017,107 @@ def test_2eval_stubgen():
"diff_basis_w2vtrace_on_wtheta, ndf_w2htrace, "
"diff_basis_w2htrace_on_w2h, diff_basis_w2htrace_on_wtheta)\n" in
generated_code)
- assert (
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w0\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w0) :: map_w0\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2) :: map_w2\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2v\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2v) "
- ":: map_w2v\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2vtrace\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2vtrace) "
- ":: map_w2vtrace\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_wchi\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_wchi) "
- ":: map_wchi\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_wtheta\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_wtheta) "
- ":: map_wtheta\n"
- " INTEGER(KIND=i_def), intent(in) :: undf_w0, undf_w2, ndf_w1, "
- "ndf_w3, undf_wtheta, ndf_w2h, undf_w2v, ndf_w2broken, undf_wchi, "
- "ndf_w2trace, undf_w2vtrace, ndf_w2htrace\n" in generated_code)
-
- for space in ["w2h", "wtheta"]:
- assert (f"REAL(KIND=r_def), intent(in), dimension(3,ndf_w0,"
- f"ndf_{space}) :: diff_basis_w0_on_{space}" in generated_code)
- assert (f"REAL(KIND=r_def), intent(in), dimension(1,ndf_w2,"
- f"ndf_{space}) :: diff_basis_w2_on_{space}" in generated_code)
- assert (f"REAL(KIND=r_def), intent(in), dimension(3,ndf_w1,"
- f"ndf_{space}) :: diff_basis_w1_on_{space}" in generated_code)
- assert (f"REAL(KIND=r_def), intent(in), dimension(3,ndf_w3,"
- f"ndf_{space}) :: diff_basis_w3_on_{space}" in generated_code)
- assert (f"REAL(KIND=r_def), intent(in), dimension(3,ndf_wtheta,"
- f"ndf_{space}) :: diff_basis_wtheta_on_{space}" in
- generated_code)
- assert (f"REAL(KIND=r_def), intent(in), dimension(1,ndf_w2h,"
- f"ndf_{space}) :: diff_basis_w2h_on_{space}" in generated_code)
- assert (f"REAL(KIND=r_def), intent(in), dimension(1,ndf_w2v,"
- f"ndf_{space}) :: diff_basis_w2v_on_{space}" in generated_code)
- assert (f"REAL(KIND=r_def), intent(in), dimension(1,ndf_w2broken,"
- f"ndf_{space}) :: diff_basis_w2broken_on_{space}" in
- generated_code)
- assert (f"REAL(KIND=r_def), intent(in), dimension(3,ndf_wchi,"
- f"ndf_{space}) :: diff_basis_wchi_on_{space}" in
- generated_code)
- assert (f"REAL(KIND=r_def), intent(in), dimension(3,ndf_w2trace,"
- f"ndf_{space}) :: diff_basis_w2trace_on_{space}" in
- generated_code)
- assert (f"REAL(KIND=r_def), intent(in), dimension(3,ndf_w2vtrace,"
- f"ndf_{space}) :: diff_basis_w2vtrace_on_{space}" in
- generated_code)
- assert (f"REAL(KIND=r_def), intent(in), dimension(3,ndf_w2htrace,"
- f"ndf_{space}) :: diff_basis_w2htrace_on_{space}" in
- generated_code)
+ assert """\
+ use constants_mod
+ integer(kind=i_def), intent(in) :: nlayers
+ integer(kind=i_def), intent(in) :: ndf_w0
+ integer(kind=i_def), dimension(ndf_w0), intent(in) :: map_w0
+ integer(kind=i_def), intent(in) :: ndf_w2
+ integer(kind=i_def), dimension(ndf_w2), intent(in) :: map_w2
+ integer(kind=i_def), intent(in) :: ndf_w2v
+ integer(kind=i_def), dimension(ndf_w2v), intent(in) :: map_w2v
+ integer(kind=i_def), intent(in) :: ndf_w2vtrace
+ integer(kind=i_def), dimension(ndf_w2vtrace), intent(in) :: map_w2vtrace
+ integer(kind=i_def), intent(in) :: ndf_wchi
+ integer(kind=i_def), dimension(ndf_wchi), intent(in) :: map_wchi
+ integer(kind=i_def), intent(in) :: ndf_wtheta
+ integer(kind=i_def), dimension(ndf_wtheta), intent(in) :: map_wtheta
+ integer(kind=i_def), intent(in) :: undf_w0
+ integer(kind=i_def), intent(in) :: undf_w2
+ integer(kind=i_def), intent(in) :: ndf_w1
+ integer(kind=i_def), intent(in) :: ndf_w3
+ integer(kind=i_def), intent(in) :: undf_wtheta
+ integer(kind=i_def), intent(in) :: ndf_w2h
+ integer(kind=i_def), intent(in) :: undf_w2v
+ integer(kind=i_def), intent(in) :: ndf_w2broken
+ integer(kind=i_def), intent(in) :: undf_wchi
+ integer(kind=i_def), intent(in) :: ndf_w2trace
+ integer(kind=i_def), intent(in) :: undf_w2vtrace
+ integer(kind=i_def), intent(in) :: ndf_w2htrace
+ real(kind=r_def), dimension(undf_w0), intent(in) :: field_1_w0
+ real(kind=r_def), dimension(undf_w2), intent(in) :: field_3_w2
+ real(kind=r_def), dimension(undf_wtheta), intent(in) :: field_5_wtheta
+ real(kind=r_def), dimension(undf_w2v), intent(in) :: field_7_w2v
+ real(kind=r_def), dimension(undf_wchi), intent(in) :: field_9_wchi
+ real(kind=r_def), dimension(undf_w2vtrace), intent(in) :: field_11_w2vtrace
+ integer(kind=i_def), intent(in) :: cell
+ integer(kind=i_def), intent(in) :: op_2_ncell_3d
+ real(kind=r_def), dimension(op_2_ncell_3d,ndf_w2,ndf_w1), intent(inout) \
+:: op_2
+ integer(kind=i_def), intent(in) :: op_4_ncell_3d
+ real(kind=r_def), dimension(op_4_ncell_3d,ndf_w3,ndf_w3), intent(in) \
+:: op_4
+ integer(kind=i_def), intent(in) :: op_6_ncell_3d
+ real(kind=r_def), dimension(op_6_ncell_3d,ndf_w2h,ndf_w2h), intent(in) \
+:: op_6
+ integer(kind=i_def), intent(in) :: op_8_ncell_3d
+ real(kind=r_def), dimension(op_8_ncell_3d,ndf_w2broken,ndf_w2broken), \
+intent(in) :: op_8
+ integer(kind=i_def), intent(in) :: op_10_ncell_3d
+ real(kind=r_def), dimension(op_10_ncell_3d,ndf_w2trace,ndf_w2trace), \
+intent(in) :: op_10
+ integer(kind=i_def), intent(in) :: op_12_ncell_3d
+ real(kind=r_def), dimension(op_12_ncell_3d,ndf_w2htrace,ndf_w2htrace), \
+intent(in) :: op_12
+ real(kind=r_def), dimension(3,ndf_w0,ndf_w2h), intent(in) :: \
+diff_basis_w0_on_w2h
+ real(kind=r_def), dimension(3,ndf_w0,ndf_wtheta), intent(in) :: \
+diff_basis_w0_on_wtheta
+ real(kind=r_def), dimension(3,ndf_w1,ndf_w2h), intent(in) :: \
+diff_basis_w1_on_w2h
+ real(kind=r_def), dimension(3,ndf_w1,ndf_wtheta), intent(in) :: \
+diff_basis_w1_on_wtheta
+ real(kind=r_def), dimension(1,ndf_w2,ndf_w2h), intent(in) :: \
+diff_basis_w2_on_w2h
+ real(kind=r_def), dimension(1,ndf_w2,ndf_wtheta), intent(in) :: \
+diff_basis_w2_on_wtheta
+ real(kind=r_def), dimension(3,ndf_w3,ndf_w2h), intent(in) :: \
+diff_basis_w3_on_w2h
+ real(kind=r_def), dimension(3,ndf_w3,ndf_wtheta), intent(in) :: \
+diff_basis_w3_on_wtheta
+ real(kind=r_def), dimension(3,ndf_wtheta,ndf_w2h), intent(in) :: \
+diff_basis_wtheta_on_w2h
+ real(kind=r_def), dimension(3,ndf_wtheta,ndf_wtheta), intent(in) :: \
+diff_basis_wtheta_on_wtheta
+ real(kind=r_def), dimension(1,ndf_w2h,ndf_w2h), intent(in) :: \
+diff_basis_w2h_on_w2h
+ real(kind=r_def), dimension(1,ndf_w2h,ndf_wtheta), intent(in) :: \
+diff_basis_w2h_on_wtheta
+ real(kind=r_def), dimension(1,ndf_w2v,ndf_w2h), intent(in) :: \
+diff_basis_w2v_on_w2h
+ real(kind=r_def), dimension(1,ndf_w2v,ndf_wtheta), intent(in) :: \
+diff_basis_w2v_on_wtheta
+ real(kind=r_def), dimension(1,ndf_w2broken,ndf_w2h), intent(in) :: \
+diff_basis_w2broken_on_w2h
+ real(kind=r_def), dimension(1,ndf_w2broken,ndf_wtheta), intent(in) :: \
+diff_basis_w2broken_on_wtheta
+ real(kind=r_def), dimension(3,ndf_wchi,ndf_w2h), intent(in) :: \
+diff_basis_wchi_on_w2h
+ real(kind=r_def), dimension(3,ndf_wchi,ndf_wtheta), intent(in) :: \
+diff_basis_wchi_on_wtheta
+ real(kind=r_def), dimension(3,ndf_w2trace,ndf_w2h), intent(in) :: \
+diff_basis_w2trace_on_w2h
+ real(kind=r_def), dimension(3,ndf_w2trace,ndf_wtheta), intent(in) :: \
+diff_basis_w2trace_on_wtheta
+ real(kind=r_def), dimension(3,ndf_w2vtrace,ndf_w2h), intent(in) :: \
+diff_basis_w2vtrace_on_w2h
+ real(kind=r_def), dimension(3,ndf_w2vtrace,ndf_wtheta), intent(in) :: \
+diff_basis_w2vtrace_on_wtheta
+ real(kind=r_def), dimension(3,ndf_w2htrace,ndf_w2h), intent(in) :: \
+diff_basis_w2htrace_on_w2h
+ real(kind=r_def), dimension(3,ndf_w2htrace,ndf_wtheta), intent(in) :: \
+diff_basis_w2htrace_on_wtheta
+""" in generated_code
DIFF_BASIS_UNSUPPORTED_SPACE = '''
@@ -2065,7 +2175,7 @@ def test_diff_basis_unsupp_space():
def test_dynbasisfns_unsupp_qr(monkeypatch):
''' Check that the expected error is raised in
- DynBasisFunctions._stub_declarations() if an un-supported quadrature
+    DynBasisFunctions.stub_declarations() if an unsupported quadrature
shape is encountered. '''
ast = fpapi.parse(DIFF_BASIS, ignore_comments=False)
metadata = LFRicKernMetadata(ast)
@@ -2075,7 +2185,7 @@ def test_dynbasisfns_unsupp_qr(monkeypatch):
monkeypatch.setattr(
dbasis, "_qr_vars", {"unsupported-shape": None})
with pytest.raises(InternalError) as err:
- dbasis._stub_declarations(ModuleGen(name="my_mod"))
+ dbasis.stub_declarations()
assert ("Quadrature shapes other than ['gh_quadrature_xyoz', "
"'gh_quadrature_face', 'gh_quadrature_edge'] are not yet "
"supported - got: 'unsupported-shape'" in str(err.value))
diff --git a/src/psyclone/tests/dynamo0p3_cma_test.py b/src/psyclone/tests/dynamo0p3_cma_test.py
index e704e45de6..7f112b2faa 100644
--- a/src/psyclone/tests/dynamo0p3_cma_test.py
+++ b/src/psyclone/tests/dynamo0p3_cma_test.py
@@ -820,25 +820,27 @@ def test_cma_asm(tmpdir, dist_mem):
distributed_memory=dist_mem).create(invoke_info)
code = str(psy.gen)
- output = (
- " USE operator_mod, ONLY: operator_type, operator_proxy_type\n"
- " USE columnwise_operator_mod, ONLY: columnwise_operator_type, "
- "columnwise_operator_proxy_type\n")
- assert output in code
- assert "TYPE(operator_proxy_type) lma_op1_proxy" in code
- assert ("REAL(KIND=r_def), pointer, dimension(:,:,:) :: "
+ assert ("use operator_mod, only : operator_proxy_type, operator_type\n"
+ in code)
+ assert ("use columnwise_operator_mod, only : columnwise_operator_proxy_"
+ "type, columnwise_operator_type\n" in code)
+ assert "type(operator_proxy_type) :: lma_op1_proxy\n" in code
+ assert ("real(kind=r_def), pointer, dimension(:,:,:) :: "
"lma_op1_local_stencil => null()" in code)
- assert "TYPE(columnwise_operator_type), intent(in) :: cma_op1" in code
- assert "TYPE(columnwise_operator_proxy_type) cma_op1_proxy" in code
- assert ("REAL(KIND=r_solver), pointer, dimension(:,:,:) :: "
+ assert ("type(columnwise_operator_type), intent(inout) :: cma_op1"
+ in code)
+ assert "type(columnwise_operator_proxy_type) :: cma_op1_proxy" in code
+ assert ("real(kind=r_solver), pointer, dimension(:,:,:) :: "
"cma_op1_cma_matrix => null()" in code)
- assert "TYPE(mesh_type), pointer :: mesh => null()" in code
- assert "INTEGER(KIND=i_def) ncell_2d" in code
- assert ("INTEGER(KIND=i_def), pointer :: cbanded_map_adspc1_lma_op1(:,:) "
- "=> null(), cbanded_map_adspc2_lma_op1(:,:) => null()") in code
+ assert "type(mesh_type), pointer :: mesh => null()" in code
+ assert "integer(kind=i_def) :: ncell_2d" in code
+ assert ("integer(kind=i_def), pointer :: cbanded_map_adspc1_lma_op1(:,:) "
+ "=> null()") in code
+ assert ("integer(kind=i_def), pointer :: cbanded_map_adspc2_lma_op1(:,:) "
+ "=> null()") in code
assert "ncell_2d = mesh%get_ncells_2d" in code
assert "cma_op1_proxy = cma_op1%get_proxy()" in code
- assert ("CALL columnwise_op_asm_kernel_code(cell, nlayers_lma_op1, "
+ assert ("call columnwise_op_asm_kernel_code(cell, nlayers_lma_op1, "
"ncell_2d, lma_op1_proxy%ncell_3d, lma_op1_local_stencil, "
"cma_op1_cma_matrix(:,:,:), cma_op1_nrow, cma_op1_ncol, "
"cma_op1_bandwidth, cma_op1_alpha, cma_op1_beta, cma_op1_gamma_m, "
@@ -861,23 +863,23 @@ def test_cma_asm_field(tmpdir, dist_mem):
distributed_memory=dist_mem).create(invoke_info)
code = str(psy.gen)
- output = (
- " USE operator_mod, ONLY: operator_type, operator_proxy_type\n"
- " USE columnwise_operator_mod, ONLY: columnwise_operator_type, "
- "columnwise_operator_proxy_type\n")
- assert output in code
- assert "TYPE(operator_proxy_type) lma_op1_proxy" in code
- assert "TYPE(columnwise_operator_type), intent(in) :: cma_op1" in code
- assert "TYPE(columnwise_operator_proxy_type) cma_op1_proxy" in code
- assert ("INTEGER(KIND=i_def), pointer :: "
- "cbanded_map_aspc1_afield(:,:) => null(), "
- "cbanded_map_aspc2_lma_op1(:,:) => null()" in code)
- assert "INTEGER(KIND=i_def) ncell_2d" in code
- assert "mesh => afield_proxy%vspace%get_mesh()" in code
+ assert ("use operator_mod, only : operator_proxy_type, operator_type\n"
+ in code)
+ assert ("use columnwise_operator_mod, only : columnwise_operator_proxy_"
+ "type, columnwise_operator_type\n" in code)
+ assert "type(operator_proxy_type) :: lma_op1_proxy\n" in code
+ assert "type(columnwise_operator_type), intent(inout) :: cma_op1\n" in code
+ assert "type(columnwise_operator_proxy_type) :: cma_op1_proxy\n" in code
+ assert ("integer(kind=i_def), pointer :: "
+ "cbanded_map_aspc1_afield(:,:) => null()\n" in code)
+ assert ("integer(kind=i_def), pointer :: "
+ "cbanded_map_aspc2_lma_op1(:,:) => null()\n" in code)
+ assert "integer(kind=i_def) :: ncell_2d" in code
+ assert "mesh => afield_proxy%vspace%get_mesh()\n" in code
assert "ncell_2d = mesh%get_ncells_2d()" in code
- assert "cma_op1_proxy = cma_op1%get_proxy()" in code
+ assert "cma_op1_proxy = cma_op1%get_proxy()\n" in code
expected = (
- "CALL columnwise_op_asm_field_kernel_code(cell, nlayers_afield, "
+ "call columnwise_op_asm_field_kernel_code(cell, nlayers_afield, "
"ncell_2d, afield_data, lma_op1_proxy%ncell_3d, "
"lma_op1_local_stencil, cma_op1_cma_matrix(:,:,:), cma_op1_nrow, "
"cma_op1_ncol, cma_op1_bandwidth, cma_op1_alpha, cma_op1_beta, "
@@ -905,21 +907,21 @@ def test_cma_asm_scalar(dist_mem, tmpdir):
distributed_memory=dist_mem).create(invoke_info)
code = str(psy.gen)
- output = (
- " USE operator_mod, ONLY: operator_type, operator_proxy_type\n"
- " USE columnwise_operator_mod, ONLY: columnwise_operator_type, "
- "columnwise_operator_proxy_type\n")
- assert output in code
- assert "TYPE(operator_proxy_type) lma_op1_proxy" in code
- assert "TYPE(columnwise_operator_type), intent(in) :: cma_op1" in code
- assert "TYPE(columnwise_operator_proxy_type) cma_op1_proxy" in code
- assert ("INTEGER(KIND=i_def), pointer :: "
- "cbanded_map_aspc1_lma_op1(:,:) => null(), "
+ assert ("use operator_mod, only : operator_proxy_type, operator_type\n"
+ in code)
+ assert ("use columnwise_operator_mod, only : columnwise_operator_proxy_"
+ "type, columnwise_operator_type\n" in code)
+ assert "type(operator_proxy_type) :: lma_op1_proxy" in code
+ assert "type(columnwise_operator_type), intent(inout) :: cma_op1" in code
+ assert "type(columnwise_operator_proxy_type) :: cma_op1_proxy" in code
+ assert ("integer(kind=i_def), pointer :: "
+ "cbanded_map_aspc1_lma_op1(:,:) => null()" in code)
+ assert ("integer(kind=i_def), pointer :: "
"cbanded_map_aspc2_lma_op1(:,:) => null()" in code)
- assert "INTEGER(KIND=i_def) ncell_2d" in code
+ assert "integer(kind=i_def) :: ncell_2d" in code
assert "ncell_2d = mesh%get_ncells_2d()" in code
assert "cma_op1_proxy = cma_op1%get_proxy()" in code
- expected = ("CALL columnwise_op_asm_kernel_scalar_code(cell, "
+ expected = ("call columnwise_op_asm_kernel_scalar_code(cell, "
"nlayers_lma_op1, ncell_2d, lma_op1_proxy%ncell_3d, "
"lma_op1_local_stencil, cma_op1_cma_matrix(:,:,:), "
"cma_op1_nrow, cma_op1_ncol, cma_op1_bandwidth, "
@@ -948,18 +950,17 @@ def test_cma_asm_field_same_fs(dist_mem, tmpdir):
distributed_memory=dist_mem).create(invoke_info)
code = str(psy.gen)
- output = (
- " USE operator_mod, ONLY: operator_type, operator_proxy_type\n"
- " USE columnwise_operator_mod, ONLY: columnwise_operator_type, "
- "columnwise_operator_proxy_type\n")
- assert output in code
- assert "TYPE(operator_proxy_type) lma_op1_proxy" in code
- assert ("TYPE(columnwise_operator_type), intent(in) :: cma_op1"
+ assert ("use operator_mod, only : operator_proxy_type, operator_type\n"
+ in code)
+ assert ("use columnwise_operator_mod, only : columnwise_operator_proxy_"
+ "type, columnwise_operator_type\n" in code)
+ assert "type(operator_proxy_type) :: lma_op1_proxy" in code
+ assert ("type(columnwise_operator_type), intent(inout) :: cma_op1"
in code)
- assert "TYPE(columnwise_operator_proxy_type) cma_op1_proxy" in code
- assert ("INTEGER(KIND=i_def), pointer :: "
+ assert "type(columnwise_operator_proxy_type) :: cma_op1_proxy" in code
+ assert ("integer(kind=i_def), pointer :: "
"cbanded_map_aspc2_lma_op1(:,:) => null()\n" in code)
- assert "INTEGER(KIND=i_def) ncell_2d" in code
+ assert "integer(kind=i_def) :: ncell_2d" in code
assert "mesh => lma_op1_proxy%fs_from%get_mesh()" in code
assert "ncell_2d = mesh%get_ncells_2d()" in code
assert "cma_op1_proxy = cma_op1%get_proxy()" in code
@@ -969,8 +970,8 @@ def test_cma_asm_field_same_fs(dist_mem, tmpdir):
assert "loop0_stop = mesh%get_last_halo_cell(1)\n" in code
else:
assert "loop0_stop = cma_op1_proxy%fs_from%get_ncell()\n" in code
- assert "DO cell = loop0_start, loop0_stop, 1\n" in code
- expected = ("CALL columnwise_op_asm_same_fs_kernel_code(cell, "
+ assert "do cell = loop0_start, loop0_stop, 1\n" in code
+ expected = ("call columnwise_op_asm_same_fs_kernel_code(cell, "
"nlayers_lma_op1, ncell_2d, lma_op1_proxy%ncell_3d, "
"lma_op1_local_stencil, afield_data, "
"cma_op1_cma_matrix(:,:,:), cma_op1_nrow, cma_op1_bandwidth, "
@@ -997,21 +998,22 @@ def test_cma_apply(tmpdir, dist_mem):
distributed_memory=dist_mem).create(invoke_info)
code = str(psy.gen)
- assert "INTEGER(KIND=i_def) ncell_2d" in code
- assert "TYPE(columnwise_operator_proxy_type) cma_op1_proxy" in code
+ assert "integer(kind=i_def) :: ncell_2d" in code
+ assert "type(columnwise_operator_proxy_type) :: cma_op1_proxy" in code
assert "mesh => field_a_proxy%vspace%get_mesh()" in code
assert "ncell_2d = mesh%get_ncells_2d()" in code
- assert ("INTEGER(KIND=i_def), pointer :: cma_indirection_map_aspc1_"
- "field_a(:) => null(), "
+ assert ("integer(kind=i_def), pointer :: cma_indirection_map_aspc1_"
+ "field_a(:) => null()" in code)
+ assert ("integer(kind=i_def), pointer :: "
"cma_indirection_map_aspc2_field_b(:) => null()\n") in code
assert ("ndf_aspc1_field_a = field_a_proxy%vspace%get_ndf()\n"
- " undf_aspc1_field_a = field_a_proxy%vspace%"
+ " undf_aspc1_field_a = field_a_proxy%vspace%"
"get_undf()") in code
assert ("cma_indirection_map_aspc1_field_a => "
"cma_op1_proxy%indirection_dofmap_to") in code
assert ("cma_indirection_map_aspc2_field_b => "
"cma_op1_proxy%indirection_dofmap_from") in code
- assert ("CALL columnwise_op_app_kernel_code(cell, ncell_2d, "
+ assert ("call columnwise_op_app_kernel_code(cell, ncell_2d, "
"field_a_data, field_b_data, cma_op1_cma_matrix(:,:,:), "
"cma_op1_nrow, cma_op1_ncol, cma_op1_bandwidth, cma_op1_alpha, "
"cma_op1_beta, cma_op1_gamma_m, cma_op1_gamma_p, "
@@ -1040,25 +1042,27 @@ def test_cma_apply_discontinuous_spaces(tmpdir, dist_mem):
code = str(psy.gen)
# Check any_discontinuous_space_1
- assert "INTEGER(KIND=i_def) ncell_2d" in code
- assert "TYPE(columnwise_operator_proxy_type) cma_op1_proxy" in code
- assert ("INTEGER(KIND=i_def), pointer :: "
- "cma_indirection_map_adspc1_field_a(:) => null(), "
+ assert "integer(kind=i_def) :: ncell_2d" in code
+ assert "type(columnwise_operator_proxy_type) :: cma_op1_proxy" in code
+ assert ("integer(kind=i_def), pointer :: "
+ "cma_indirection_map_adspc1_field_a(:) => null()") in code
+ assert ("integer(kind=i_def), pointer :: "
"cma_indirection_map_aspc1_field_b(:) => null()\n") in code
assert ("ndf_adspc1_field_a = field_a_proxy%vspace%get_ndf()\n"
- " undf_adspc1_field_a = "
+ " undf_adspc1_field_a = "
"field_a_proxy%vspace%get_undf()") in code
assert ("cma_indirection_map_adspc1_field_a => "
"cma_op1_proxy%indirection_dofmap_to") in code
# Check w2v
- assert "TYPE(columnwise_operator_proxy_type) cma_op2_proxy" in code
+ assert "type(columnwise_operator_proxy_type) :: cma_op2_proxy" in code
assert "mesh => field_a_proxy%vspace%get_mesh()" in code
assert "ncell_2d = mesh%get_ncells_2d()" in code
- assert ("INTEGER(KIND=i_def), pointer :: "
- "cma_indirection_map_w2v(:) => null(), "
+ assert ("integer(kind=i_def), pointer :: "
+ "cma_indirection_map_w2v(:) => null()") in code
+ assert ("integer(kind=i_def), pointer :: "
"cma_indirection_map_aspc2_field_d(:) => null()\n") in code
assert ("ndf_w2v = field_c_proxy%vspace%get_ndf()\n"
- " undf_w2v = field_c_proxy%vspace%get_undf()") in code
+ " undf_w2v = field_c_proxy%vspace%get_undf()") in code
assert ("cma_indirection_map_w2v => "
"cma_op2_proxy%indirection_dofmap_to") in code
if dist_mem:
@@ -1071,7 +1075,7 @@ def test_cma_apply_discontinuous_spaces(tmpdir, dist_mem):
assert "loop0_stop = field_c_proxy%vspace%get_ncell()" in code
# Check any_discontinuous_space_1
- assert ("CALL columnwise_op_app_anydspace_kernel_code(cell, "
+ assert ("call columnwise_op_app_anydspace_kernel_code(cell, "
"ncell_2d, field_a_data, field_b_data, "
"cma_op1_cma_matrix(:,:,:), cma_op1_nrow, cma_op1_ncol, "
"cma_op1_bandwidth, cma_op1_alpha, cma_op1_beta, "
@@ -1081,7 +1085,7 @@ def test_cma_apply_discontinuous_spaces(tmpdir, dist_mem):
"undf_aspc1_field_b, map_aspc1_field_b(:,cell), "
"cma_indirection_map_aspc1_field_b") in code
# Check w2v
- assert ("CALL columnwise_op_app_w2v_kernel_code(cell, ncell_2d, "
+ assert ("call columnwise_op_app_w2v_kernel_code(cell, ncell_2d, "
"field_c_data, field_d_data, cma_op2_cma_matrix(:,:,:), "
"cma_op2_nrow, cma_op2_ncol, cma_op2_bandwidth, cma_op2_alpha, "
"cma_op2_beta, cma_op2_gamma_m, cma_op2_gamma_p, ndf_w2v, "
@@ -1091,10 +1095,10 @@ def test_cma_apply_discontinuous_spaces(tmpdir, dist_mem):
if dist_mem:
# Check any_discontinuous_space_1
- assert "CALL field_a_proxy%set_dirty()" in code
+ assert "call field_a_proxy%set_dirty()" in code
assert "cma_op1_proxy%is_dirty(" not in code
# Check w2v
- assert "CALL field_c_proxy%set_dirty()" in code
+ assert "call field_c_proxy%set_dirty()" in code
assert "cma_op2_proxy%is_dirty(" not in code
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -1114,18 +1118,18 @@ def test_cma_apply_same_space(dist_mem, tmpdir):
distributed_memory=dist_mem).create(invoke_info)
code = str(psy.gen)
- assert "INTEGER(KIND=i_def) ncell_2d" in code
- assert "TYPE(columnwise_operator_proxy_type) cma_op1_proxy" in code
+ assert "integer(kind=i_def) :: ncell_2d" in code
+ assert "type(columnwise_operator_proxy_type) :: cma_op1_proxy" in code
assert "mesh => field_a_proxy%vspace%get_mesh()" in code
assert "ncell_2d = mesh%get_ncells_2d()" in code
- assert ("INTEGER(KIND=i_def), pointer :: cma_indirection_map_aspc2_"
+ assert ("integer(kind=i_def), pointer :: cma_indirection_map_aspc2_"
"field_a(:) => null()\n") in code
assert ("ndf_aspc2_field_a = field_a_proxy%vspace%get_ndf()\n"
- " undf_aspc2_field_a = field_a_proxy%vspace%"
+ " undf_aspc2_field_a = field_a_proxy%vspace%"
"get_undf()") in code
assert ("cma_indirection_map_aspc2_field_a => "
"cma_op1_proxy%indirection_dofmap_to") in code
- assert ("CALL columnwise_op_app_same_fs_kernel_code(cell, ncell_2d, "
+ assert ("call columnwise_op_app_same_fs_kernel_code(cell, ncell_2d, "
"field_a_data, field_b_data, "
"cma_op1_cma_matrix(:,:,:), cma_op1_nrow, "
"cma_op1_bandwidth, cma_op1_alpha, "
@@ -1134,7 +1138,7 @@ def test_cma_apply_same_space(dist_mem, tmpdir):
"map_aspc2_field_a(:,cell), "
"cma_indirection_map_aspc2_field_a)") in code
if dist_mem:
- assert "CALL field_a_proxy%set_dirty()" in code
+ assert "call field_a_proxy%set_dirty()" in code
assert "cma_op1_proxy%is_dirty(" not in code
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -1151,7 +1155,7 @@ def test_cma_matrix_matrix(tmpdir, dist_mem):
distributed_memory=dist_mem).create(invoke_info)
code = str(psy.gen)
- assert "INTEGER(KIND=i_def) ncell_2d" in code
+ assert "integer(kind=i_def) :: ncell_2d" in code
assert "mesh => cma_opa_proxy%fs_from%get_mesh()" in code
assert "ncell_2d = mesh%get_ncells_2d()" in code
@@ -1162,7 +1166,7 @@ def test_cma_matrix_matrix(tmpdir, dist_mem):
else:
assert "loop0_stop = cma_opc_proxy%fs_from%get_ncell()\n" in code
- assert ("CALL columnwise_op_mul_kernel_code(cell, "
+ assert ("call columnwise_op_mul_kernel_code(cell, "
"ncell_2d, "
"cma_opa_cma_matrix(:,:,:), cma_opa_nrow, cma_opa_ncol, "
"cma_opa_bandwidth, cma_opa_alpha, "
@@ -1190,7 +1194,7 @@ def test_cma_matrix_matrix_2scalars(tmpdir, dist_mem):
psy = PSyFactory(TEST_API,
distributed_memory=dist_mem).create(invoke_info)
code = str(psy.gen)
- assert "INTEGER(KIND=i_def) ncell_2d" in code
+ assert "integer(kind=i_def) :: ncell_2d" in code
assert "mesh => cma_opa_proxy%fs_from%get_mesh()" in code
assert "ncell_2d = mesh%get_ncells_2d()" in code
@@ -1201,7 +1205,7 @@ def test_cma_matrix_matrix_2scalars(tmpdir, dist_mem):
else:
assert "loop0_stop = cma_opc_proxy%fs_from%get_ncell()\n" in code
- assert ("CALL columnwise_op_mul_2scalars_kernel_code(cell, "
+ assert ("call columnwise_op_mul_2scalars_kernel_code(cell, "
"ncell_2d, "
"cma_opa_cma_matrix(:,:,:), cma_opa_nrow, cma_opa_ncol, "
"cma_opa_bandwidth, cma_opa_alpha, "
@@ -1232,17 +1236,17 @@ def test_cma_multi_kernel(tmpdir, dist_mem):
distributed_memory=dist_mem).create(invoke_info)
code = str(psy.gen)
- assert (" afield_proxy = afield%get_proxy()\n"
- " afield_data => afield_proxy%data\n"
- " lma_op1_proxy = lma_op1%get_proxy()\n"
- " lma_op1_local_stencil => lma_op1_proxy%local_stencil\n"
- " cma_op1_proxy = cma_op1%get_proxy()\n"
- " field_a_proxy = field_a%get_proxy()\n"
- " field_a_data => field_a_proxy%data\n"
- " field_b_proxy = field_b%get_proxy()\n"
- " field_b_data => field_b_proxy%data\n"
- " cma_opb_proxy = cma_opb%get_proxy()\n"
- " cma_opc_proxy = cma_opc%get_proxy()\n") in code
+ assert (" afield_proxy = afield%get_proxy()\n"
+ " afield_data => afield_proxy%data\n"
+ " lma_op1_proxy = lma_op1%get_proxy()\n"
+ " lma_op1_local_stencil => lma_op1_proxy%local_stencil\n"
+ " cma_op1_proxy = cma_op1%get_proxy()\n"
+ " field_a_proxy = field_a%get_proxy()\n"
+ " field_a_data => field_a_proxy%data\n"
+ " field_b_proxy = field_b%get_proxy()\n"
+ " field_b_data => field_b_proxy%data\n"
+ " cma_opb_proxy = cma_opb%get_proxy()\n"
+ " cma_opc_proxy = cma_opc%get_proxy()\n") in code
assert "cma_op1_cma_matrix => cma_op1_proxy%columnwise_matrix\n" in code
assert "cma_op1_ncol = cma_op1_proxy%ncol\n" in code
@@ -1251,13 +1255,13 @@ def test_cma_multi_kernel(tmpdir, dist_mem):
assert "cma_op1_alpha = cma_op1_proxy%alpha\n" in code
assert "cma_op1_beta = cma_op1_proxy%beta\n" in code
- assert (" cbanded_map_aspc1_afield => "
+ assert (" cbanded_map_aspc1_afield => "
"cma_op1_proxy%column_banded_dofmap_to\n"
- " cbanded_map_aspc2_lma_op1 => "
+ " cbanded_map_aspc2_lma_op1 => "
"cma_op1_proxy%column_banded_dofmap_from\n") in code
assert ("cma_indirection_map_aspc1_field_a => "
"cma_op1_proxy%indirection_dofmap_to\n"
- " cma_indirection_map_aspc2_field_b => "
+ " cma_indirection_map_aspc2_field_b => "
"cma_op1_proxy%indirection_dofmap_from\n") in code
if dist_mem:
@@ -1267,14 +1271,14 @@ def test_cma_multi_kernel(tmpdir, dist_mem):
# the worst and also loop out to L1 for it too.
assert code.count("_stop = mesh%get_last_halo_cell(1)\n") == 3
else:
- assert (" loop0_stop = cma_op1_proxy%fs_from%get_ncell()\n"
- " loop1_start = 1\n"
- " loop1_stop = field_a_proxy%vspace%get_ncell()\n"
- " loop2_start = 1\n"
- " loop2_stop = cma_opc_proxy%fs_from%get_ncell()\n"
+ assert (" loop0_stop = cma_op1_proxy%fs_from%get_ncell()\n"
+ " loop1_start = 1\n"
+ " loop1_stop = field_a_proxy%vspace%get_ncell()\n"
+ " loop2_start = 1\n"
+ " loop2_stop = cma_opc_proxy%fs_from%get_ncell()\n"
in code)
- assert ("CALL columnwise_op_asm_field_kernel_code(cell, nlayers_afield, "
+ assert ("call columnwise_op_asm_field_kernel_code(cell, nlayers_afield, "
"ncell_2d, afield_data, lma_op1_proxy%ncell_3d, "
"lma_op1_local_stencil, cma_op1_cma_matrix(:,:,:), cma_op1_nrow, "
"cma_op1_ncol, cma_op1_bandwidth, cma_op1_alpha, cma_op1_beta, "
@@ -1282,7 +1286,7 @@ def test_cma_multi_kernel(tmpdir, dist_mem):
"undf_aspc1_afield, map_aspc1_afield(:,cell), "
"cbanded_map_aspc1_afield, ndf_aspc2_lma_op1, "
"cbanded_map_aspc2_lma_op1)") in code
- assert ("CALL columnwise_op_app_kernel_code(cell, ncell_2d, "
+ assert ("call columnwise_op_app_kernel_code(cell, ncell_2d, "
"field_a_data, field_b_data, cma_op1_cma_matrix(:,:,:), "
"cma_op1_nrow, cma_op1_ncol, cma_op1_bandwidth, cma_op1_alpha, "
"cma_op1_beta, cma_op1_gamma_m, cma_op1_gamma_p, "
@@ -1291,7 +1295,7 @@ def test_cma_multi_kernel(tmpdir, dist_mem):
"ndf_aspc2_field_b, undf_aspc2_field_b, "
"map_aspc2_field_b(:,cell), "
"cma_indirection_map_aspc2_field_b)\n") in code
- assert ("CALL columnwise_op_mul_kernel_code(cell, ncell_2d, "
+ assert ("call columnwise_op_mul_kernel_code(cell, ncell_2d, "
"cma_op1_cma_matrix(:,:,:), cma_op1_nrow, cma_op1_ncol, "
"cma_op1_bandwidth, cma_op1_alpha, cma_op1_beta, cma_op1_gamma_m, "
"cma_op1_gamma_p, "
@@ -1314,36 +1318,46 @@ def test_cma_asm_stub_gen():
path = os.path.join(BASE_PATH, "columnwise_op_asm_kernel_mod.F90")
result = generate(path, api=TEST_API)
- expected = (
- " MODULE columnwise_op_asm_kernel_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE columnwise_op_asm_kernel_code(cell, nlayers, "
- "ncell_2d, op_1_ncell_3d, op_1, cma_op_2, cma_op_2_nrow, "
- "cma_op_2_ncol, cma_op_2_bandwidth, cma_op_2_alpha, cma_op_2_beta, "
- "cma_op_2_gamma_m, cma_op_2_gamma_p, ndf_adspc1_op_1, "
- "cbanded_map_adspc1_op_1, ndf_adspc2_op_1, cbanded_map_adspc2_op_1)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_adspc1_op_1\n"
- " INTEGER(KIND=i_def), intent(in), dimension("
- "ndf_adspc1_op_1,nlayers) :: cbanded_map_adspc1_op_1\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_adspc2_op_1\n"
- " INTEGER(KIND=i_def), intent(in), dimension("
- "ndf_adspc2_op_1,nlayers) :: cbanded_map_adspc2_op_1\n"
- " INTEGER(KIND=i_def), intent(in) :: cell, ncell_2d\n"
- " INTEGER(KIND=i_def), intent(in) :: cma_op_2_nrow, "
- "cma_op_2_ncol, cma_op_2_bandwidth, cma_op_2_alpha, cma_op_2_beta, "
- "cma_op_2_gamma_m, cma_op_2_gamma_p\n"
- " REAL(KIND=r_solver), intent(inout), dimension("
- "cma_op_2_bandwidth,cma_op_2_nrow,ncell_2d) :: cma_op_2\n"
- " INTEGER(KIND=i_def), intent(in) :: op_1_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension(op_1_ncell_3d,"
- "ndf_adspc1_op_1,ndf_adspc2_op_1) :: op_1\n"
- " END SUBROUTINE columnwise_op_asm_kernel_code\n"
- " END MODULE columnwise_op_asm_kernel_mod")
- assert expected in str(result)
+ expected = """\
+module columnwise_op_asm_kernel_mod
+ implicit none
+ public
+
+ contains
+ subroutine columnwise_op_asm_kernel_code(cell, nlayers, ncell_2d, \
+op_1_ncell_3d, op_1, cma_op_2, cma_op_2_nrow, cma_op_2_ncol, \
+cma_op_2_bandwidth, cma_op_2_alpha, cma_op_2_beta, cma_op_2_gamma_m, \
+cma_op_2_gamma_p, ndf_adspc1_op_1, cbanded_map_adspc1_op_1, ndf_adspc2_op_1, \
+cbanded_map_adspc2_op_1)
+ use constants_mod
+ integer(kind=i_def), intent(in) :: nlayers
+ integer(kind=i_def), intent(in) :: ndf_adspc1_op_1
+ integer(kind=i_def), dimension(ndf_adspc1_op_1,nlayers), intent(in) :: \
+cbanded_map_adspc1_op_1
+ integer(kind=i_def), intent(in) :: ndf_adspc2_op_1
+ integer(kind=i_def), dimension(ndf_adspc2_op_1,nlayers), intent(in) :: \
+cbanded_map_adspc2_op_1
+ integer(kind=i_def), intent(in) :: cell
+ integer(kind=i_def), intent(in) :: ncell_2d
+ integer(kind=i_def), intent(in) :: cma_op_2_nrow
+ integer(kind=i_def), intent(in) :: cma_op_2_ncol
+ integer(kind=i_def), intent(in) :: cma_op_2_bandwidth
+ integer(kind=i_def), intent(in) :: cma_op_2_alpha
+ integer(kind=i_def), intent(in) :: cma_op_2_beta
+ integer(kind=i_def), intent(in) :: cma_op_2_gamma_m
+ integer(kind=i_def), intent(in) :: cma_op_2_gamma_p
+ real(kind=r_def), dimension(cma_op_2_bandwidth,cma_op_2_nrow,ncell_2d)\
+, intent(inout) :: cma_op_2
+ integer(kind=i_def), intent(in) :: op_1_ncell_3d
+ real(kind=r_def), dimension(op_1_ncell_3d,ndf_adspc1_op_1,ndf_adspc2_op_1)\
+, intent(in) :: op_1
+
+
+ end subroutine columnwise_op_asm_kernel_code
+
+end module columnwise_op_asm_kernel_mod
+"""
+ assert expected == result
def test_cma_asm_with_field_stub_gen():
@@ -1355,42 +1369,52 @@ def test_cma_asm_with_field_stub_gen():
"columnwise_op_asm_field_kernel_mod.F90"),
api=TEST_API)
- expected = (
- " MODULE columnwise_op_asm_field_kernel_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE columnwise_op_asm_field_kernel_code(cell, nlayers, "
- "ncell_2d, field_1_aspc1_field_1, op_2_ncell_3d, op_2, cma_op_3, "
- "cma_op_3_nrow, cma_op_3_ncol, cma_op_3_bandwidth, cma_op_3_alpha, "
- "cma_op_3_beta, cma_op_3_gamma_m, cma_op_3_gamma_p, "
- "ndf_aspc1_field_1, undf_aspc1_field_1, map_aspc1_field_1, "
- "cbanded_map_aspc1_field_1, ndf_aspc2_op_2, cbanded_map_aspc2_op_2)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_aspc1_field_1\n"
- " INTEGER(KIND=i_def), intent(in), "
- "dimension(ndf_aspc1_field_1) :: map_aspc1_field_1\n"
- " INTEGER(KIND=i_def), intent(in), "
- "dimension(ndf_aspc1_field_1,nlayers) :: cbanded_map_aspc1_field_1\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_aspc2_op_2\n"
- " INTEGER(KIND=i_def), intent(in), "
- "dimension(ndf_aspc2_op_2,nlayers) :: cbanded_map_aspc2_op_2\n"
- " INTEGER(KIND=i_def), intent(in) :: undf_aspc1_field_1\n"
- " INTEGER(KIND=i_def), intent(in) :: cell, ncell_2d\n"
- " INTEGER(KIND=i_def), intent(in) :: cma_op_3_nrow, "
- "cma_op_3_ncol, cma_op_3_bandwidth, cma_op_3_alpha, cma_op_3_beta, "
- "cma_op_3_gamma_m, cma_op_3_gamma_p\n"
- " REAL(KIND=r_solver), intent(inout), dimension("
- "cma_op_3_bandwidth,cma_op_3_nrow,ncell_2d) :: cma_op_3\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_aspc1_field_1) :: "
- "field_1_aspc1_field_1\n"
- " INTEGER(KIND=i_def), intent(in) :: op_2_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension("
- "op_2_ncell_3d,ndf_aspc1_field_1,ndf_aspc2_op_2) :: op_2\n"
- " END SUBROUTINE columnwise_op_asm_field_kernel_code\n"
- " END MODULE columnwise_op_asm_field_kernel_mod")
- assert expected in str(result)
+ expected = """\
+module columnwise_op_asm_field_kernel_mod
+ implicit none
+ public
+
+ contains
+ subroutine columnwise_op_asm_field_kernel_code(cell, nlayers, ncell_2d, \
+field_1_aspc1_field_1, op_2_ncell_3d, op_2, cma_op_3, cma_op_3_nrow, \
+cma_op_3_ncol, cma_op_3_bandwidth, cma_op_3_alpha, cma_op_3_beta, \
+cma_op_3_gamma_m, cma_op_3_gamma_p, ndf_aspc1_field_1, undf_aspc1_field_1, \
+map_aspc1_field_1, cbanded_map_aspc1_field_1, ndf_aspc2_op_2, \
+cbanded_map_aspc2_op_2)
+ use constants_mod
+ integer(kind=i_def), intent(in) :: nlayers
+ integer(kind=i_def), intent(in) :: ndf_aspc1_field_1
+ integer(kind=i_def), dimension(ndf_aspc1_field_1), intent(in) :: \
+map_aspc1_field_1
+ integer(kind=i_def), dimension(ndf_aspc1_field_1,nlayers), intent(in) :: \
+cbanded_map_aspc1_field_1
+ integer(kind=i_def), intent(in) :: ndf_aspc2_op_2
+ integer(kind=i_def), dimension(ndf_aspc2_op_2,nlayers), intent(in) :: \
+cbanded_map_aspc2_op_2
+ integer(kind=i_def), intent(in) :: undf_aspc1_field_1
+ integer(kind=i_def), intent(in) :: cell
+ integer(kind=i_def), intent(in) :: ncell_2d
+ integer(kind=i_def), intent(in) :: cma_op_3_nrow
+ integer(kind=i_def), intent(in) :: cma_op_3_ncol
+ integer(kind=i_def), intent(in) :: cma_op_3_bandwidth
+ integer(kind=i_def), intent(in) :: cma_op_3_alpha
+ integer(kind=i_def), intent(in) :: cma_op_3_beta
+ integer(kind=i_def), intent(in) :: cma_op_3_gamma_m
+ integer(kind=i_def), intent(in) :: cma_op_3_gamma_p
+ real(kind=r_def), dimension(cma_op_3_bandwidth,cma_op_3_nrow,ncell_2d)\
+, intent(inout) :: cma_op_3
+ real(kind=r_def), dimension(undf_aspc1_field_1), intent(in) :: \
+field_1_aspc1_field_1
+ integer(kind=i_def), intent(in) :: op_2_ncell_3d
+ real(kind=r_def), dimension(op_2_ncell_3d,ndf_aspc1_field_1,\
+ndf_aspc2_op_2), intent(in) :: op_2
+
+
+ end subroutine columnwise_op_asm_field_kernel_code
+
+end module columnwise_op_asm_field_kernel_mod
+"""
+ assert expected == result
def test_cma_asm_same_fs_stub_gen():
@@ -1402,37 +1426,48 @@ def test_cma_asm_same_fs_stub_gen():
"columnwise_op_asm_same_fs_kernel_mod.F90"),
api=TEST_API)
- expected = (
- " MODULE columnwise_op_asm_same_fs_kernel_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE columnwise_op_asm_same_fs_kernel_code(cell, nlayers, "
- "ncell_2d, op_1_ncell_3d, op_1, field_2_aspc1_op_1, cma_op_3, "
- "cma_op_3_nrow, cma_op_3_bandwidth, cma_op_3_alpha, cma_op_3_beta, "
- "cma_op_3_gamma_m, cma_op_3_gamma_p, ndf_aspc1_op_1, undf_aspc1_op_1, "
- "map_aspc1_op_1, ndf_aspc2_op_1, cbanded_map_aspc2_op_1)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_aspc1_op_1\n"
- " INTEGER(KIND=i_def), intent(in), "
- "dimension(ndf_aspc1_op_1) :: map_aspc1_op_1\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_aspc2_op_1\n"
- " INTEGER(KIND=i_def), intent(in), "
- "dimension(ndf_aspc2_op_1,nlayers) :: cbanded_map_aspc2_op_1\n"
- " INTEGER(KIND=i_def), intent(in) :: undf_aspc1_op_1\n"
- " INTEGER(KIND=i_def), intent(in) :: cell, ncell_2d\n"
- " INTEGER(KIND=i_def), intent(in) :: cma_op_3_nrow, "
- "cma_op_3_bandwidth, cma_op_3_alpha, cma_op_3_beta, "
- "cma_op_3_gamma_m, cma_op_3_gamma_p\n"
- " REAL(KIND=r_solver), intent(inout), dimension("
- "cma_op_3_bandwidth,cma_op_3_nrow,ncell_2d) :: cma_op_3\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_aspc1_op_1) :: "
- "field_2_aspc1_op_1\n"
- " INTEGER(KIND=i_def), intent(in) :: op_1_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension(op_1_ncell_3d,"
- "ndf_aspc1_op_1,ndf_aspc2_op_1) :: op_1\n")
- assert expected in str(result)
+ expected = """\
+module columnwise_op_asm_same_fs_kernel_mod
+ implicit none
+ public
+
+ contains
+ subroutine columnwise_op_asm_same_fs_kernel_code(cell, nlayers, ncell_2d, \
+op_1_ncell_3d, op_1, field_2_aspc1_op_1, cma_op_3, cma_op_3_nrow, \
+cma_op_3_bandwidth, cma_op_3_alpha, cma_op_3_beta, cma_op_3_gamma_m, \
+cma_op_3_gamma_p, ndf_aspc1_op_1, undf_aspc1_op_1, map_aspc1_op_1, \
+ndf_aspc2_op_1, cbanded_map_aspc2_op_1)
+ use constants_mod
+ integer(kind=i_def), intent(in) :: nlayers
+ integer(kind=i_def), intent(in) :: ndf_aspc1_op_1
+ integer(kind=i_def), dimension(ndf_aspc1_op_1), intent(in) :: \
+map_aspc1_op_1
+ integer(kind=i_def), intent(in) :: ndf_aspc2_op_1
+ integer(kind=i_def), dimension(ndf_aspc2_op_1,nlayers), intent(in) :: \
+cbanded_map_aspc2_op_1
+ integer(kind=i_def), intent(in) :: undf_aspc1_op_1
+ integer(kind=i_def), intent(in) :: cell
+ integer(kind=i_def), intent(in) :: ncell_2d
+ integer(kind=i_def), intent(in) :: cma_op_3_nrow
+ integer(kind=i_def), intent(in) :: cma_op_3_bandwidth
+ integer(kind=i_def), intent(in) :: cma_op_3_alpha
+ integer(kind=i_def), intent(in) :: cma_op_3_beta
+ integer(kind=i_def), intent(in) :: cma_op_3_gamma_m
+ integer(kind=i_def), intent(in) :: cma_op_3_gamma_p
+ real(kind=r_def), dimension(cma_op_3_bandwidth,cma_op_3_nrow,ncell_2d)\
+, intent(inout) :: cma_op_3
+ real(kind=r_def), dimension(undf_aspc1_op_1), intent(in) :: \
+field_2_aspc1_op_1
+ integer(kind=i_def), intent(in) :: op_1_ncell_3d
+ real(kind=r_def), dimension(op_1_ncell_3d,ndf_aspc1_op_1,ndf_aspc2_op_1)\
+, intent(in) :: op_1
+
+
+ end subroutine columnwise_op_asm_same_fs_kernel_code
+
+end module columnwise_op_asm_same_fs_kernel_mod
+"""
+ assert expected == result
def test_cma_app_stub_gen():
@@ -1444,46 +1479,53 @@ def test_cma_app_stub_gen():
"columnwise_op_app_kernel_mod.F90"),
api=TEST_API)
- expected = (
- " MODULE columnwise_op_app_kernel_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE columnwise_op_app_kernel_code(cell, ncell_2d, "
- "field_1_aspc1_field_1, field_2_aspc2_field_2, cma_op_3, "
- "cma_op_3_nrow, cma_op_3_ncol, cma_op_3_bandwidth, cma_op_3_alpha, "
- "cma_op_3_beta, cma_op_3_gamma_m, cma_op_3_gamma_p, "
- "ndf_aspc1_field_1, undf_aspc1_field_1, map_aspc1_field_1, "
- "cma_indirection_map_aspc1_field_1, ndf_aspc2_field_2, "
- "undf_aspc2_field_2, map_aspc2_field_2, "
- "cma_indirection_map_aspc2_field_2)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_aspc1_field_1\n"
- " INTEGER(KIND=i_def), intent(in), "
- "dimension(ndf_aspc1_field_1) :: map_aspc1_field_1\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_aspc2_field_2\n"
- " INTEGER(KIND=i_def), intent(in), "
- "dimension(ndf_aspc2_field_2) :: map_aspc2_field_2\n"
- " INTEGER(KIND=i_def), intent(in) :: cma_op_3_nrow\n"
- " INTEGER(KIND=i_def), intent(in), dimension(cma_op_3_nrow) :: "
- "cma_indirection_map_aspc1_field_1\n"
- " INTEGER(KIND=i_def), intent(in) :: cma_op_3_ncol\n"
- " INTEGER(KIND=i_def), intent(in), dimension(cma_op_3_ncol) :: "
- "cma_indirection_map_aspc2_field_2\n"
- " INTEGER(KIND=i_def), intent(in) :: undf_aspc1_field_1, "
- "undf_aspc2_field_2\n"
- " INTEGER(KIND=i_def), intent(in) :: cell, ncell_2d\n"
- " INTEGER(KIND=i_def), intent(in) :: cma_op_3_bandwidth, "
- "cma_op_3_alpha, cma_op_3_beta, cma_op_3_gamma_m, cma_op_3_gamma_p\n"
- " REAL(KIND=r_solver), intent(in), dimension(cma_op_3_bandwidth,"
- "cma_op_3_nrow,ncell_2d) :: cma_op_3\n"
- " REAL(KIND=r_def), intent(inout), "
- "dimension(undf_aspc1_field_1) :: field_1_aspc1_field_1\n"
- " REAL(KIND=r_def), intent(in), "
- "dimension(undf_aspc2_field_2) :: field_2_aspc2_field_2\n"
- " END SUBROUTINE columnwise_op_app_kernel_code\n"
- " END MODULE columnwise_op_app_kernel_mod")
- assert expected in str(result)
+ expected = """\
+module columnwise_op_app_kernel_mod
+ implicit none
+ public
+
+ contains
+ subroutine columnwise_op_app_kernel_code(cell, ncell_2d, \
+field_1_aspc1_field_1, field_2_aspc2_field_2, cma_op_3, cma_op_3_nrow, \
+cma_op_3_ncol, cma_op_3_bandwidth, cma_op_3_alpha, cma_op_3_beta, \
+cma_op_3_gamma_m, cma_op_3_gamma_p, ndf_aspc1_field_1, undf_aspc1_field_1, \
+map_aspc1_field_1, cma_indirection_map_aspc1_field_1, ndf_aspc2_field_2, \
+undf_aspc2_field_2, map_aspc2_field_2, cma_indirection_map_aspc2_field_2)
+ use constants_mod
+ integer(kind=i_def), intent(in) :: ndf_aspc1_field_1
+ integer(kind=i_def), dimension(ndf_aspc1_field_1), intent(in) :: \
+map_aspc1_field_1
+ integer(kind=i_def), intent(in) :: ndf_aspc2_field_2
+ integer(kind=i_def), dimension(ndf_aspc2_field_2), intent(in) :: \
+map_aspc2_field_2
+ integer(kind=i_def), intent(in) :: cma_op_3_nrow
+ integer(kind=i_def), dimension(cma_op_3_nrow), intent(in) :: \
+cma_indirection_map_aspc1_field_1
+ integer(kind=i_def), intent(in) :: cma_op_3_ncol
+ integer(kind=i_def), dimension(cma_op_3_ncol), intent(in) :: \
+cma_indirection_map_aspc2_field_2
+ integer(kind=i_def), intent(in) :: undf_aspc1_field_1
+ integer(kind=i_def), intent(in) :: undf_aspc2_field_2
+ integer(kind=i_def), intent(in) :: cell
+ integer(kind=i_def), intent(in) :: ncell_2d
+ integer(kind=i_def), intent(in) :: cma_op_3_bandwidth
+ integer(kind=i_def), intent(in) :: cma_op_3_alpha
+ integer(kind=i_def), intent(in) :: cma_op_3_beta
+ integer(kind=i_def), intent(in) :: cma_op_3_gamma_m
+ integer(kind=i_def), intent(in) :: cma_op_3_gamma_p
+ real(kind=r_def), dimension(cma_op_3_bandwidth,cma_op_3_nrow,\
+ncell_2d), intent(in) :: cma_op_3
+ real(kind=r_def), dimension(undf_aspc1_field_1), intent(inout) :: \
+field_1_aspc1_field_1
+ real(kind=r_def), dimension(undf_aspc2_field_2), intent(in) :: \
+field_2_aspc2_field_2
+
+
+ end subroutine columnwise_op_app_kernel_code
+
+end module columnwise_op_app_kernel_mod
+"""
+ assert expected == result
def test_cma_app_same_space_stub_gen():
@@ -1496,37 +1538,45 @@ def test_cma_app_same_space_stub_gen():
"columnwise_op_app_same_fs_kernel_mod.F90"),
api=TEST_API)
- expected = (
- " MODULE columnwise_op_app_same_fs_kernel_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE columnwise_op_app_same_fs_kernel_code(cell, ncell_2d, "
- "field_1_aspc2_field_1, field_2_aspc2_field_1, cma_op_3, "
- "cma_op_3_nrow, cma_op_3_bandwidth, cma_op_3_alpha, cma_op_3_beta, "
- "cma_op_3_gamma_m, cma_op_3_gamma_p, ndf_aspc2_field_1, "
- "undf_aspc2_field_1, map_aspc2_field_1, "
- "cma_indirection_map_aspc2_field_1)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_aspc2_field_1\n"
- " INTEGER(KIND=i_def), intent(in), "
- "dimension(ndf_aspc2_field_1) :: map_aspc2_field_1\n"
- " INTEGER(KIND=i_def), intent(in) :: cma_op_3_nrow\n"
- " INTEGER(KIND=i_def), intent(in), dimension(cma_op_3_nrow) :: "
- "cma_indirection_map_aspc2_field_1\n"
- " INTEGER(KIND=i_def), intent(in) :: undf_aspc2_field_1\n"
- " INTEGER(KIND=i_def), intent(in) :: cell, ncell_2d\n"
- " INTEGER(KIND=i_def), intent(in) :: cma_op_3_bandwidth, "
- "cma_op_3_alpha, cma_op_3_beta, cma_op_3_gamma_m, cma_op_3_gamma_p\n"
- " REAL(KIND=r_solver), intent(in), dimension(cma_op_3_bandwidth,"
- "cma_op_3_nrow,ncell_2d) :: cma_op_3\n"
- " REAL(KIND=r_def), intent(inout), "
- "dimension(undf_aspc2_field_1) :: field_1_aspc2_field_1\n"
- " REAL(KIND=r_def), intent(in), "
- "dimension(undf_aspc2_field_1) :: field_2_aspc2_field_1\n"
- " END SUBROUTINE columnwise_op_app_same_fs_kernel_code\n"
- " END MODULE columnwise_op_app_same_fs_kernel_mod")
- assert expected in str(result)
+ expected = """\
+module columnwise_op_app_same_fs_kernel_mod
+ implicit none
+ public
+
+ contains
+ subroutine columnwise_op_app_same_fs_kernel_code(cell, ncell_2d, \
+field_1_aspc2_field_1, field_2_aspc2_field_1, cma_op_3, cma_op_3_nrow, \
+cma_op_3_bandwidth, cma_op_3_alpha, cma_op_3_beta, cma_op_3_gamma_m, \
+cma_op_3_gamma_p, ndf_aspc2_field_1, undf_aspc2_field_1, map_aspc2_field_1, \
+cma_indirection_map_aspc2_field_1)
+ use constants_mod
+ integer(kind=i_def), intent(in) :: ndf_aspc2_field_1
+ integer(kind=i_def), dimension(ndf_aspc2_field_1), intent(in) :: \
+map_aspc2_field_1
+ integer(kind=i_def), intent(in) :: cma_op_3_nrow
+ integer(kind=i_def), dimension(cma_op_3_nrow), intent(in) :: \
+cma_indirection_map_aspc2_field_1
+ integer(kind=i_def), intent(in) :: undf_aspc2_field_1
+ integer(kind=i_def), intent(in) :: cell
+ integer(kind=i_def), intent(in) :: ncell_2d
+ integer(kind=i_def), intent(in) :: cma_op_3_bandwidth
+ integer(kind=i_def), intent(in) :: cma_op_3_alpha
+ integer(kind=i_def), intent(in) :: cma_op_3_beta
+ integer(kind=i_def), intent(in) :: cma_op_3_gamma_m
+ integer(kind=i_def), intent(in) :: cma_op_3_gamma_p
+ real(kind=r_def), dimension(cma_op_3_bandwidth,cma_op_3_nrow,\
+ncell_2d), intent(in) :: cma_op_3
+ real(kind=r_def), dimension(undf_aspc2_field_1), intent(inout) :: \
+field_1_aspc2_field_1
+ real(kind=r_def), dimension(undf_aspc2_field_1), intent(in) :: \
+field_2_aspc2_field_1
+
+
+ end subroutine columnwise_op_app_same_fs_kernel_code
+
+end module columnwise_op_app_same_fs_kernel_mod
+"""
+ assert expected == result
def test_cma_mul_stub_gen():
@@ -1535,38 +1585,56 @@ def test_cma_mul_stub_gen():
"columnwise_op_mul_kernel_mod.F90"),
api=TEST_API)
- expected = (
- " MODULE columnwise_op_mul_kernel_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE columnwise_op_mul_kernel_code(cell, ncell_2d, "
- "cma_op_1, cma_op_1_nrow, cma_op_1_ncol, cma_op_1_bandwidth, "
- "cma_op_1_alpha, cma_op_1_beta, cma_op_1_gamma_m, cma_op_1_gamma_p, "
- "cma_op_2, cma_op_2_nrow, cma_op_2_ncol, cma_op_2_bandwidth, "
- "cma_op_2_alpha, cma_op_2_beta, cma_op_2_gamma_m, cma_op_2_gamma_p, "
- "cma_op_3, cma_op_3_nrow, cma_op_3_ncol, cma_op_3_bandwidth, "
- "cma_op_3_alpha, cma_op_3_beta, cma_op_3_gamma_m, cma_op_3_gamma_p)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: cell, ncell_2d\n"
- " INTEGER(KIND=i_def), intent(in) :: cma_op_1_nrow, "
- "cma_op_1_ncol, cma_op_1_bandwidth, cma_op_1_alpha, cma_op_1_beta, "
- "cma_op_1_gamma_m, cma_op_1_gamma_p\n"
- " REAL(KIND=r_solver), intent(in), dimension(cma_op_1_bandwidth,"
- "cma_op_1_nrow,ncell_2d) :: cma_op_1\n"
- " INTEGER(KIND=i_def), intent(in) :: cma_op_2_nrow, "
- "cma_op_2_ncol, cma_op_2_bandwidth, cma_op_2_alpha, cma_op_2_beta, "
- "cma_op_2_gamma_m, cma_op_2_gamma_p\n"
- " REAL(KIND=r_solver), intent(in), dimension(cma_op_2_bandwidth,"
- "cma_op_2_nrow,ncell_2d) :: cma_op_2\n"
- " INTEGER(KIND=i_def), intent(in) :: cma_op_3_nrow, "
- "cma_op_3_ncol, cma_op_3_bandwidth, cma_op_3_alpha, cma_op_3_beta, "
- "cma_op_3_gamma_m, cma_op_3_gamma_p\n"
- " REAL(KIND=r_solver), intent(inout), dimension("
- "cma_op_3_bandwidth,cma_op_3_nrow,ncell_2d) :: cma_op_3\n"
- " END SUBROUTINE columnwise_op_mul_kernel_code\n"
- " END MODULE columnwise_op_mul_kernel_mod")
- assert expected in str(result)
+ expected = """\
+module columnwise_op_mul_kernel_mod
+ implicit none
+ public
+
+ contains
+ subroutine columnwise_op_mul_kernel_code(cell, ncell_2d, cma_op_1, \
+cma_op_1_nrow, cma_op_1_ncol, cma_op_1_bandwidth, cma_op_1_alpha, \
+cma_op_1_beta, cma_op_1_gamma_m, cma_op_1_gamma_p, cma_op_2, cma_op_2_nrow, \
+cma_op_2_ncol, cma_op_2_bandwidth, cma_op_2_alpha, cma_op_2_beta, \
+cma_op_2_gamma_m, cma_op_2_gamma_p, cma_op_3, cma_op_3_nrow, cma_op_3_ncol, \
+cma_op_3_bandwidth, cma_op_3_alpha, cma_op_3_beta, cma_op_3_gamma_m, \
+cma_op_3_gamma_p)
+ use constants_mod
+ integer(kind=i_def), intent(in) :: cell
+ integer(kind=i_def), intent(in) :: ncell_2d
+ integer(kind=i_def), intent(in) :: cma_op_1_nrow
+ integer(kind=i_def), intent(in) :: cma_op_1_ncol
+ integer(kind=i_def), intent(in) :: cma_op_1_bandwidth
+ integer(kind=i_def), intent(in) :: cma_op_1_alpha
+ integer(kind=i_def), intent(in) :: cma_op_1_beta
+ integer(kind=i_def), intent(in) :: cma_op_1_gamma_m
+ integer(kind=i_def), intent(in) :: cma_op_1_gamma_p
+ real(kind=r_def), dimension(cma_op_1_bandwidth,cma_op_1_nrow,ncell_2d)\
+, intent(in) :: cma_op_1
+ integer(kind=i_def), intent(in) :: cma_op_2_nrow
+ integer(kind=i_def), intent(in) :: cma_op_2_ncol
+ integer(kind=i_def), intent(in) :: cma_op_2_bandwidth
+ integer(kind=i_def), intent(in) :: cma_op_2_alpha
+ integer(kind=i_def), intent(in) :: cma_op_2_beta
+ integer(kind=i_def), intent(in) :: cma_op_2_gamma_m
+ integer(kind=i_def), intent(in) :: cma_op_2_gamma_p
+ real(kind=r_def), dimension(cma_op_2_bandwidth,cma_op_2_nrow,ncell_2d)\
+, intent(in) :: cma_op_2
+ integer(kind=i_def), intent(in) :: cma_op_3_nrow
+ integer(kind=i_def), intent(in) :: cma_op_3_ncol
+ integer(kind=i_def), intent(in) :: cma_op_3_bandwidth
+ integer(kind=i_def), intent(in) :: cma_op_3_alpha
+ integer(kind=i_def), intent(in) :: cma_op_3_beta
+ integer(kind=i_def), intent(in) :: cma_op_3_gamma_m
+ integer(kind=i_def), intent(in) :: cma_op_3_gamma_p
+ real(kind=r_def), dimension(cma_op_3_bandwidth,cma_op_3_nrow,ncell_2d)\
+, intent(in) :: cma_op_3
+
+
+ end subroutine columnwise_op_mul_kernel_code
+
+end module columnwise_op_mul_kernel_mod
+"""
+ assert expected == result
def test_cma_mul_with_scalars_stub_gen():
@@ -1576,38 +1644,55 @@ def test_cma_mul_with_scalars_stub_gen():
os.path.join(BASE_PATH, "columnwise_op_mul_2scalars_kernel_mod.F90"),
api=TEST_API)
- expected = (
- " MODULE columnwise_op_mul_2scalars_kernel_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE columnwise_op_mul_2scalars_kernel_code(cell, "
- "ncell_2d, cma_op_1, cma_op_1_nrow, cma_op_1_ncol, "
- "cma_op_1_bandwidth, cma_op_1_alpha, cma_op_1_beta, cma_op_1_gamma_m, "
- "cma_op_1_gamma_p, rscalar_2, "
- "cma_op_3, cma_op_3_nrow, cma_op_3_ncol, cma_op_3_bandwidth, "
- "cma_op_3_alpha, cma_op_3_beta, cma_op_3_gamma_m, cma_op_3_gamma_p, "
- "rscalar_4, "
- "cma_op_5, cma_op_5_nrow, cma_op_5_ncol, cma_op_5_bandwidth, "
- "cma_op_5_alpha, cma_op_5_beta, cma_op_5_gamma_m, cma_op_5_gamma_p)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: cell, ncell_2d\n"
- " INTEGER(KIND=i_def), intent(in) :: cma_op_1_nrow, "
- "cma_op_1_ncol, cma_op_1_bandwidth, cma_op_1_alpha, cma_op_1_beta, "
- "cma_op_1_gamma_m, cma_op_1_gamma_p\n"
- " REAL(KIND=r_solver), intent(in), dimension(cma_op_1_bandwidth,"
- "cma_op_1_nrow,ncell_2d) :: cma_op_1\n"
- " INTEGER(KIND=i_def), intent(in) :: cma_op_3_nrow, "
- "cma_op_3_ncol, cma_op_3_bandwidth, cma_op_3_alpha, cma_op_3_beta, "
- "cma_op_3_gamma_m, cma_op_3_gamma_p\n"
- " REAL(KIND=r_solver), intent(in), dimension(cma_op_3_bandwidth,"
- "cma_op_3_nrow,ncell_2d) :: cma_op_3\n"
- " INTEGER(KIND=i_def), intent(in) :: cma_op_5_nrow, "
- "cma_op_5_ncol, cma_op_5_bandwidth, cma_op_5_alpha, cma_op_5_beta, "
- "cma_op_5_gamma_m, cma_op_5_gamma_p\n"
- " REAL(KIND=r_solver), intent(inout), "
- "dimension(cma_op_5_bandwidth,cma_op_5_nrow,ncell_2d) :: cma_op_5\n"
- " REAL(KIND=r_def), intent(in) :: rscalar_2, rscalar_4\n"
- " END SUBROUTINE columnwise_op_mul_2scalars_kernel_code\n"
- " END MODULE columnwise_op_mul_2scalars_kernel_mod")
- assert expected in str(result)
+ expected = """\
+module columnwise_op_mul_2scalars_kernel_mod
+ implicit none
+ public
+
+ contains
+ subroutine columnwise_op_mul_2scalars_kernel_code(cell, ncell_2d, cma_op_1, \
+cma_op_1_nrow, cma_op_1_ncol, cma_op_1_bandwidth, cma_op_1_alpha, \
+cma_op_1_beta, cma_op_1_gamma_m, cma_op_1_gamma_p, rscalar_2, cma_op_3, \
+cma_op_3_nrow, cma_op_3_ncol, cma_op_3_bandwidth, cma_op_3_alpha, \
+cma_op_3_beta, cma_op_3_gamma_m, cma_op_3_gamma_p, rscalar_4, cma_op_5, \
+cma_op_5_nrow, cma_op_5_ncol, cma_op_5_bandwidth, cma_op_5_alpha, \
+cma_op_5_beta, cma_op_5_gamma_m, cma_op_5_gamma_p)
+ use constants_mod
+ integer(kind=i_def), intent(in) :: cell
+ integer(kind=i_def), intent(in) :: ncell_2d
+ integer(kind=i_def), intent(in) :: cma_op_1_nrow
+ integer(kind=i_def), intent(in) :: cma_op_1_ncol
+ integer(kind=i_def), intent(in) :: cma_op_1_bandwidth
+ integer(kind=i_def), intent(in) :: cma_op_1_alpha
+ integer(kind=i_def), intent(in) :: cma_op_1_beta
+ integer(kind=i_def), intent(in) :: cma_op_1_gamma_m
+ integer(kind=i_def), intent(in) :: cma_op_1_gamma_p
+ real(kind=r_def), dimension(cma_op_1_bandwidth,cma_op_1_nrow,ncell_2d)\
+, intent(in) :: cma_op_1
+ integer(kind=i_def), intent(in) :: cma_op_3_nrow
+ integer(kind=i_def), intent(in) :: cma_op_3_ncol
+ integer(kind=i_def), intent(in) :: cma_op_3_bandwidth
+ integer(kind=i_def), intent(in) :: cma_op_3_alpha
+ integer(kind=i_def), intent(in) :: cma_op_3_beta
+ integer(kind=i_def), intent(in) :: cma_op_3_gamma_m
+ integer(kind=i_def), intent(in) :: cma_op_3_gamma_p
+ real(kind=r_def), dimension(cma_op_3_bandwidth,cma_op_3_nrow,ncell_2d)\
+, intent(in) :: cma_op_3
+ integer(kind=i_def), intent(in) :: cma_op_5_nrow
+ integer(kind=i_def), intent(in) :: cma_op_5_ncol
+ integer(kind=i_def), intent(in) :: cma_op_5_bandwidth
+ integer(kind=i_def), intent(in) :: cma_op_5_alpha
+ integer(kind=i_def), intent(in) :: cma_op_5_beta
+ integer(kind=i_def), intent(in) :: cma_op_5_gamma_m
+ integer(kind=i_def), intent(in) :: cma_op_5_gamma_p
+ real(kind=r_def), dimension(cma_op_5_bandwidth,cma_op_5_nrow,ncell_2d)\
+, intent(in) :: cma_op_5
+ real(kind=r_def), intent(in) :: rscalar_2
+ real(kind=r_def), intent(in) :: rscalar_4
+
+
+ end subroutine columnwise_op_mul_2scalars_kernel_code
+
+end module columnwise_op_mul_2scalars_kernel_mod
+"""
+ assert expected == result
diff --git a/src/psyclone/tests/dynamo0p3_lma_test.py b/src/psyclone/tests/dynamo0p3_lma_test.py
index 92b1064dc6..022c6ab80d 100644
--- a/src/psyclone/tests/dynamo0p3_lma_test.py
+++ b/src/psyclone/tests/dynamo0p3_lma_test.py
@@ -54,7 +54,7 @@
from psyclone.errors import GenerationError, InternalError
from psyclone.parse.algorithm import parse
from psyclone.parse.utils import ParseError
-from psyclone.psyGen import PSyFactory
+from psyclone.psyGen import PSyFactory, args_filter
from psyclone.tests.lfric_build import LFRicBuild
from psyclone.psyir.backend.visitor import VisitorError
from psyclone.psyir import symbols
@@ -474,12 +474,12 @@ def test_operator(tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
assert (
- "SUBROUTINE invoke_0_testkern_operator_type(mm_w0, coord, a, qr)"
+ "subroutine invoke_0_testkern_operator_type(mm_w0, coord, a, qr)"
in generated_code)
- assert "TYPE(operator_type), intent(in) :: mm_w0" in generated_code
- assert "TYPE(operator_proxy_type) mm_w0_proxy" in generated_code
+ assert "type(operator_type), intent(in) :: mm_w0" in generated_code
+ assert "type(operator_proxy_type) :: mm_w0_proxy" in generated_code
assert "mm_w0_proxy = mm_w0%get_proxy()" in generated_code
- assert ("CALL testkern_operator_code(cell, nlayers_mm_w0, "
+ assert ("call testkern_operator_code(cell, nlayers_mm_w0, "
"mm_w0_proxy%ncell_3d, mm_w0_local_stencil, coord_1_data, "
"coord_2_data, coord_3_data, a, ndf_w0, undf_w0, "
"map_w0(:,cell), basis_w0_qr, diff_basis_w0_qr, np_xy_qr, "
@@ -499,144 +499,138 @@ def test_operator_different_spaces(tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
- module_declns = (
- " USE constants_mod, ONLY: r_def, i_def\n"
- " USE field_mod, ONLY: field_type, field_proxy_type\n"
- " USE operator_mod, ONLY: operator_type, operator_proxy_type\n")
- assert module_declns in generated_code
-
- decl_output = (
- " SUBROUTINE invoke_0_assemble_weak_derivative_w3_w2_kernel_type"
- "(mapping, coord, qr)\n"
- " USE assemble_weak_derivative_w3_w2_kernel_mod, ONLY: "
- "assemble_weak_derivative_w3_w2_kernel_code\n"
- " USE quadrature_xyoz_mod, ONLY: quadrature_xyoz_type, "
- "quadrature_xyoz_proxy_type\n"
- " USE function_space_mod, ONLY: BASIS, DIFF_BASIS\n"
- " USE mesh_mod, ONLY: mesh_type\n"
- " TYPE(field_type), intent(in) :: coord(3)\n"
- " TYPE(operator_type), intent(in) :: mapping\n"
- " TYPE(quadrature_xyoz_type), intent(in) :: qr\n"
- " INTEGER(KIND=i_def) cell\n"
- " INTEGER(KIND=i_def) loop0_start, loop0_stop\n"
- " REAL(KIND=r_def), allocatable :: diff_basis_w0_qr(:,:,:,:), "
- "basis_w3_qr(:,:,:,:), diff_basis_w2_qr(:,:,:,:)\n"
- " INTEGER(KIND=i_def) diff_dim_w0, dim_w3, diff_dim_w2\n"
- " REAL(KIND=r_def), pointer :: weights_xy_qr(:) => null(), "
- "weights_z_qr(:) => null()\n"
- " INTEGER(KIND=i_def) np_xy_qr, np_z_qr\n"
- " INTEGER(KIND=i_def) nlayers_mapping\n"
- " REAL(KIND=r_def), pointer, dimension(:,:,:) :: "
- "mapping_local_stencil => null()\n"
- " TYPE(operator_proxy_type) mapping_proxy\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: coord_1_data => "
- "null(), coord_2_data => null(), coord_3_data => null()\n"
- " TYPE(field_proxy_type) coord_proxy(3)\n"
- " TYPE(quadrature_xyoz_proxy_type) qr_proxy\n"
- " INTEGER(KIND=i_def), pointer :: map_w0(:,:) => null()\n"
- " INTEGER(KIND=i_def) ndf_w3, ndf_w2, ndf_w0, undf_w0\n"
- " INTEGER(KIND=i_def) max_halo_depth_mesh\n"
- " TYPE(mesh_type), pointer :: mesh => null()\n")
- assert decl_output in generated_code
- output = (
- " !\n"
- " ! Initialise field and/or operator proxies\n"
- " !\n"
- " mapping_proxy = mapping%get_proxy()\n"
- " mapping_local_stencil => mapping_proxy%local_stencil\n"
- " coord_proxy(1) = coord(1)%get_proxy()\n"
- " coord_1_data => coord_proxy(1)%data\n"
- " coord_proxy(2) = coord(2)%get_proxy()\n"
- " coord_2_data => coord_proxy(2)%data\n"
- " coord_proxy(3) = coord(3)%get_proxy()\n"
- " coord_3_data => coord_proxy(3)%data\n"
- " !\n"
- " ! Initialise number of layers\n"
- " !\n"
- " nlayers_mapping = mapping_proxy%fs_from%get_nlayers()\n"
- " !\n"
- " ! Create a mesh object\n"
- " !\n"
- " mesh => mapping_proxy%fs_from%get_mesh()\n"
- " max_halo_depth_mesh = mesh%get_halo_depth()\n"
- " !\n"
- " ! Look-up dofmaps for each function space\n"
- " !\n"
- " map_w0 => coord_proxy(1)%vspace%get_whole_dofmap()\n"
- " !\n"
- " ! Initialise number of DoFs for w3\n"
- " !\n"
- " ndf_w3 = mapping_proxy%fs_to%get_ndf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2\n"
- " !\n"
- " ndf_w2 = mapping_proxy%fs_from%get_ndf()\n"
- " !\n"
- " ! Initialise number of DoFs for w0\n"
- " !\n"
- " ndf_w0 = coord_proxy(1)%vspace%get_ndf()\n"
- " undf_w0 = coord_proxy(1)%vspace%get_undf()\n"
- " !\n"
- " ! Look-up quadrature variables\n"
- " !\n"
- " qr_proxy = qr%get_quadrature_proxy()\n"
- " np_xy_qr = qr_proxy%np_xy\n"
- " np_z_qr = qr_proxy%np_z\n"
- " weights_xy_qr => qr_proxy%weights_xy\n"
- " weights_z_qr => qr_proxy%weights_z\n"
- " !\n"
- " ! Allocate basis/diff-basis arrays\n"
- " !\n"
- " diff_dim_w0 = coord_proxy(1)%vspace%get_dim_space_diff()\n"
- " dim_w3 = mapping_proxy%fs_to%get_dim_space()\n"
- " diff_dim_w2 = mapping_proxy%fs_from%get_dim_space_diff()\n"
- " ALLOCATE (diff_basis_w0_qr(diff_dim_w0, ndf_w0, np_xy_qr, "
- "np_z_qr))\n"
- " ALLOCATE (basis_w3_qr(dim_w3, ndf_w3, np_xy_qr, np_z_qr))\n"
- " ALLOCATE (diff_basis_w2_qr(diff_dim_w2, ndf_w2, np_xy_qr, "
- "np_z_qr))\n"
- " !\n"
- " ! Compute basis/diff-basis arrays\n"
- " !\n"
- " CALL qr%compute_function(DIFF_BASIS, coord_proxy(1)%vspace, "
- "diff_dim_w0, ndf_w0, diff_basis_w0_qr)\n"
- " CALL qr%compute_function(BASIS, mapping_proxy%fs_to, "
- "dim_w3, ndf_w3, basis_w3_qr)\n"
- " CALL qr%compute_function(DIFF_BASIS, mapping_proxy%fs_from, "
- "diff_dim_w2, ndf_w2, diff_basis_w2_qr)\n"
- " !\n"
- " ! Set-up all of the loop bounds\n"
- " !\n"
- " loop0_start = 1\n"
- " loop0_stop = mesh%get_last_halo_cell(1)\n"
- " !\n"
- " ! Call kernels and communication routines\n"
- " !\n"
- " IF (coord_proxy(1)%is_dirty(depth=1)) THEN\n"
- " CALL coord_proxy(1)%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (coord_proxy(2)%is_dirty(depth=1)) THEN\n"
- " CALL coord_proxy(2)%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (coord_proxy(3)%is_dirty(depth=1)) THEN\n"
- " CALL coord_proxy(3)%halo_exchange(depth=1)\n"
- " END IF\n"
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL assemble_weak_derivative_w3_w2_kernel_code(cell, "
- "nlayers_mapping, mapping_proxy%ncell_3d, mapping_local_stencil, "
- "coord_1_data, coord_2_data, coord_3_data, "
- "ndf_w3, basis_w3_qr, ndf_w2, diff_basis_w2_qr, ndf_w0, "
- "undf_w0, map_w0(:,cell), diff_basis_w0_qr, "
- "np_xy_qr, np_z_qr, weights_xy_qr, weights_z_qr)\n"
- " END DO\n"
- " !\n"
- " ! Deallocate basis arrays\n"
- " !\n"
- " DEALLOCATE (basis_w3_qr, diff_basis_w0_qr, diff_basis_w2_qr)\n"
- " !\n"
- " END SUBROUTINE invoke_0_assemble_weak_derivative_w3_w2_kernel_"
- "type")
- assert output in generated_code
+ assert """\
+module operator_example_psy
+ use constants_mod
+ use operator_mod, only : operator_proxy_type, operator_type
+ use field_mod, only : field_proxy_type, field_type
+ implicit none
+ public
+
+ contains
+ subroutine invoke_0_assemble_weak_derivative_w3_w2_kernel_type(mapping, \
+coord, qr)
+ use mesh_mod, only : mesh_type
+ use function_space_mod, only : BASIS, DIFF_BASIS
+ use quadrature_xyoz_mod, only : quadrature_xyoz_proxy_type, \
+quadrature_xyoz_type
+ use assemble_weak_derivative_w3_w2_kernel_mod, only : \
+assemble_weak_derivative_w3_w2_kernel_code
+ type(operator_type), intent(in) :: mapping
+ type(field_type), dimension(3), intent(in) :: coord
+ type(quadrature_xyoz_type), intent(in) :: qr
+ integer(kind=i_def) :: cell
+ integer(kind=i_def) :: loop0_start
+ integer(kind=i_def) :: loop0_stop
+ type(mesh_type), pointer :: mesh => null()
+ integer(kind=i_def) :: max_halo_depth_mesh
+ real(kind=r_def), pointer, dimension(:) :: coord_1_data => null()
+ real(kind=r_def), pointer, dimension(:) :: coord_2_data => null()
+ real(kind=r_def), pointer, dimension(:) :: coord_3_data => null()
+ real(kind=r_def), pointer, dimension(:,:,:) :: mapping_local_stencil => \
+null()
+ integer(kind=i_def) :: nlayers_mapping
+ integer(kind=i_def) :: ndf_w3
+ integer(kind=i_def) :: ndf_w2
+ integer(kind=i_def) :: ndf_w0
+ integer(kind=i_def) :: undf_w0
+ integer(kind=i_def), pointer :: map_w0(:,:) => null()
+ type(field_proxy_type), dimension(3) :: coord_proxy
+ type(operator_proxy_type) :: mapping_proxy
+ integer(kind=i_def) :: np_xy_qr
+ integer(kind=i_def) :: np_z_qr
+ real(kind=r_def), pointer :: weights_xy_qr(:) => null()
+ real(kind=r_def), pointer :: weights_z_qr(:) => null()
+ type(quadrature_xyoz_proxy_type) :: qr_proxy
+ integer(kind=i_def) :: diff_dim_w0
+ integer(kind=i_def) :: dim_w3
+ integer(kind=i_def) :: diff_dim_w2
+ real(kind=r_def), allocatable :: diff_basis_w0_qr(:,:,:,:)
+ real(kind=r_def), allocatable :: basis_w3_qr(:,:,:,:)
+ real(kind=r_def), allocatable :: diff_basis_w2_qr(:,:,:,:)
+
+ ! Initialise field and/or operator proxies
+ mapping_proxy = mapping%get_proxy()
+ mapping_local_stencil => mapping_proxy%local_stencil
+ coord_proxy(1) = coord(1)%get_proxy()
+ coord_1_data => coord_proxy(1)%data
+ coord_proxy(2) = coord(2)%get_proxy()
+ coord_2_data => coord_proxy(2)%data
+ coord_proxy(3) = coord(3)%get_proxy()
+ coord_3_data => coord_proxy(3)%data
+
+ ! Initialise number of layers
+ nlayers_mapping = mapping_proxy%fs_from%get_nlayers()
+
+ ! Create a mesh object
+ mesh => mapping_proxy%fs_from%get_mesh()
+ max_halo_depth_mesh = mesh%get_halo_depth()
+
+ ! Look-up dofmaps for each function space
+ map_w0 => coord_proxy(1)%vspace%get_whole_dofmap()
+
+ ! Initialise number of DoFs for w3
+ ndf_w3 = mapping_proxy%fs_to%get_ndf()
+
+ ! Initialise number of DoFs for w2
+ ndf_w2 = mapping_proxy%fs_from%get_ndf()
+
+ ! Initialise number of DoFs for w0
+ ndf_w0 = coord_proxy(1)%vspace%get_ndf()
+ undf_w0 = coord_proxy(1)%vspace%get_undf()
+
+ ! Look-up quadrature variables
+ qr_proxy = qr%get_quadrature_proxy()
+ np_xy_qr = qr_proxy%np_xy
+ np_z_qr = qr_proxy%np_z
+ weights_xy_qr => qr_proxy%weights_xy
+ weights_z_qr => qr_proxy%weights_z
+
+ ! Allocate basis/diff-basis arrays
+ diff_dim_w0 = coord_proxy(1)%vspace%get_dim_space_diff()
+ dim_w3 = mapping_proxy%fs_to%get_dim_space()
+ diff_dim_w2 = mapping_proxy%fs_from%get_dim_space_diff()
+ ALLOCATE(diff_basis_w0_qr(diff_dim_w0,ndf_w0,np_xy_qr,np_z_qr))
+ ALLOCATE(basis_w3_qr(dim_w3,ndf_w3,np_xy_qr,np_z_qr))
+ ALLOCATE(diff_basis_w2_qr(diff_dim_w2,ndf_w2,np_xy_qr,np_z_qr))
+
+ ! Compute basis/diff-basis arrays
+ call qr%compute_function(DIFF_BASIS, coord_proxy(1)%vspace, diff_dim_w0, \
+ndf_w0, diff_basis_w0_qr)
+ call qr%compute_function(BASIS, mapping_proxy%fs_to, dim_w3, ndf_w3, \
+basis_w3_qr)
+ call qr%compute_function(DIFF_BASIS, mapping_proxy%fs_from, diff_dim_w2, \
+ndf_w2, diff_basis_w2_qr)
+
+ ! Set-up all of the loop bounds
+ loop0_start = 1
+ loop0_stop = mesh%get_last_halo_cell(1)
+
+ ! Call kernels and communication routines
+ if (coord_proxy(1)%is_dirty(depth=1)) then
+ call coord_proxy(1)%halo_exchange(depth=1)
+ end if
+ if (coord_proxy(2)%is_dirty(depth=1)) then
+ call coord_proxy(2)%halo_exchange(depth=1)
+ end if
+ if (coord_proxy(3)%is_dirty(depth=1)) then
+ call coord_proxy(3)%halo_exchange(depth=1)
+ end if
+ do cell = loop0_start, loop0_stop, 1
+ call assemble_weak_derivative_w3_w2_kernel_code(cell, nlayers_mapping, \
+mapping_proxy%ncell_3d, mapping_local_stencil, coord_1_data, coord_2_data, \
+coord_3_data, ndf_w3, basis_w3_qr, ndf_w2, diff_basis_w2_qr, ndf_w0, undf_w0, \
+map_w0(:,cell), diff_basis_w0_qr, np_xy_qr, np_z_qr, weights_xy_qr, \
+weights_z_qr)
+ enddo
+
+ ! Deallocate basis arrays
+ DEALLOCATE(basis_w3_qr, diff_basis_w0_qr, diff_basis_w2_qr)
+
+ end subroutine invoke_0_assemble_weak_derivative_w3_w2_kernel_type
+
+end module operator_example_psy
+""" == generated_code
def test_operator_nofield(tmpdir):
@@ -646,25 +640,25 @@ def test_operator_nofield(tmpdir):
"10.1_operator_nofield.f90"),
api=TEST_API)
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
- gen_code_str = str(psy.gen)
+ code_str = str(psy.gen)
assert LFRicBuild(tmpdir).code_compiles(psy)
assert (
- "SUBROUTINE invoke_0_testkern_operator_nofield_type(mm_w2, coord, qr)"
- in gen_code_str)
- assert "TYPE(operator_type), intent(in) :: mm_w2" in gen_code_str
- assert "TYPE(operator_proxy_type) mm_w2_proxy" in gen_code_str
- assert "mm_w2_proxy = mm_w2%get_proxy()" in gen_code_str
- assert "mm_w2_local_stencil => mm_w2_proxy%local_stencil" in gen_code_str
- assert "undf_w2" not in gen_code_str
- assert "map_w2" not in gen_code_str
- assert ("CALL testkern_operator_nofield_code(cell, nlayers_mm_w2, "
+ "subroutine invoke_0_testkern_operator_nofield_type(mm_w2, coord, qr)"
+ in code_str)
+ assert "type(operator_type), intent(in) :: mm_w2" in code_str
+ assert "type(operator_proxy_type) :: mm_w2_proxy" in code_str
+ assert "mm_w2_proxy = mm_w2%get_proxy()" in code_str
+ assert "mm_w2_local_stencil => mm_w2_proxy%local_stencil" in code_str
+ assert "undf_w2" not in code_str
+ assert "map_w2" not in code_str
+ assert ("call testkern_operator_nofield_code(cell, nlayers_mm_w2, "
"mm_w2_proxy%ncell_3d, mm_w2_local_stencil, "
"coord_1_data, coord_2_data, coord_3_data, "
"ndf_w2, basis_w2_qr, ndf_w0, undf_w0, "
"map_w0(:,cell), diff_basis_w0_qr, np_xy_qr, np_z_qr, "
- "weights_xy_qr, weights_z_qr)" in gen_code_str)
+ "weights_xy_qr, weights_z_qr)" in code_str)
def test_operator_nofield_different_space(tmpdir):
@@ -716,10 +710,10 @@ def test_operator_no_dofmap_lookup():
"10.9_operator_first.f90"),
api=TEST_API)
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
- gen_code = str(psy.gen)
+ code = str(psy.gen)
# Check that we use the field and not the operator to look-up the dofmap
- assert "theta_proxy%vspace%get_whole_dofmap()" in gen_code
- assert gen_code.count("get_whole_dofmap") == 1
+ assert "theta_proxy%vspace%get_whole_dofmap()" in code
+ assert code.count("get_whole_dofmap") == 1
def test_operator_read_level1_halo(tmpdir):
@@ -747,9 +741,11 @@ def test_operator_read_level1_halo(tmpdir):
def test_operator_bc_kernel(tmpdir):
- ''' Tests that a kernel with a particular name is recognised as
- a kernel that applies boundary conditions to operators and that
- appropriate code is added to support this.
+ ''' Tests that a kernel with a particular name ('enforce_operator_bc')
+ is recognised as a kernel that applies boundary conditions to operators
+ and that appropriate code is added to support this (the function space
+ used to look up the boundary_dofs is the fs_to of the associated
+ operator).
'''
_, invoke_info = parse(os.path.join(BASE_PATH,
@@ -758,12 +754,12 @@ def test_operator_bc_kernel(tmpdir):
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
generated_code = str(psy.gen)
output1 = (
- "INTEGER(KIND=i_def), pointer :: boundary_dofs_op_a(:,:) => null()")
+ "integer(kind=i_def), pointer :: boundary_dofs_op_a(:,:) => null()")
assert output1 in generated_code
output2 = "boundary_dofs_op_a => op_a_proxy%fs_to%get_boundary_dofs()"
assert output2 in generated_code
output3 = (
- "CALL enforce_operator_bc_code(cell, nlayers_op_a, "
+ "call enforce_operator_bc_code(cell, nlayers_op_a, "
"op_a_proxy%ncell_3d, op_a_local_stencil, ndf_aspc1_op_a, "
"ndf_aspc2_op_a, boundary_dofs_op_a)")
assert output3 in generated_code
@@ -786,8 +782,11 @@ def test_operator_bc_kernel_fld_err(monkeypatch, dist_mem):
# Monkeypatch the argument object so that it thinks it is a
# field rather than an operator
monkeypatch.setattr(arg, "_argument_type", value="gh_field")
- # We have to add a tag to the Symbol table to get to the desired error.
+ # We have to populate the Symbol table to get to the desired error.
schedule.symbol_table.find_or_create_tag("op_a:data")
+ schedule.symbol_table.find_or_create("undf_aspc1_op_a",
+ symbol_type=symbols.DataSymbol,
+ datatype=symbols.UnresolvedType())
with pytest.raises(VisitorError) as excinfo:
_ = psy.gen
assert ("Expected an LMA operator from which to look-up boundary dofs "
@@ -807,6 +806,11 @@ def test_operator_bc_kernel_multi_args_err(dist_mem):
loop = schedule.children[0]
call = loop.loop_body[0]
arg = call.arguments.args[0]
+ # We have to populate the Symbol table to get to the desired error.
+ schedule.symbol_table.find_or_create_tag("op_a:data")
+ schedule.symbol_table.find_or_create("undf_aspc1_op_a",
+ symbol_type=symbols.DataSymbol,
+ datatype=symbols.UnresolvedType())
# Make the list of arguments invalid by duplicating (a copy of)
# this argument. We take a copy because otherwise, when we change
# the type of arg 1 below, we change it for both.
@@ -817,8 +821,6 @@ def test_operator_bc_kernel_multi_args_err(dist_mem):
"should only have 1 (an LMA operator)") in str(excinfo.value)
# And again but make the second argument a field this time
call.arguments.args[1]._argument_type = "gh_field"
- # We have to add a tag to the Symbol table to get to the desired error.
- schedule.symbol_table.find_or_create_tag("op_a:data")
with pytest.raises(VisitorError) as excinfo:
_ = psy.gen
assert ("Kernel enforce_operator_bc_code has 2 arguments when it "
@@ -887,7 +889,7 @@ def test_operator_bc_kernel_wrong_access_err(dist_mem, monkeypatch):
'''
-def test_operators():
+def test_operators(fortran_writer):
''' Test that operators are handled correctly for kernel stubs (except
for Wchi space as the fields on this space are read-only).
@@ -896,68 +898,89 @@ def test_operators():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- generated_code = str(kernel.gen_stub)
- output = (
- " MODULE dummy_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE dummy_code(cell, nlayers, op_1_ncell_3d, op_1, "
- "op_2_ncell_3d, op_2, op_3_ncell_3d, op_3, op_4_ncell_3d, op_4, "
- "op_5_ncell_3d, op_5, op_6_ncell_3d, op_6, op_7_ncell_3d, op_7, "
- "op_8_ncell_3d, op_8, op_9_ncell_3d, op_9, op_10_ncell_3d, op_10, "
- "op_11_ncell_3d, op_11, op_12_ncell_3d, op_12, op_13_ncell_3d, "
- "op_13, ndf_w0, ndf_w1, ndf_w2, ndf_w2h, ndf_w2v, ndf_w2broken, "
- "ndf_w2trace, ndf_w2htrace, ndf_w2vtrace, ndf_w3, ndf_wtheta, "
- "ndf_aspc1_op_12, ndf_adspc1_op_13)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w0, ndf_w1, ndf_w2, "
- "ndf_w2h, ndf_w2v, ndf_w2broken, ndf_w2trace, ndf_w2htrace, "
- "ndf_w2vtrace, ndf_w3, ndf_wtheta, ndf_aspc1_op_12, ndf_adspc1_op_13\n"
- " INTEGER(KIND=i_def), intent(in) :: cell\n"
- " INTEGER(KIND=i_def), intent(in) :: op_1_ncell_3d\n"
- " REAL(KIND=r_def), intent(inout), dimension(op_1_ncell_3d,"
- "ndf_w0,ndf_w0) :: op_1\n"
- " INTEGER(KIND=i_def), intent(in) :: op_2_ncell_3d\n"
- " REAL(KIND=r_def), intent(inout), dimension(op_2_ncell_3d,"
- "ndf_w1,ndf_w1) :: op_2\n"
- " INTEGER(KIND=i_def), intent(in) :: op_3_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension(op_3_ncell_3d,"
- "ndf_w2,ndf_w2) :: op_3\n"
- " INTEGER(KIND=i_def), intent(in) :: op_4_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension(op_4_ncell_3d,"
- "ndf_w2h,ndf_w2h) :: op_4\n"
- " INTEGER(KIND=i_def), intent(in) :: op_5_ncell_3d\n"
- " REAL(KIND=r_def), intent(inout), dimension(op_5_ncell_3d,"
- "ndf_w2v,ndf_w2v) :: op_5\n"
- " INTEGER(KIND=i_def), intent(in) :: op_6_ncell_3d\n"
- " REAL(KIND=r_def), intent(inout), dimension(op_6_ncell_3d,"
- "ndf_w2broken,ndf_w2broken) :: op_6\n"
- " INTEGER(KIND=i_def), intent(in) :: op_7_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension(op_7_ncell_3d,"
- "ndf_w2trace,ndf_w2trace) :: op_7\n"
- " INTEGER(KIND=i_def), intent(in) :: op_8_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension(op_8_ncell_3d,"
- "ndf_w2htrace,ndf_w2htrace) :: op_8\n"
- " INTEGER(KIND=i_def), intent(in) :: op_9_ncell_3d\n"
- " REAL(KIND=r_def), intent(inout), dimension(op_9_ncell_3d,"
- "ndf_w2vtrace,ndf_w2vtrace) :: op_9\n"
- " INTEGER(KIND=i_def), intent(in) :: op_10_ncell_3d\n"
- " REAL(KIND=r_def), intent(inout), dimension(op_10_ncell_3d,"
- "ndf_w3,ndf_w3) :: op_10\n"
- " INTEGER(KIND=i_def), intent(in) :: op_11_ncell_3d\n"
- " REAL(KIND=r_def), intent(inout), dimension(op_11_ncell_3d,"
- "ndf_wtheta,ndf_wtheta) :: op_11\n"
- " INTEGER(KIND=i_def), intent(in) :: op_12_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension(op_12_ncell_3d,"
- "ndf_aspc1_op_12,ndf_aspc1_op_12) :: op_12\n"
- " INTEGER(KIND=i_def), intent(in) :: op_13_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension(op_13_ncell_3d,"
- "ndf_adspc1_op_13,ndf_adspc1_op_13) :: op_13\n"
- " END SUBROUTINE dummy_code\n"
- " END MODULE dummy_mod")
- assert output in generated_code
+ generated_code = fortran_writer(kernel.gen_stub)
+ assert """\
+module dummy_mod
+ implicit none
+ public
+
+ contains
+ subroutine dummy_code(cell, nlayers, op_1_ncell_3d, op_1, op_2_ncell_3d, \
+op_2, op_3_ncell_3d, op_3, op_4_ncell_3d, op_4, op_5_ncell_3d, op_5, \
+op_6_ncell_3d, op_6, op_7_ncell_3d, op_7, op_8_ncell_3d, op_8, op_9_ncell_3d, \
+op_9, op_10_ncell_3d, op_10, op_11_ncell_3d, op_11, op_12_ncell_3d, op_12, \
+op_13_ncell_3d, op_13, ndf_w0, ndf_w1, ndf_w2, ndf_w2h, ndf_w2v, \
+ndf_w2broken, ndf_w2trace, ndf_w2htrace, ndf_w2vtrace, ndf_w3, ndf_wtheta, \
+ndf_aspc1_op_12, ndf_adspc1_op_13)
+ use constants_mod
+ integer(kind=i_def), intent(in) :: nlayers
+ integer(kind=i_def), intent(in) :: ndf_w0
+ integer(kind=i_def), intent(in) :: ndf_w1
+ integer(kind=i_def), intent(in) :: ndf_w2
+ integer(kind=i_def), intent(in) :: ndf_w2h
+ integer(kind=i_def), intent(in) :: ndf_w2v
+ integer(kind=i_def), intent(in) :: ndf_w2broken
+ integer(kind=i_def), intent(in) :: ndf_w2trace
+ integer(kind=i_def), intent(in) :: ndf_w2htrace
+ integer(kind=i_def), intent(in) :: ndf_w2vtrace
+ integer(kind=i_def), intent(in) :: ndf_w3
+ integer(kind=i_def), intent(in) :: ndf_wtheta
+ integer(kind=i_def), intent(in) :: ndf_aspc1_op_12
+ integer(kind=i_def), intent(in) :: ndf_adspc1_op_13
+ integer(kind=i_def), intent(in) :: cell
+ integer(kind=i_def), intent(in) :: op_1_ncell_3d
+ real(kind=r_def), dimension(op_1_ncell_3d,ndf_w0,ndf_w0), intent(inout) \
+:: op_1
+ integer(kind=i_def), intent(in) :: op_2_ncell_3d
+ real(kind=r_def), dimension(op_2_ncell_3d,ndf_w1,ndf_w1), intent(inout) \
+:: op_2
+ integer(kind=i_def), intent(in) :: op_3_ncell_3d
+ real(kind=r_def), dimension(op_3_ncell_3d,ndf_w2,ndf_w2), intent(in) \
+:: op_3
+ integer(kind=i_def), intent(in) :: op_4_ncell_3d
+ real(kind=r_def), dimension(op_4_ncell_3d,ndf_w2h,ndf_w2h), intent(in) \
+:: op_4
+ integer(kind=i_def), intent(in) :: op_5_ncell_3d
+ real(kind=r_def), dimension(op_5_ncell_3d,ndf_w2v,ndf_w2v), \
+intent(inout) :: op_5
+ integer(kind=i_def), intent(in) :: op_6_ncell_3d
+ real(kind=r_def), dimension(op_6_ncell_3d,ndf_w2broken,ndf_w2broken), \
+intent(inout) :: op_6
+ integer(kind=i_def), intent(in) :: op_7_ncell_3d
+ real(kind=r_def), dimension(op_7_ncell_3d,ndf_w2trace,ndf_w2trace), \
+intent(in) :: op_7
+ integer(kind=i_def), intent(in) :: op_8_ncell_3d
+ real(kind=r_def), dimension(op_8_ncell_3d,ndf_w2htrace,ndf_w2htrace), \
+intent(in) :: op_8
+ integer(kind=i_def), intent(in) :: op_9_ncell_3d
+ real(kind=r_def), dimension(op_9_ncell_3d,ndf_w2vtrace,ndf_w2vtrace), \
+intent(inout) :: op_9
+ integer(kind=i_def), intent(in) :: op_10_ncell_3d
+ real(kind=r_def), dimension(op_10_ncell_3d,ndf_w3,ndf_w3), intent(inout) \
+:: op_10
+ integer(kind=i_def), intent(in) :: op_11_ncell_3d
+ real(kind=r_def), dimension(op_11_ncell_3d,ndf_wtheta,ndf_wtheta\
+), intent(inout) :: op_11
+ integer(kind=i_def), intent(in) :: op_12_ncell_3d
+ real(kind=r_def), dimension(op_12_ncell_3d,ndf_aspc1_op_12,\
+ndf_aspc1_op_12), intent(in) :: op_12
+ integer(kind=i_def), intent(in) :: op_13_ncell_3d
+ real(kind=r_def), dimension(op_13_ncell_3d,ndf_adspc1_op_13,\
+ndf_adspc1_op_13), intent(in) :: op_13
+
+
+ end subroutine dummy_code
+
+end module dummy_mod
+""" == generated_code
+
+ # Check that unsupported LMA operator intrinsic types are rejected.
+ lma_args = args_filter(kernel.arguments.args, arg_types=["gh_operator"])
+ lma_args[0]._intrinsic_type = "logical"
+ with pytest.raises(NotImplementedError) as err:
+ _ = kernel.gen_stub
+ assert ("Only REAL and INTEGER LMA Operator types are supported, but found"
+ " 'logical'" in str(err.value))
OPERATOR_DIFFERENT_SPACES = '''
@@ -977,7 +1000,7 @@ def test_operators():
'''
-def test_stub_operator_different_spaces():
+def test_stub_operator_different_spaces(fortran_writer):
''' Test that the correct function spaces are provided in the
correct order when generating a kernel stub with an operator on
different spaces.
@@ -988,7 +1011,7 @@ def test_stub_operator_different_spaces():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- result = str(kernel.gen_stub)
+ result = fortran_writer(kernel.gen_stub)
assert "(cell, nlayers, op_1_ncell_3d, op_1, ndf_w0, ndf_w1)" in result
assert "dimension(op_1_ncell_3d,ndf_w0,ndf_w1)" in result
# Check for discontinuous to- and from- spaces
@@ -999,7 +1022,7 @@ def test_stub_operator_different_spaces():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- result = str(kernel.gen_stub)
+ result = fortran_writer(kernel.gen_stub)
assert ("(cell, nlayers, op_1_ncell_3d, op_1, ndf_w3, ndf_adspc2_op_1)"
in result)
assert "dimension(op_1_ncell_3d,ndf_w3,ndf_adspc2_op_1)" in result
diff --git a/src/psyclone/tests/dynamo0p3_multigrid_test.py b/src/psyclone/tests/dynamo0p3_multigrid_test.py
index 8cdc90665c..3722d441b6 100644
--- a/src/psyclone/tests/dynamo0p3_multigrid_test.py
+++ b/src/psyclone/tests/dynamo0p3_multigrid_test.py
@@ -282,90 +282,88 @@ def test_field_prolong(tmpdir, dist_mem):
"22.0_intergrid_prolong.f90"),
api=API)
psy = PSyFactory(API, distributed_memory=dist_mem).create(invoke_info)
- gen_code = str(psy.gen)
+ code = str(psy.gen)
assert LFRicBuild(tmpdir).code_compiles(psy)
expected = (
- " USE prolong_test_kernel_mod, "
- "ONLY: prolong_test_kernel_code\n"
- " USE mesh_map_mod, ONLY: mesh_map_type\n"
- " USE mesh_mod, ONLY: mesh_type\n"
- " TYPE(field_type), intent(in) :: field1, field2\n"
- " INTEGER(KIND=i_def) cell\n")
- assert expected in gen_code
-
- expected = (
- " INTEGER(KIND=i_def) ncell_field1, ncpc_field1_field2_x, "
- "ncpc_field1_field2_y\n"
- " INTEGER(KIND=i_def), pointer :: "
- "cell_map_field2(:,:,:) => null()\n"
- " TYPE(mesh_map_type), pointer :: "
- "mmap_field1_field2 => null()\n")
+ " use mesh_mod, only : mesh_type\n"
+ " use mesh_map_mod, only : mesh_map_type\n"
+ " use prolong_test_kernel_mod, only : prolong_test_kernel_code\n"
+ " type(field_type), intent(in) :: field1\n"
+ " type(field_type), intent(in) :: field2\n"
+ " integer(kind=i_def) :: cell\n")
+ assert expected in code
+
+ assert "integer(kind=i_def) :: ncell_field1" in code
+ assert "integer(kind=i_def) :: ncpc_field1_field2_x" in code
+ assert "integer(kind=i_def) :: ncpc_field1_field2_y" in code
+ assert ("integer(kind=i_def), pointer :: "
+ "cell_map_field2(:,:,:) => null()\n" in code)
+ assert ("type(mesh_map_type), pointer :: "
+ "mmap_field1_field2 => null()\n" in code)
if dist_mem:
- expected += " INTEGER(KIND=i_def) max_halo_depth_mesh_field2\n"
- expected += " TYPE(mesh_type), pointer :: mesh_field2 => null()\n"
+ assert "integer(kind=i_def) :: max_halo_depth_mesh_field2" in code
+ assert "type(mesh_type), pointer :: mesh_field2 => null()\n" in code
if dist_mem:
- expected += " INTEGER(KIND=i_def) max_halo_depth_mesh_field1\n"
- expected += " TYPE(mesh_type), pointer :: mesh_field1 => null()\n"
- assert expected in gen_code
+ assert "integer(kind=i_def) :: max_halo_depth_mesh_field1" in code
+ assert "type(mesh_type), pointer :: mesh_field1 => null()\n" in code
expected = (
- " ! Look-up mesh objects and loop limits for inter-grid "
+ " ! Look-up mesh objects and loop limits for inter-grid "
"kernels\n"
- " !\n"
- " mesh_field1 => field1_proxy%vspace%get_mesh()\n")
+ " mesh_field1 => field1_proxy%vspace%get_mesh()\n")
if dist_mem:
- expected += (" max_halo_depth_mesh_field1 = mesh_field1%"
+ expected += (" max_halo_depth_mesh_field1 = mesh_field1%"
"get_halo_depth()\n")
- expected += " mesh_field2 => field2_proxy%vspace%get_mesh()\n"
+ expected += " mesh_field2 => field2_proxy%vspace%get_mesh()\n"
if dist_mem:
- expected += (" max_halo_depth_mesh_field2 = mesh_field2%"
+ expected += (" max_halo_depth_mesh_field2 = mesh_field2%"
"get_halo_depth()\n")
- expected += (" mmap_field1_field2 => mesh_field2%get_mesh_map"
+ expected += (" mmap_field1_field2 => mesh_field2%get_mesh_map"
"(mesh_field1)\n"
- " cell_map_field2 => mmap_field1_field2%"
+ " cell_map_field2 => mmap_field1_field2%"
"get_whole_cell_map()\n")
if dist_mem:
expected += (
- " ncell_field1 = mesh_field1%get_last_halo_cell("
+ " ncell_field1 = mesh_field1%get_last_halo_cell("
"depth=2)\n")
else:
expected += \
- " ncell_field1 = field1_proxy%vspace%get_ncell()\n"
+ " ncell_field1 = field1_proxy%vspace%get_ncell()\n"
expected += (
- " ncpc_field1_field2_x = mmap_field1_field2%"
+ " ncpc_field1_field2_x = mmap_field1_field2%"
"get_ntarget_cells_per_source_x()\n"
- " ncpc_field1_field2_y = mmap_field1_field2%"
+ " ncpc_field1_field2_y = mmap_field1_field2%"
"get_ntarget_cells_per_source_y()\n")
- assert expected in gen_code
+ assert expected in code
if dist_mem:
# We are writing to a continuous field on the fine mesh, we
# only need to halo swap to depth one on the coarse.
assert ("loop0_stop = mesh_field2%get_last_halo_cell(1)\n" in
- gen_code)
+ code)
expected = (
- " IF (field2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL field2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " DO cell = loop0_start, loop0_stop, 1\n")
- assert expected in gen_code
+ " if (field2_proxy%is_dirty(depth=1)) then\n"
+ " call field2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " do cell = loop0_start, loop0_stop, 1\n")
+ assert expected in code
else:
- assert "loop0_stop = field2_proxy%vspace%get_ncell()\n" in gen_code
+ assert "loop0_stop = field2_proxy%vspace%get_ncell()\n" in code
expected = (
- " CALL prolong_test_kernel_code(nlayers_field1, "
+ " call prolong_test_kernel_code(nlayers_field1, "
"cell_map_field2(:,:,cell), ncpc_field1_field2_x, "
"ncpc_field1_field2_y, ncell_field1, field1_data, "
"field2_data, ndf_w1, undf_w1, map_w1, undf_w2, "
"map_w2(:,cell))\n"
- " END DO\n")
- assert expected in gen_code
+ " enddo\n")
+ assert expected in code
if dist_mem:
- set_dirty = " CALL field1_proxy%set_dirty()\n"
- assert set_dirty in gen_code
+ set_dirty = " call field1_proxy%set_dirty()\n"
+ assert set_dirty in code
def test_field_restrict(tmpdir, dist_mem, monkeypatch, annexed):
@@ -389,72 +387,70 @@ def test_field_restrict(tmpdir, dist_mem, monkeypatch, annexed):
assert LFRicBuild(tmpdir).code_compiles(psy)
defs = (
- " USE restrict_test_kernel_mod, "
- "ONLY: restrict_test_kernel_code\n"
- " USE mesh_map_mod, ONLY: mesh_map_type\n"
- " USE mesh_mod, ONLY: mesh_type\n"
- " TYPE(field_type), intent(in) :: field1, field2\n")
+ " use mesh_mod, only : mesh_type\n"
+ " use mesh_map_mod, only : mesh_map_type\n"
+ " use restrict_test_kernel_mod, "
+ "only : restrict_test_kernel_code\n"
+ " type(field_type), intent(in) :: field1\n"
+ " type(field_type), intent(in) :: field2\n")
assert defs in output
- defs2 = (
- " INTEGER(KIND=i_def) nlayers_field1\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: field2_data => "
- "null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: field1_data => "
- "null()\n"
- " TYPE(field_proxy_type) field1_proxy, field2_proxy\n"
- " INTEGER(KIND=i_def), pointer :: "
- "map_aspc1_field1(:,:) => null(), map_aspc2_field2(:,:) => null()\n"
- " INTEGER(KIND=i_def) ndf_aspc1_field1, undf_aspc1_field1, "
- "ndf_aspc2_field2, undf_aspc2_field2\n"
- " INTEGER(KIND=i_def) ncell_field2, ncpc_field2_field1_x, "
- "ncpc_field2_field1_y\n"
- " INTEGER(KIND=i_def), pointer :: "
- "cell_map_field1(:,:,:) => null()\n"
- " TYPE(mesh_map_type), pointer :: mmap_field2_field1 => "
- "null()\n")
+ assert "integer(kind=i_def) :: nlayers_field1\n" in output
+ assert ("real(kind=r_def), pointer, dimension(:) :: field2_data => "
+ "null()\n" in output)
+ assert ("real(kind=r_def), pointer, dimension(:) :: field1_data => "
+ "null()\n" in output)
+ assert "type(field_proxy_type) :: field1_proxy\n" in output
+ assert "type(field_proxy_type) :: field2_proxy\n" in output
+ assert ("integer(kind=i_def), pointer :: map_aspc1_field1(:,:) => "
+ "null()" in output)
+ assert ("integer(kind=i_def), pointer :: map_aspc2_field2(:,:) => "
+ "null()" in output)
+ assert "integer(kind=i_def) :: ndf_aspc1_field1\n" in output
+ assert "integer(kind=i_def) :: undf_aspc1_field1\n" in output
+ assert "integer(kind=i_def) :: ndf_aspc2_field2\n" in output
+ assert "integer(kind=i_def) :: undf_aspc2_field2\n" in output
+ assert "integer(kind=i_def) :: ncell_field2\n" in output
+ assert "integer(kind=i_def) :: ncpc_field2_field1_x\n" in output
+ assert "integer(kind=i_def) :: ncpc_field2_field1_y\n" in output
+ assert ("integer(kind=i_def), pointer :: cell_map_field1(:,:,:) => "
+ "null()\n" in output)
+ assert ("type(mesh_map_type), pointer :: mmap_field2_field1 => "
+ "null()\n" in output)
if dist_mem:
- defs2 += (
- " INTEGER(KIND=i_def) max_halo_depth_mesh_field2\n"
- " TYPE(mesh_type), pointer :: mesh_field2 => null()\n"
- " INTEGER(KIND=i_def) max_halo_depth_mesh_field1\n"
- " TYPE(mesh_type), pointer :: mesh_field1 => null()\n")
- else:
- defs2 += (
- " TYPE(mesh_type), pointer :: mesh_field2 => null()\n"
- " TYPE(mesh_type), pointer :: mesh_field1 => null()\n")
- assert defs2 in output
+ assert "integer(kind=i_def) :: max_halo_depth_mesh_field2\n" in output
+ assert "integer(kind=i_def) :: max_halo_depth_mesh_field1\n" in output
+ assert "type(mesh_type), pointer :: mesh_field2 => null()\n" in output
+ assert "type(mesh_type), pointer :: mesh_field1 => null()\n" in output
inits = (
- " !\n"
- " ! Look-up mesh objects and loop limits for inter-grid kernels\n"
- " !\n"
- " mesh_field2 => field2_proxy%vspace%get_mesh()\n")
+ "\n"
+ " ! Look-up mesh objects and loop limits for inter-grid kernels\n"
+ " mesh_field2 => field2_proxy%vspace%get_mesh()\n")
if dist_mem:
- inits += (" max_halo_depth_mesh_field2 = mesh_field2%"
+ inits += (" max_halo_depth_mesh_field2 = mesh_field2%"
"get_halo_depth()\n")
- inits += " mesh_field1 => field1_proxy%vspace%get_mesh()\n"
+ inits += " mesh_field1 => field1_proxy%vspace%get_mesh()\n"
if dist_mem:
- inits += (" max_halo_depth_mesh_field1 = mesh_field1%"
+ inits += (" max_halo_depth_mesh_field1 = mesh_field1%"
"get_halo_depth()\n")
inits += (
- " mmap_field2_field1 => mesh_field1%get_mesh_map(mesh_field2)\n"
- " cell_map_field1 => mmap_field2_field1%get_whole_cell_map()\n")
+ " mmap_field2_field1 => mesh_field1%get_mesh_map(mesh_field2)\n"
+ " cell_map_field1 => mmap_field2_field1%get_whole_cell_map()\n")
if dist_mem:
- inits += (" ncell_field2 = mesh_field2%"
+ inits += (" ncell_field2 = mesh_field2%"
"get_last_halo_cell(depth=2)\n")
else:
- inits += " ncell_field2 = field2_proxy%vspace%get_ncell()\n"
+ inits += " ncell_field2 = field2_proxy%vspace%get_ncell()\n"
inits += (
- " ncpc_field2_field1_x = mmap_field2_field1%"
+ " ncpc_field2_field1_x = mmap_field2_field1%"
"get_ntarget_cells_per_source_x()\n"
- " ncpc_field2_field1_y = mmap_field2_field1%"
+ " ncpc_field2_field1_y = mmap_field2_field1%"
"get_ntarget_cells_per_source_y()\n"
- " !\n"
- " ! Look-up dofmaps for each function space\n"
- " !\n"
- " map_aspc1_field1 => field1_proxy%vspace%get_whole_dofmap()\n"
- " map_aspc2_field2 => field2_proxy%vspace%get_whole_dofmap()\n")
+ "\n"
+ " ! Look-up dofmaps for each function space\n"
+ " map_aspc1_field1 => field1_proxy%vspace%get_whole_dofmap()\n"
+ " map_aspc2_field2 => field2_proxy%vspace%get_whole_dofmap()\n")
assert inits in output
if dist_mem:
@@ -464,40 +460,37 @@ def test_field_restrict(tmpdir, dist_mem, monkeypatch, annexed):
# up-to-date values for it in the L1 halo.
if not annexed:
halo_exchs = (
- " ! Call kernels and communication routines\n"
- " !\n"
- " IF (field1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL field1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (field2_proxy%is_dirty(depth=2)) THEN\n"
- " CALL field2_proxy%halo_exchange(depth=2)\n"
- " END IF\n"
- " DO cell = loop0_start, loop0_stop, 1\n")
+ " ! Call kernels and communication routines\n"
+ " if (field1_proxy%is_dirty(depth=1)) then\n"
+ " call field1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (field2_proxy%is_dirty(depth=2)) then\n"
+ " call field2_proxy%halo_exchange(depth=2)\n"
+ " end if\n"
+ " do cell = loop0_start, loop0_stop, 1\n")
else:
halo_exchs = (
- " ! Call kernels and communication routines\n"
- " !\n"
- " IF (field2_proxy%is_dirty(depth=2)) THEN\n"
- " CALL field2_proxy%halo_exchange(depth=2)\n"
- " END IF\n"
- " DO cell = loop0_start, loop0_stop, 1\n")
+ " ! Call kernels and communication routines\n"
+ " if (field2_proxy%is_dirty(depth=2)) then\n"
+ " call field2_proxy%halo_exchange(depth=2)\n"
+ " end if\n"
+ " do cell = loop0_start, loop0_stop, 1\n")
assert halo_exchs in output
    # We pass the whole dofmap for the fine mesh (which we are reading from).
# This is associated with the second kernel argument.
kern_call = (
- " CALL restrict_test_kernel_code(nlayers_field1, "
+ " call restrict_test_kernel_code(nlayers_field1, "
"cell_map_field1(:,:,cell), ncpc_field2_field1_x, "
"ncpc_field2_field1_y, ncell_field2, "
"field1_data, field2_data, undf_aspc1_field1, "
"map_aspc1_field1(:,cell), ndf_aspc2_field2, undf_aspc2_field2, "
"map_aspc2_field2)\n"
- " END DO\n"
- " !\n")
+ " enddo\n")
assert kern_call in output
if dist_mem:
- set_dirty = " CALL field1_proxy%set_dirty()\n"
+ set_dirty = " call field1_proxy%set_dirty()\n"
assert set_dirty in output
@@ -541,58 +534,57 @@ def test_restrict_prolong_chain(tmpdir, dist_mem):
output = str(psy.gen)
assert LFRicBuild(tmpdir).code_compiles(psy)
expected = (
- " ! Look-up mesh objects and loop limits for inter-grid "
+ " ! Look-up mesh objects and loop limits for inter-grid "
"kernels\n"
- " !\n"
- " mesh_fld_m => fld_m_proxy%vspace%get_mesh()\n")
+ " mesh_fld_m => fld_m_proxy%vspace%get_mesh()\n")
if dist_mem:
expected += (
- " max_halo_depth_mesh_fld_m = mesh_fld_m%get_halo_depth()\n"
- " mesh_cmap_fld_c => cmap_fld_c_proxy%vspace%get_mesh()\n"
- " max_halo_depth_mesh_cmap_fld_c = "
+ " max_halo_depth_mesh_fld_m = mesh_fld_m%get_halo_depth()\n"
+ " mesh_cmap_fld_c => cmap_fld_c_proxy%vspace%get_mesh()\n"
+ " max_halo_depth_mesh_cmap_fld_c = "
"mesh_cmap_fld_c%get_halo_depth()\n"
)
else:
- expected += (" mesh_cmap_fld_c => "
+ expected += (" mesh_cmap_fld_c => "
"cmap_fld_c_proxy%vspace%get_mesh()\n")
expected += (
- " mmap_fld_m_cmap_fld_c => "
+ " mmap_fld_m_cmap_fld_c => "
"mesh_cmap_fld_c%get_mesh_map(mesh_fld_m)\n"
- " cell_map_cmap_fld_c => "
+ " cell_map_cmap_fld_c => "
"mmap_fld_m_cmap_fld_c%get_whole_cell_map()\n")
assert expected in output
if dist_mem:
expected = (
- " ncell_fld_m = mesh_fld_m%get_last_halo_cell(depth=2)\n"
- " ncpc_fld_m_cmap_fld_c_x = mmap_fld_m_cmap_fld_c%"
+ " ncell_fld_m = mesh_fld_m%get_last_halo_cell(depth=2)\n"
+ " ncpc_fld_m_cmap_fld_c_x = mmap_fld_m_cmap_fld_c%"
"get_ntarget_cells_per_source_x()\n"
- " ncpc_fld_m_cmap_fld_c_y = mmap_fld_m_cmap_fld_c%"
+ " ncpc_fld_m_cmap_fld_c_y = mmap_fld_m_cmap_fld_c%"
"get_ntarget_cells_per_source_y()\n"
- " mesh_fld_f => fld_f_proxy%vspace%get_mesh()\n"
- " max_halo_depth_mesh_fld_f = mesh_fld_f%get_halo_depth()\n"
- " mmap_fld_f_fld_m => mesh_fld_m%get_mesh_map(mesh_fld_f)\n"
- " cell_map_fld_m => mmap_fld_f_fld_m%get_whole_cell_map()\n"
- " ncell_fld_f = mesh_fld_f%get_last_halo_cell(depth=2)\n"
- " ncpc_fld_f_fld_m_x = mmap_fld_f_fld_m%"
+ " mesh_fld_f => fld_f_proxy%vspace%get_mesh()\n"
+ " max_halo_depth_mesh_fld_f = mesh_fld_f%get_halo_depth()\n"
+ " mmap_fld_f_fld_m => mesh_fld_m%get_mesh_map(mesh_fld_f)\n"
+ " cell_map_fld_m => mmap_fld_f_fld_m%get_whole_cell_map()\n"
+ " ncell_fld_f = mesh_fld_f%get_last_halo_cell(depth=2)\n"
+ " ncpc_fld_f_fld_m_x = mmap_fld_f_fld_m%"
"get_ntarget_cells_per_source_x()\n"
- " ncpc_fld_f_fld_m_y = mmap_fld_f_fld_m%"
+ " ncpc_fld_f_fld_m_y = mmap_fld_f_fld_m%"
"get_ntarget_cells_per_source_y()\n")
else:
expected = (
- " ncell_fld_m = fld_m_proxy%vspace%get_ncell()\n"
- " ncpc_fld_m_cmap_fld_c_x = "
+ " ncell_fld_m = fld_m_proxy%vspace%get_ncell()\n"
+ " ncpc_fld_m_cmap_fld_c_x = "
"mmap_fld_m_cmap_fld_c%get_ntarget_cells_per_source_x()\n"
- " ncpc_fld_m_cmap_fld_c_y = "
+ " ncpc_fld_m_cmap_fld_c_y = "
"mmap_fld_m_cmap_fld_c%get_ntarget_cells_per_source_y()\n"
- " mesh_fld_f => fld_f_proxy%vspace%get_mesh()\n"
- " mmap_fld_f_fld_m => mesh_fld_m%get_mesh_map(mesh_fld_f)\n"
- " cell_map_fld_m => mmap_fld_f_fld_m%get_whole_cell_map()\n"
- " ncell_fld_f = fld_f_proxy%vspace%get_ncell()\n"
- " ncpc_fld_f_fld_m_x = mmap_fld_f_fld_m%"
+ " mesh_fld_f => fld_f_proxy%vspace%get_mesh()\n"
+ " mmap_fld_f_fld_m => mesh_fld_m%get_mesh_map(mesh_fld_f)\n"
+ " cell_map_fld_m => mmap_fld_f_fld_m%get_whole_cell_map()\n"
+ " ncell_fld_f = fld_f_proxy%vspace%get_ncell()\n"
+ " ncpc_fld_f_fld_m_x = mmap_fld_f_fld_m%"
"get_ntarget_cells_per_source_x()\n"
- " ncpc_fld_f_fld_m_y = mmap_fld_f_fld_m%"
+ " ncpc_fld_f_fld_m_y = mmap_fld_f_fld_m%"
"get_ntarget_cells_per_source_y()\n")
assert expected in output
@@ -605,29 +597,27 @@ def test_restrict_prolong_chain(tmpdir, dist_mem):
        # We have two potential halo exchanges before the 1st prolong
        # because of a continuous "read"er and an "inc" writer.
expected = (
- " IF (fld_m_proxy%is_dirty(depth=1)) THEN\n"
- " CALL fld_m_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (cmap_fld_c_proxy%is_dirty(depth=1)) THEN\n"
- " CALL cmap_fld_c_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " DO cell = loop0_start, loop0_stop, 1")
+ " if (fld_m_proxy%is_dirty(depth=1)) then\n"
+ " call fld_m_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (cmap_fld_c_proxy%is_dirty(depth=1)) then\n"
+ " call cmap_fld_c_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " do cell = loop0_start, loop0_stop, 1")
assert expected in output
assert "loop0_stop = mesh_cmap_fld_c%get_last_halo_cell(1)\n" in output
        # Since we loop into the L1 halo of the coarse mesh, the L1 halo
        # of the fine(r) mesh will now be clean. Therefore, no halo swap
        # is required for fld_m before the next prolongation.
expected = (
- " ! Set halos dirty/clean for fields modified in the "
- "above loop\n"
- " !\n"
- " CALL fld_m_proxy%set_dirty()\n"
- " CALL fld_m_proxy%set_clean(1)\n"
- " !\n"
- " IF (fld_f_proxy%is_dirty(depth=1)) THEN\n"
- " CALL fld_f_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " DO cell = loop1_start, loop1_stop, 1\n")
+ " ! Set halos dirty/clean for fields modified in the "
+ "above loop(s)\n"
+ " call fld_m_proxy%set_dirty()\n"
+ " call fld_m_proxy%set_clean(1)\n"
+ " if (fld_f_proxy%is_dirty(depth=1)) then\n"
+ " call fld_f_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " do cell = loop1_start, loop1_stop, 1\n")
assert expected in output
assert "loop1_stop = mesh_fld_m%get_last_halo_cell(1)\n" in output
# Again the L1 halo for fld_f will now be clean but for restriction
@@ -635,12 +625,11 @@ def test_restrict_prolong_chain(tmpdir, dist_mem):
# fld_f because the above loop over the coarser fld_m will go
        # into the L2 halo of fld_f. However, it is a continuous field,
# so only the L1 halo will actually be clean.
- expected = (" CALL fld_f_proxy%set_dirty()\n"
- " CALL fld_f_proxy%set_clean(1)\n"
- " !\n"
- " CALL fld_f_proxy%halo_exchange(depth=2)\n"
- " DO cell = loop2_start, loop2_stop, 1\n"
- " CALL restrict_test_kernel_code")
+ expected = (" call fld_f_proxy%set_dirty()\n"
+ " call fld_f_proxy%set_clean(1)\n"
+ " call fld_f_proxy%halo_exchange(depth=2)\n"
+ " do cell = loop2_start, loop2_stop, 1\n"
+ " call restrict_test_kernel_code")
assert expected in output
assert "loop2_stop = mesh_fld_m%get_last_halo_cell(1)\n" in output
@@ -648,11 +637,10 @@ def test_restrict_prolong_chain(tmpdir, dist_mem):
# clean. There's no set_clean() call on fld_m because it is
        # only updated out to the L1 halo and it is a continuous field,
# so the shared dofs in the L1 halo will still be dirty.
- expected = (" CALL fld_m_proxy%set_dirty()\n"
- " !\n"
- " CALL fld_m_proxy%halo_exchange(depth=2)\n"
- " DO cell = loop3_start, loop3_stop, 1\n"
- " CALL restrict_test_kernel_code")
+ expected = (" call fld_m_proxy%set_dirty()\n"
+ " call fld_m_proxy%halo_exchange(depth=2)\n"
+ " do cell = loop3_start, loop3_stop, 1\n"
+ " call restrict_test_kernel_code")
assert expected in output
assert "loop3_stop = mesh_cmap_fld_c%get_last_halo_cell(1)\n" in output
else:
@@ -661,28 +649,28 @@ def test_restrict_prolong_chain(tmpdir, dist_mem):
assert "loop2_stop = fld_m_proxy%vspace%get_ncell()\n" in output
assert "loop3_stop = cmap_fld_c_proxy%vspace%get_ncell()\n" in output
expected = (
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL prolong_test_kernel_code(nlayers_fld_m, "
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call prolong_test_kernel_code(nlayers_fld_m, "
"cell_map_cmap_fld_c(:,:,cell), ncpc_fld_m_cmap_fld_c_x, "
"ncpc_fld_m_cmap_fld_c_y, ncell_fld_m, fld_m_data, "
"cmap_fld_c_data, ndf_w1, undf_w1, map_w1, undf_w2, "
"map_w2(:,cell))\n"
- " END DO\n"
- " DO cell = loop1_start, loop1_stop, 1\n"
- " CALL prolong_test_kernel_code(nlayers_fld_f, "
+ " enddo\n"
+ " do cell = loop1_start, loop1_stop, 1\n"
+ " call prolong_test_kernel_code(nlayers_fld_f, "
"cell_map_fld_m(:,:,cell), ncpc_fld_f_fld_m_x, ncpc_fld_f_fld_m_y,"
" ncell_fld_f, fld_f_data, fld_m_data, ndf_w1, undf_w1, map_w1, "
"undf_w2, map_w2(:,cell))\n"
- " END DO\n"
- " DO cell = loop2_start, loop2_stop, 1\n"
- " CALL restrict_test_kernel_code(nlayers_fld_m, "
+ " enddo\n"
+ " do cell = loop2_start, loop2_stop, 1\n"
+ " call restrict_test_kernel_code(nlayers_fld_m, "
"cell_map_fld_m(:,:,cell), ncpc_fld_f_fld_m_x, ncpc_fld_f_fld_m_y,"
" ncell_fld_f, fld_m_data, fld_f_data, undf_aspc1_fld_m, "
"map_aspc1_fld_m(:,cell), ndf_aspc2_fld_f, undf_aspc2_fld_f, "
"map_aspc2_fld_f)\n"
- " END DO\n"
- " DO cell = loop3_start, loop3_stop, 1\n"
- " CALL restrict_test_kernel_code(nlayers_cmap_fld_c, "
+ " enddo\n"
+ " do cell = loop3_start, loop3_stop, 1\n"
+ " call restrict_test_kernel_code(nlayers_cmap_fld_c, "
"cell_map_cmap_fld_c(:,:,cell), ncpc_fld_m_cmap_fld_c_x, "
"ncpc_fld_m_cmap_fld_c_y, ncell_fld_m, cmap_fld_c_data, "
"fld_m_data, undf_aspc1_cmap_fld_c, map_aspc1_cmap_fld_c"
@@ -703,7 +691,7 @@ def test_fine_halo_read():
assert hexch._compute_halo_depth().value == '2'
call = schedule.children[6]
field = call.args[1]
- hra = HaloReadAccess(field, schedule.symbol_table)
+ hra = HaloReadAccess(field, schedule)
assert hra._var_depth.debug_string() == "2 * 1"
@@ -730,8 +718,8 @@ def test_prolong_vector(tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
- assert "TYPE(field_type), intent(in) :: field1(3)" in output
- assert "TYPE(field_proxy_type) field1_proxy(3)" in output
+ assert "type(field_type), dimension(3), intent(in) :: field1" in output
+ assert "type(field_proxy_type), dimension(3) :: field1_proxy" in output
# Make sure we always index into the field arrays
assert " field1%" not in output
assert " field2%" not in output
@@ -740,12 +728,12 @@ def test_prolong_vector(tmpdir):
"field2_2_data, field2_3_data, ndf_w1" in output)
for idx in [1, 2, 3]:
assert (
- f" IF (field2_proxy({idx})%is_dirty(depth=1)) THEN\n"
- f" CALL field2_proxy({idx})%halo_exchange(depth=1)\n"
- f" END IF\n" in output)
+ f" if (field2_proxy({idx})%is_dirty(depth=1)) then\n"
+ f" call field2_proxy({idx})%halo_exchange(depth=1)\n"
+ f" end if\n" in output)
assert f"field1_proxy({idx}) = field1({idx})%get_proxy()" in output
- assert f"CALL field1_proxy({idx})%set_dirty()" in output
- assert f"CALL field1_proxy({idx})%set_clean(1)" in output
+ assert f"call field1_proxy({idx})%set_dirty()" in output
+ assert f"call field1_proxy({idx})%set_clean(1)" in output
def test_no_stub_gen():
@@ -772,35 +760,32 @@ def test_restrict_prolong_chain_anyd(tmpdir):
output = str(psy.gen)
# Check maps for any_discontinuous_space
expected = (
- " map_adspc1_fld_m => fld_m_proxy%vspace%get_whole_dofmap()\n"
- " map_adspc2_fld_f => fld_f_proxy%vspace%get_whole_dofmap()\n"
- " map_adspc1_fld_c => fld_c_proxy%vspace%get_whole_dofmap()\n"
- " map_adspc2_fld_m => fld_m_proxy%vspace%get_whole_dofmap()\n")
+ " map_adspc1_fld_m => fld_m_proxy%vspace%get_whole_dofmap()\n"
+ " map_adspc2_fld_f => fld_f_proxy%vspace%get_whole_dofmap()\n"
+ " map_adspc1_fld_c => fld_c_proxy%vspace%get_whole_dofmap()\n"
+ " map_adspc2_fld_m => fld_m_proxy%vspace%get_whole_dofmap()\n")
assert expected in output
    # Check ndf and undf initialisations for the second restrict kernel
# (fld_m to fld_c)
expected = (
- " ! Initialise number of DoFs for adspc1_fld_c\n"
- " !\n"
- " ndf_adspc1_fld_c = fld_c_proxy%vspace%get_ndf()\n"
- " undf_adspc1_fld_c = fld_c_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for adspc2_fld_m\n"
- " !\n"
- " ndf_adspc2_fld_m = fld_m_proxy%vspace%get_ndf()\n"
- " undf_adspc2_fld_m = fld_m_proxy%vspace%get_undf()\n")
+ " ! Initialise number of DoFs for adspc1_fld_c\n"
+ " ndf_adspc1_fld_c = fld_c_proxy%vspace%get_ndf()\n"
+ " undf_adspc1_fld_c = fld_c_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for adspc2_fld_m\n"
+ " ndf_adspc2_fld_m = fld_m_proxy%vspace%get_ndf()\n"
+ " undf_adspc2_fld_m = fld_m_proxy%vspace%get_undf()\n")
assert expected in output
    # Check an example of a restrict loop and all the upper loop bounds
expected = (
- " ! Call kernels and communication routines\n"
- " !\n"
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL restrict_kernel_code(nlayers_fld_m, "
+ " ! Call kernels and communication routines\n"
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call restrict_kernel_code(nlayers_fld_m, "
"cell_map_fld_m(:,:,cell), ncpc_fld_f_fld_m_x, ncpc_fld_f_fld_m_y, "
"ncell_fld_f, fld_m_data, fld_f_data, undf_adspc1_fld_m, "
"map_adspc1_fld_m(:,cell), ndf_adspc2_fld_f, "
"undf_adspc2_fld_f, map_adspc2_fld_f)\n"
- " END DO\n")
+ " enddo\n")
assert expected in output
assert "loop0_stop = mesh_fld_m%get_last_edge_cell()\n" in output
assert "loop1_stop = mesh_fld_c%get_last_edge_cell()" in output
@@ -809,6 +794,8 @@ def test_restrict_prolong_chain_anyd(tmpdir):
# Check compilation
assert LFRicBuild(tmpdir).code_compiles(psy)
+ psy = PSyFactory(API, distributed_memory=True).create(invoke_info)
+ schedule = psy.invokes.invoke_list[0].schedule
# Now do some transformations
otrans = DynamoOMPParallelLoopTrans()
ctrans = Dynamo0p3ColourTrans()
@@ -819,24 +806,24 @@ def test_restrict_prolong_chain_anyd(tmpdir):
otrans.apply(schedule[4].loop_body[0])
output = str(psy.gen)
expected = (
- " !$omp parallel do default(shared), private(cell), "
+ " !$omp parallel do default(shared), private(cell), "
"schedule(static)\n"
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL restrict_kernel_code")
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call restrict_kernel_code")
assert expected in output
assert "loop0_stop = mesh_fld_m%get_last_edge_cell()\n" in output
expected = (
- " DO colour = loop2_start, loop2_stop, 1\n"
- " !$omp parallel do default(shared), private(cell), "
+ " do colour = loop2_start, loop2_stop, 1\n"
+ " !$omp parallel do default(shared), private(cell), "
"schedule(static)\n"
- " DO cell = loop3_start, "
+ " do cell = loop3_start, "
"last_halo_cell_all_colours_fld_c(colour,1), 1\n"
- " CALL prolong_test_kernel_code")
+ " call prolong_test_kernel_code")
assert expected in output
assert "loop2_stop = ncolour_fld_c\n" in output
# Try to apply colouring to the second restrict kernel
with pytest.raises(TransformationError) as excinfo:
- ctrans.apply(schedule.children[1])
+ ctrans.apply(schedule.walk(Loop, stop_type=Loop)[1])
assert ("Loops iterating over a discontinuous function space "
"are not currently supported." in str(excinfo.value))
diff --git a/src/psyclone/tests/dynamo0p3_quadrature_test.py b/src/psyclone/tests/dynamo0p3_quadrature_test.py
index e1a2e749f1..0f3fcb58b9 100644
--- a/src/psyclone/tests/dynamo0p3_quadrature_test.py
+++ b/src/psyclone/tests/dynamo0p3_quadrature_test.py
@@ -48,9 +48,9 @@
from psyclone.domain.lfric import LFRicConstants, LFRicKern, LFRicKernMetadata
from psyclone.dynamo0p3 import DynBasisFunctions, qr_basis_alloc_args
from psyclone.errors import InternalError
-from psyclone.f2pygen import ModuleGen
from psyclone.parse.algorithm import KernelCall, parse
from psyclone.psyGen import CodedKern, PSyFactory
+from psyclone.psyir.symbols import DataSymbol, UnresolvedType
from psyclone.tests.lfric_build import LFRicBuild
# constants
@@ -76,163 +76,168 @@ def test_field_xyoz(tmpdir):
psy = PSyFactory(API, distributed_memory=True).create(invoke_info)
generated_code = str(psy.gen)
- assert LFRicBuild(tmpdir).code_compiles(psy)
-
module_declns = (
- " USE constants_mod, ONLY: r_def, i_def\n"
- " USE field_mod, ONLY: field_type, field_proxy_type\n")
+ " use constants_mod\n"
+ " use field_mod, only : field_proxy_type, field_type\n")
assert module_declns in generated_code
- output_decls = (
- " SUBROUTINE invoke_0_testkern_qr_type(f1, f2, m1, a, m2, istp,"
+ assert (
+ " subroutine invoke_0_testkern_qr_type(f1, f2, m1, a, m2, istp,"
" qr)\n"
- " USE testkern_qr_mod, ONLY: testkern_qr_code\n"
- " USE quadrature_xyoz_mod, ONLY: quadrature_xyoz_type, "
- "quadrature_xyoz_proxy_type\n"
- " USE function_space_mod, ONLY: BASIS, DIFF_BASIS\n"
- " USE mesh_mod, ONLY: mesh_type\n"
- " REAL(KIND=r_def), intent(in) :: a\n"
- " INTEGER(KIND=i_def), intent(in) :: istp\n"
- " TYPE(field_type), intent(in) :: f1, f2, m1, m2\n"
- " TYPE(quadrature_xyoz_type), intent(in) :: qr\n"
- " INTEGER(KIND=i_def) cell\n"
- " INTEGER(KIND=i_def) loop0_start, loop0_stop\n"
- " REAL(KIND=r_def), allocatable :: basis_w1_qr(:,:,:,:), "
- "diff_basis_w2_qr(:,:,:,:), basis_w3_qr(:,:,:,:), "
- "diff_basis_w3_qr(:,:,:,:)\n"
- " INTEGER(KIND=i_def) dim_w1, diff_dim_w2, dim_w3, diff_dim_w3\n"
- " REAL(KIND=r_def), pointer :: weights_xy_qr(:) => null(), "
- "weights_z_qr(:) => null()\n"
- " INTEGER(KIND=i_def) np_xy_qr, np_z_qr\n"
- " INTEGER(KIND=i_def) nlayers_f1\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m1_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f1_data => null()\n"
- " TYPE(field_proxy_type) f1_proxy, f2_proxy, m1_proxy, m2_proxy\n"
- " TYPE(quadrature_xyoz_proxy_type) qr_proxy\n"
- " INTEGER(KIND=i_def), pointer :: map_w1(:,:) => null(), "
- "map_w2(:,:) => null(), map_w3(:,:) => null()\n"
- " INTEGER(KIND=i_def) ndf_w1, undf_w1, ndf_w2, undf_w2, "
- "ndf_w3, undf_w3\n")
- assert output_decls in generated_code
+ " use mesh_mod, only : mesh_type\n"
+ " use function_space_mod, only : BASIS, DIFF_BASIS\n"
+ " use quadrature_xyoz_mod, only : quadrature_xyoz_proxy_type, "
+ "quadrature_xyoz_type\n"
+ " use testkern_qr_mod, only : testkern_qr_code\n" in generated_code)
+ assert """
+ type(field_type), intent(in) :: f1
+ type(field_type), intent(in) :: f2
+ type(field_type), intent(in) :: m1
+ real(kind=r_def), intent(in) :: a
+ type(field_type), intent(in) :: m2
+ integer(kind=i_def), intent(in) :: istp
+ type(quadrature_xyoz_type), intent(in) :: qr
+ integer(kind=i_def) :: cell
+ integer(kind=i_def) :: loop0_start
+ integer(kind=i_def) :: loop0_stop
+ type(mesh_type), pointer :: mesh => null()
+ integer(kind=i_def) :: max_halo_depth_mesh
+ real(kind=r_def), pointer, dimension(:) :: f1_data => null()
+ real(kind=r_def), pointer, dimension(:) :: f2_data => null()
+ real(kind=r_def), pointer, dimension(:) :: m1_data => null()
+ real(kind=r_def), pointer, dimension(:) :: m2_data => null()
+ integer(kind=i_def) :: nlayers_f1
+ integer(kind=i_def) :: ndf_w1
+ integer(kind=i_def) :: undf_w1
+ integer(kind=i_def) :: ndf_w2
+ integer(kind=i_def) :: undf_w2
+ integer(kind=i_def) :: ndf_w3
+ integer(kind=i_def) :: undf_w3
+ integer(kind=i_def), pointer :: map_w1(:,:) => null()
+ integer(kind=i_def), pointer :: map_w2(:,:) => null()
+ integer(kind=i_def), pointer :: map_w3(:,:) => null()
+ type(field_proxy_type) :: f1_proxy
+ type(field_proxy_type) :: f2_proxy
+ type(field_proxy_type) :: m1_proxy
+ type(field_proxy_type) :: m2_proxy
+ integer(kind=i_def) :: np_xy_qr
+ integer(kind=i_def) :: np_z_qr
+ real(kind=r_def), pointer :: weights_xy_qr(:) => null()
+ real(kind=r_def), pointer :: weights_z_qr(:) => null()
+ type(quadrature_xyoz_proxy_type) :: qr_proxy
+ integer(kind=i_def) :: dim_w1
+ integer(kind=i_def) :: diff_dim_w2
+ integer(kind=i_def) :: dim_w3
+ integer(kind=i_def) :: diff_dim_w3
+ real(kind=r_def), allocatable :: basis_w1_qr(:,:,:,:)
+ real(kind=r_def), allocatable :: diff_basis_w2_qr(:,:,:,:)
+ real(kind=r_def), allocatable :: basis_w3_qr(:,:,:,:)
+ real(kind=r_def), allocatable :: diff_basis_w3_qr(:,:,:,:)
+""" in generated_code
init_output = (
- " !\n"
- " ! Initialise field and/or operator proxies\n"
- " !\n"
- " f1_proxy = f1%get_proxy()\n"
- " f1_data => f1_proxy%data\n"
- " f2_proxy = f2%get_proxy()\n"
- " f2_data => f2_proxy%data\n"
- " m1_proxy = m1%get_proxy()\n"
- " m1_data => m1_proxy%data\n"
- " m2_proxy = m2%get_proxy()\n"
- " m2_data => m2_proxy%data\n"
- " !\n"
- " ! Initialise number of layers\n"
- " !\n"
- " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
- " !\n"
- " ! Create a mesh object\n"
- " !\n"
- " mesh => f1_proxy%vspace%get_mesh()\n"
- " max_halo_depth_mesh = mesh%get_halo_depth()\n"
- " !\n"
- " ! Look-up dofmaps for each function space\n"
- " !\n"
- " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
- " map_w2 => f2_proxy%vspace%get_whole_dofmap()\n"
- " map_w3 => m2_proxy%vspace%get_whole_dofmap()\n"
- " !\n"
- " ! Initialise number of DoFs for w1\n"
- " !\n"
- " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
- " undf_w1 = f1_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2\n"
- " !\n"
- " ndf_w2 = f2_proxy%vspace%get_ndf()\n"
- " undf_w2 = f2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w3\n"
- " !\n"
- " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
- " undf_w3 = m2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Look-up quadrature variables\n"
- " !\n"
- " qr_proxy = qr%get_quadrature_proxy()\n"
- " np_xy_qr = qr_proxy%np_xy\n"
- " np_z_qr = qr_proxy%np_z\n"
- " weights_xy_qr => qr_proxy%weights_xy\n"
- " weights_z_qr => qr_proxy%weights_z\n")
+ "\n"
+ " ! Initialise field and/or operator proxies\n"
+ " f1_proxy = f1%get_proxy()\n"
+ " f1_data => f1_proxy%data\n"
+ " f2_proxy = f2%get_proxy()\n"
+ " f2_data => f2_proxy%data\n"
+ " m1_proxy = m1%get_proxy()\n"
+ " m1_data => m1_proxy%data\n"
+ " m2_proxy = m2%get_proxy()\n"
+ " m2_data => m2_proxy%data\n"
+ "\n"
+ " ! Initialise number of layers\n"
+ " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
+ "\n"
+ " ! Create a mesh object\n"
+ " mesh => f1_proxy%vspace%get_mesh()\n"
+ " max_halo_depth_mesh = mesh%get_halo_depth()\n"
+ "\n"
+ " ! Look-up dofmaps for each function space\n"
+ " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2 => f2_proxy%vspace%get_whole_dofmap()\n"
+ " map_w3 => m2_proxy%vspace%get_whole_dofmap()\n"
+ "\n"
+ " ! Initialise number of DoFs for w1\n"
+ " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
+ " undf_w1 = f1_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2\n"
+ " ndf_w2 = f2_proxy%vspace%get_ndf()\n"
+ " undf_w2 = f2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w3\n"
+ " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
+ " undf_w3 = m2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Look-up quadrature variables\n"
+ " qr_proxy = qr%get_quadrature_proxy()\n"
+ " np_xy_qr = qr_proxy%np_xy\n"
+ " np_z_qr = qr_proxy%np_z\n"
+ " weights_xy_qr => qr_proxy%weights_xy\n"
+ " weights_z_qr => qr_proxy%weights_z\n")
assert init_output in generated_code
compute_output = (
- " !\n"
- " ! Allocate basis/diff-basis arrays\n"
- " !\n"
- " dim_w1 = f1_proxy%vspace%get_dim_space()\n"
- " diff_dim_w2 = f2_proxy%vspace%get_dim_space_diff()\n"
- " dim_w3 = m2_proxy%vspace%get_dim_space()\n"
- " diff_dim_w3 = m2_proxy%vspace%get_dim_space_diff()\n"
- " ALLOCATE (basis_w1_qr(dim_w1, ndf_w1, np_xy_qr, np_z_qr))\n"
- " ALLOCATE (diff_basis_w2_qr(diff_dim_w2, ndf_w2, np_xy_qr, "
+ "\n"
+ " ! Allocate basis/diff-basis arrays\n"
+ " dim_w1 = f1_proxy%vspace%get_dim_space()\n"
+ " diff_dim_w2 = f2_proxy%vspace%get_dim_space_diff()\n"
+ " dim_w3 = m2_proxy%vspace%get_dim_space()\n"
+ " diff_dim_w3 = m2_proxy%vspace%get_dim_space_diff()\n"
+ " ALLOCATE(basis_w1_qr(dim_w1,ndf_w1,np_xy_qr,np_z_qr))\n"
+ " ALLOCATE(diff_basis_w2_qr(diff_dim_w2,ndf_w2,np_xy_qr,"
"np_z_qr))\n"
- " ALLOCATE (basis_w3_qr(dim_w3, ndf_w3, np_xy_qr, np_z_qr))\n"
- " ALLOCATE (diff_basis_w3_qr(diff_dim_w3, ndf_w3, np_xy_qr, "
+ " ALLOCATE(basis_w3_qr(dim_w3,ndf_w3,np_xy_qr,np_z_qr))\n"
+ " ALLOCATE(diff_basis_w3_qr(diff_dim_w3,ndf_w3,np_xy_qr,"
"np_z_qr))\n"
- " !\n"
- " ! Compute basis/diff-basis arrays\n"
- " !\n"
- " CALL qr%compute_function(BASIS, f1_proxy%vspace, dim_w1, "
+ "\n"
+ " ! Compute basis/diff-basis arrays\n"
+ " call qr%compute_function(BASIS, f1_proxy%vspace, dim_w1, "
"ndf_w1, basis_w1_qr)\n"
- " CALL qr%compute_function(DIFF_BASIS, f2_proxy%vspace, "
+ " call qr%compute_function(DIFF_BASIS, f2_proxy%vspace, "
"diff_dim_w2, ndf_w2, diff_basis_w2_qr)\n"
- " CALL qr%compute_function(BASIS, m2_proxy%vspace, dim_w3, "
+ " call qr%compute_function(BASIS, m2_proxy%vspace, dim_w3, "
"ndf_w3, basis_w3_qr)\n"
- " CALL qr%compute_function(DIFF_BASIS, m2_proxy%vspace, "
+ " call qr%compute_function(DIFF_BASIS, m2_proxy%vspace, "
"diff_dim_w3, ndf_w3, diff_basis_w3_qr)\n"
- " !\n"
- " ! Set-up all of the loop bounds\n"
- " !\n"
- " loop0_start = 1\n"
- " loop0_stop = mesh%get_last_halo_cell(1)\n"
- " !\n"
- " ! Call kernels and communication routines\n"
- " !\n"
- " IF (f1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_qr_code(nlayers_f1, f1_data, f2_data, "
+ "\n"
+ " ! Set-up all of the loop bounds\n"
+ " loop0_start = 1\n"
+ " loop0_stop = mesh%get_last_halo_cell(1)\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " if (f1_proxy%is_dirty(depth=1)) then\n"
+ " call f1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f2_proxy%is_dirty(depth=1)) then\n"
+ " call f2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m1_proxy%is_dirty(depth=1)) then\n"
+ " call m1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m2_proxy%is_dirty(depth=1)) then\n"
+ " call m2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_qr_code(nlayers_f1, f1_data, f2_data, "
"m1_data, a, m2_data, istp, ndf_w1, undf_w1, "
"map_w1(:,cell), basis_w1_qr, ndf_w2, undf_w2, map_w2(:,cell), "
"diff_basis_w2_qr, ndf_w3, undf_w3, map_w3(:,cell), basis_w3_qr, "
"diff_basis_w3_qr, np_xy_qr, np_z_qr, weights_xy_qr, weights_z_qr)\n"
- " END DO\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the above loop\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " !\n"
- " !\n"
- " ! Deallocate basis arrays\n"
- " !\n"
- " DEALLOCATE (basis_w1_qr, basis_w3_qr, diff_basis_w2_qr, "
+ " enddo\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the above loop(s)"
+ "\n"
+ " call f1_proxy%set_dirty()\n"
+ "\n"
+ " ! Deallocate basis arrays\n"
+ " DEALLOCATE(basis_w1_qr, basis_w3_qr, diff_basis_w2_qr, "
"diff_basis_w3_qr)\n"
- " !\n"
- " END SUBROUTINE invoke_0_testkern_qr_type"
+ "\n"
+ " end subroutine invoke_0_testkern_qr_type"
)
assert compute_output in generated_code
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_edge_qr(tmpdir, dist_mem):
@@ -242,36 +247,36 @@ def test_edge_qr(tmpdir, dist_mem):
api=API)
psy = PSyFactory(API, distributed_memory=dist_mem).create(invoke_info)
assert LFRicBuild(tmpdir).code_compiles(psy)
- gen_code = str(psy.gen).lower()
+ code = str(psy.gen).lower()
- assert ("use quadrature_edge_mod, only: quadrature_edge_type, "
- "quadrature_edge_proxy_type\n" in gen_code)
- assert "type(quadrature_edge_type), intent(in) :: qr\n" in gen_code
- assert "integer(kind=i_def) np_xyz_qr, nedges_qr" in gen_code
+ assert ("use quadrature_edge_mod, only : quadrature_edge_proxy_type, "
+ "quadrature_edge_type\n" in code)
+ assert "type(quadrature_edge_type), intent(in) :: qr\n" in code
+ assert "integer(kind=i_def) :: np_xyz_qr" in code
+ assert "integer(kind=i_def) :: nedges_qr" in code
assert (
- " qr_proxy = qr%get_quadrature_proxy()\n"
- " np_xyz_qr = qr_proxy%np_xyz\n"
- " nedges_qr = qr_proxy%nedges\n"
- " weights_xyz_qr => qr_proxy%weights_xyz\n" in gen_code)
+ " qr_proxy = qr%get_quadrature_proxy()\n"
+ " np_xyz_qr = qr_proxy%np_xyz\n"
+ " nedges_qr = qr_proxy%nedges\n"
+ " weights_xyz_qr => qr_proxy%weights_xyz\n" in code)
assert (
- " ! compute basis/diff-basis arrays\n"
- " !\n"
- " call qr%compute_function(basis, f1_proxy%vspace, dim_w1, "
+ " ! compute basis/diff-basis arrays\n"
+ " call qr%compute_function(basis, f1_proxy%vspace, dim_w1, "
"ndf_w1, basis_w1_qr)\n"
- " call qr%compute_function(diff_basis, f2_proxy%vspace, "
+ " call qr%compute_function(diff_basis, f2_proxy%vspace, "
"diff_dim_w2, ndf_w2, diff_basis_w2_qr)\n"
- " call qr%compute_function(basis, m2_proxy%vspace, dim_w3, "
+ " call qr%compute_function(basis, m2_proxy%vspace, dim_w3, "
"ndf_w3, basis_w3_qr)\n"
- " call qr%compute_function(diff_basis, m2_proxy%vspace, "
- "diff_dim_w3, ndf_w3, diff_basis_w3_qr)\n" in gen_code)
+ " call qr%compute_function(diff_basis, m2_proxy%vspace, "
+ "diff_dim_w3, ndf_w3, diff_basis_w3_qr)\n" in code)
assert ("call testkern_qr_edges_code(nlayers_f1, f1_data, "
"f2_data, m1_data, a, m2_data, istp, "
"ndf_w1, undf_w1, map_w1(:,cell), basis_w1_qr, ndf_w2, undf_w2, "
"map_w2(:,cell), diff_basis_w2_qr, ndf_w3, undf_w3, "
"map_w3(:,cell), basis_w3_qr, diff_basis_w3_qr, nedges_qr, "
- "np_xyz_qr, weights_xyz_qr)" in gen_code)
+ "np_xyz_qr, weights_xyz_qr)" in code)
def test_face_qr(tmpdir, dist_mem):
@@ -282,176 +287,190 @@ def test_face_qr(tmpdir, dist_mem):
_, invoke_info = parse(os.path.join(BASE_PATH, "1.1.6_face_qr.f90"),
api=API)
psy = PSyFactory(API, distributed_memory=dist_mem).create(invoke_info)
- assert LFRicBuild(tmpdir).code_compiles(psy)
generated_code = str(psy.gen)
module_declns = (
- " USE constants_mod, ONLY: r_def, i_def\n"
- " USE field_mod, ONLY: field_type, field_proxy_type\n")
+ " use constants_mod\n"
+ " use field_mod, only : field_proxy_type, field_type\n")
assert module_declns in generated_code
- output_decls = (
- " USE testkern_qr_faces_mod, ONLY: testkern_qr_faces_code\n"
- " USE quadrature_face_mod, ONLY: quadrature_face_type, "
- "quadrature_face_proxy_type\n"
- " USE function_space_mod, ONLY: BASIS, DIFF_BASIS\n")
+ output_decls = ""
if dist_mem:
- output_decls += " USE mesh_mod, ONLY: mesh_type\n"
+ output_decls += " use mesh_mod, only : mesh_type\n"
output_decls += (
- " TYPE(field_type), intent(in) :: f1, f2, m1, m2\n"
- " TYPE(quadrature_face_type), intent(in) :: qr\n"
- " INTEGER(KIND=i_def) cell\n"
- " INTEGER(KIND=i_def) loop0_start, loop0_stop\n"
- " REAL(KIND=r_def), allocatable :: basis_w1_qr(:,:,:,:), "
- "diff_basis_w2_qr(:,:,:,:), basis_w3_qr(:,:,:,:), "
- "diff_basis_w3_qr(:,:,:,:)\n"
- " INTEGER(KIND=i_def) dim_w1, diff_dim_w2, dim_w3, diff_dim_w3\n"
- " REAL(KIND=r_def), pointer :: weights_xyz_qr(:,:) => null()\n"
- " INTEGER(KIND=i_def) np_xyz_qr, nfaces_qr\n"
- " INTEGER(KIND=i_def) nlayers_f1\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: m1_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f2_data => null()\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: f1_data => null()\n"
- " TYPE(field_proxy_type) f1_proxy, f2_proxy, m1_proxy, m2_proxy\n"
- " TYPE(quadrature_face_proxy_type) qr_proxy\n"
- " INTEGER(KIND=i_def), pointer :: map_w1(:,:) => null(), "
- "map_w2(:,:) => null(), map_w3(:,:) => null()\n"
- " INTEGER(KIND=i_def) ndf_w1, undf_w1, ndf_w2, undf_w2, ndf_w3, "
- "undf_w3\n")
+ " use function_space_mod, only : BASIS, DIFF_BASIS\n"
+ " use quadrature_face_mod, only : quadrature_face_proxy_type, "
+ "quadrature_face_type\n"
+ " use testkern_qr_faces_mod, only : testkern_qr_faces_code\n")
assert output_decls in generated_code
+ assert """\
+ type(field_type), intent(in) :: f1
+ type(field_type), intent(in) :: f2
+ type(field_type), intent(in) :: m1
+ type(field_type), intent(in) :: m2
+ type(quadrature_face_type), intent(in) :: qr
+ integer(kind=i_def) :: cell
+ integer(kind=i_def) :: loop0_start
+ integer(kind=i_def) :: loop0_stop
+""" in generated_code
+
+ if dist_mem:
+ assert """\
+
+ type(mesh_type), pointer :: mesh => null()
+ integer(kind=i_def) :: max_halo_depth_mesh""" in generated_code
+
+ assert """\
+ real(kind=r_def), pointer, dimension(:) :: f1_data => null()
+ real(kind=r_def), pointer, dimension(:) :: f2_data => null()
+ real(kind=r_def), pointer, dimension(:) :: m1_data => null()
+ real(kind=r_def), pointer, dimension(:) :: m2_data => null()
+ integer(kind=i_def) :: nlayers_f1
+ integer(kind=i_def) :: ndf_w1
+ integer(kind=i_def) :: undf_w1
+ integer(kind=i_def) :: ndf_w2
+ integer(kind=i_def) :: undf_w2
+ integer(kind=i_def) :: ndf_w3
+ integer(kind=i_def) :: undf_w3
+ integer(kind=i_def), pointer :: map_w1(:,:) => null()
+ integer(kind=i_def), pointer :: map_w2(:,:) => null()
+ integer(kind=i_def), pointer :: map_w3(:,:) => null()
+ type(field_proxy_type) :: f1_proxy
+ type(field_proxy_type) :: f2_proxy
+ type(field_proxy_type) :: m1_proxy
+ type(field_proxy_type) :: m2_proxy
+ integer(kind=i_def) :: np_xyz_qr
+ integer(kind=i_def) :: nfaces_qr
+ real(kind=r_def), pointer, dimension(:,:) :: weights_xyz_qr => null()
+
+ type(quadrature_face_proxy_type) :: qr_proxy
+ integer(kind=i_def) :: dim_w1
+ integer(kind=i_def) :: diff_dim_w2
+ integer(kind=i_def) :: dim_w3
+ integer(kind=i_def) :: diff_dim_w3
+ real(kind=r_def), allocatable :: basis_w1_qr(:,:,:,:)
+ real(kind=r_def), allocatable :: diff_basis_w2_qr(:,:,:,:)
+ real(kind=r_def), allocatable :: basis_w3_qr(:,:,:,:)
+ real(kind=r_def), allocatable :: diff_basis_w3_qr(:,:,:,:)
+""" in generated_code
init_output = (
- " !\n"
- " ! Initialise field and/or operator proxies\n"
- " !\n"
- " f1_proxy = f1%get_proxy()\n"
- " f1_data => f1_proxy%data\n"
- " f2_proxy = f2%get_proxy()\n"
- " f2_data => f2_proxy%data\n"
- " m1_proxy = m1%get_proxy()\n"
- " m1_data => m1_proxy%data\n"
- " m2_proxy = m2%get_proxy()\n"
- " m2_data => m2_proxy%data\n"
- " !\n"
- " ! Initialise number of layers\n"
- " !\n"
- " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
- " !\n")
+ "\n"
+ " ! Initialise field and/or operator proxies\n"
+ " f1_proxy = f1%get_proxy()\n"
+ " f1_data => f1_proxy%data\n"
+ " f2_proxy = f2%get_proxy()\n"
+ " f2_data => f2_proxy%data\n"
+ " m1_proxy = m1%get_proxy()\n"
+ " m1_data => m1_proxy%data\n"
+ " m2_proxy = m2%get_proxy()\n"
+ " m2_data => m2_proxy%data\n"
+ "\n"
+ " ! Initialise number of layers\n"
+ " nlayers_f1 = f1_proxy%vspace%get_nlayers()\n"
+ "\n")
if dist_mem:
- init_output += (" ! Create a mesh object\n"
- " !\n"
- " mesh => f1_proxy%vspace%get_mesh()\n"
- " max_halo_depth_mesh = mesh%get_halo_depth()\n"
- " !\n")
+ init_output += (" ! Create a mesh object\n"
+ " mesh => f1_proxy%vspace%get_mesh()\n"
+ " max_halo_depth_mesh = mesh%get_halo_depth()\n"
+ "\n")
init_output += (
- " ! Look-up dofmaps for each function space\n"
- " !\n"
- " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
- " map_w2 => f2_proxy%vspace%get_whole_dofmap()\n"
- " map_w3 => m2_proxy%vspace%get_whole_dofmap()\n"
- " !\n"
- " ! Initialise number of DoFs for w1\n"
- " !\n"
- " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
- " undf_w1 = f1_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w2\n"
- " !\n"
- " ndf_w2 = f2_proxy%vspace%get_ndf()\n"
- " undf_w2 = f2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Initialise number of DoFs for w3\n"
- " !\n"
- " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
- " undf_w3 = m2_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Look-up quadrature variables\n"
- " !\n"
- " qr_proxy = qr%get_quadrature_proxy()\n"
- " np_xyz_qr = qr_proxy%np_xyz\n"
- " nfaces_qr = qr_proxy%nfaces\n"
- " weights_xyz_qr => qr_proxy%weights_xyz\n")
+ " ! Look-up dofmaps for each function space\n"
+ " map_w1 => f1_proxy%vspace%get_whole_dofmap()\n"
+ " map_w2 => f2_proxy%vspace%get_whole_dofmap()\n"
+ " map_w3 => m2_proxy%vspace%get_whole_dofmap()\n"
+ "\n"
+ " ! Initialise number of DoFs for w1\n"
+ " ndf_w1 = f1_proxy%vspace%get_ndf()\n"
+ " undf_w1 = f1_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w2\n"
+ " ndf_w2 = f2_proxy%vspace%get_ndf()\n"
+ " undf_w2 = f2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Initialise number of DoFs for w3\n"
+ " ndf_w3 = m2_proxy%vspace%get_ndf()\n"
+ " undf_w3 = m2_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Look-up quadrature variables\n"
+ " qr_proxy = qr%get_quadrature_proxy()\n"
+ " np_xyz_qr = qr_proxy%np_xyz\n"
+ " nfaces_qr = qr_proxy%nfaces\n"
+ " weights_xyz_qr => qr_proxy%weights_xyz\n")
assert init_output in generated_code
init_output2 = (
- " !\n"
- " ! Allocate basis/diff-basis arrays\n"
- " !\n"
- " dim_w1 = f1_proxy%vspace%get_dim_space()\n"
- " diff_dim_w2 = f2_proxy%vspace%get_dim_space_diff()\n"
- " dim_w3 = m2_proxy%vspace%get_dim_space()\n"
- " diff_dim_w3 = m2_proxy%vspace%get_dim_space_diff()\n"
- " ALLOCATE (basis_w1_qr(dim_w1, ndf_w1, np_xyz_qr, nfaces_qr))\n"
- " ALLOCATE (diff_basis_w2_qr(diff_dim_w2, ndf_w2, np_xyz_qr, "
+ "\n"
+ " ! Allocate basis/diff-basis arrays\n"
+ " dim_w1 = f1_proxy%vspace%get_dim_space()\n"
+ " diff_dim_w2 = f2_proxy%vspace%get_dim_space_diff()\n"
+ " dim_w3 = m2_proxy%vspace%get_dim_space()\n"
+ " diff_dim_w3 = m2_proxy%vspace%get_dim_space_diff()\n"
+ " ALLOCATE(basis_w1_qr(dim_w1,ndf_w1,np_xyz_qr,nfaces_qr))\n"
+ " ALLOCATE(diff_basis_w2_qr(diff_dim_w2,ndf_w2,np_xyz_qr,"
"nfaces_qr))\n"
- " ALLOCATE (basis_w3_qr(dim_w3, ndf_w3, np_xyz_qr, nfaces_qr))\n"
- " ALLOCATE (diff_basis_w3_qr(diff_dim_w3, ndf_w3, np_xyz_qr, "
+ " ALLOCATE(basis_w3_qr(dim_w3,ndf_w3,np_xyz_qr,nfaces_qr))\n"
+ " ALLOCATE(diff_basis_w3_qr(diff_dim_w3,ndf_w3,np_xyz_qr,"
"nfaces_qr))\n"
- " !\n"
- " ! Compute basis/diff-basis arrays\n"
- " !\n"
- " CALL qr%compute_function(BASIS, f1_proxy%vspace, dim_w1, "
+ "\n"
+ " ! Compute basis/diff-basis arrays\n"
+ " call qr%compute_function(BASIS, f1_proxy%vspace, dim_w1, "
"ndf_w1, basis_w1_qr)\n"
- " CALL qr%compute_function(DIFF_BASIS, f2_proxy%vspace, "
+ " call qr%compute_function(DIFF_BASIS, f2_proxy%vspace, "
"diff_dim_w2, ndf_w2, diff_basis_w2_qr)\n"
- " CALL qr%compute_function(BASIS, m2_proxy%vspace, dim_w3, "
+ " call qr%compute_function(BASIS, m2_proxy%vspace, dim_w3, "
"ndf_w3, basis_w3_qr)\n"
- " CALL qr%compute_function(DIFF_BASIS, m2_proxy%vspace, "
+ " call qr%compute_function(DIFF_BASIS, m2_proxy%vspace, "
"diff_dim_w3, ndf_w3, diff_basis_w3_qr)\n"
- " !\n"
- " ! Set-up all of the loop bounds\n"
- " !\n"
- " loop0_start = 1\n")
+ "\n"
+ " ! Set-up all of the loop bounds\n"
+ " loop0_start = 1\n")
if dist_mem:
init_output2 += (
- " loop0_stop = mesh%get_last_halo_cell(1)\n"
- " !\n"
- " ! Call kernels and communication routines\n"
- " !\n"
- " IF (f1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (m2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL m2_proxy%halo_exchange(depth=1)\n"
- " END IF\n")
+ " loop0_stop = mesh%get_last_halo_cell(1)\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " if (f1_proxy%is_dirty(depth=1)) then\n"
+ " call f1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f2_proxy%is_dirty(depth=1)) then\n"
+ " call f2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m1_proxy%is_dirty(depth=1)) then\n"
+ " call m1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (m2_proxy%is_dirty(depth=1)) then\n"
+ " call m2_proxy%halo_exchange(depth=1)\n"
+ " end if\n")
else:
init_output2 += (
- " loop0_stop = f1_proxy%vspace%get_ncell()\n"
- " !\n"
- " ! Call our kernels\n")
+ " loop0_stop = f1_proxy%vspace%get_ncell()\n"
+ "\n"
+ " ! Call kernels\n")
assert init_output2 in generated_code
compute_output = (
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_qr_faces_code(nlayers_f1, f1_data, f2_data, "
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_qr_faces_code(nlayers_f1, f1_data, f2_data, "
"m1_data, m2_data, ndf_w1, undf_w1, "
"map_w1(:,cell), basis_w1_qr, ndf_w2, undf_w2, map_w2(:,cell), "
"diff_basis_w2_qr, ndf_w3, undf_w3, map_w3(:,cell), basis_w3_qr, "
"diff_basis_w3_qr, nfaces_qr, np_xyz_qr, weights_xyz_qr)\n"
- " END DO\n")
+ " enddo\n")
if dist_mem:
compute_output += (
- " !\n"
- " ! Set halos dirty/clean for fields modified in the above "
- "loop\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " !\n")
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the above "
+ "loop(s)\n"
+ " call f1_proxy%set_dirty()\n")
compute_output += (
- " !\n"
- " ! Deallocate basis arrays\n"
- " !\n"
- " DEALLOCATE (basis_w1_qr, basis_w3_qr, diff_basis_w2_qr, "
+ "\n"
+ " ! Deallocate basis arrays\n"
+ " DEALLOCATE(basis_w1_qr, basis_w3_qr, diff_basis_w2_qr, "
"diff_basis_w3_qr)\n"
- " !\n"
- " END SUBROUTINE invoke_0_testkern_qr_faces_type"
+ "\n"
+ " end subroutine invoke_0_testkern_qr_faces_type"
)
assert compute_output in generated_code
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_face_and_edge_qr(dist_mem, tmpdir):
@@ -461,47 +480,64 @@ def test_face_and_edge_qr(dist_mem, tmpdir):
"1.1.7_face_and_edge_qr.f90"),
api=API)
psy = PSyFactory(API, distributed_memory=dist_mem).create(invoke_info)
- assert LFRicBuild(tmpdir).code_compiles(psy)
- gen_code = str(psy.gen)
+ code = str(psy.gen)
+
# Check that the qr-related variables are all declared
- assert (" TYPE(quadrature_face_type), intent(in) :: qr_face\n"
- " TYPE(quadrature_edge_type), intent(in) :: qr_edge\n"
- in gen_code)
- assert ("REAL(KIND=r_def), allocatable :: basis_w1_qr_face(:,:,:,:), "
- "basis_w1_qr_edge(:,:,:,:), diff_basis_w2_qr_face(:,:,:,:), "
- "diff_basis_w2_qr_edge(:,:,:,:), basis_w3_qr_face(:,:,:,:), "
- "diff_basis_w3_qr_face(:,:,:,:), basis_w3_qr_edge(:,:,:,:), "
- "diff_basis_w3_qr_edge(:,:,:,:)" in gen_code)
- assert (" REAL(KIND=r_def), pointer :: weights_xyz_qr_edge(:,:) "
- "=> null()\n"
- " INTEGER(KIND=i_def) np_xyz_qr_edge, nedges_qr_edge\n"
- " REAL(KIND=r_def), pointer :: weights_xyz_qr_face(:,:) "
- "=> null()\n"
- " INTEGER(KIND=i_def) np_xyz_qr_face, nfaces_qr_face\n"
- in gen_code)
- assert (" TYPE(quadrature_edge_proxy_type) qr_edge_proxy\n"
- " TYPE(quadrature_face_proxy_type) qr_face_proxy\n"
- in gen_code)
+ assert (" type(quadrature_face_type), intent(in) :: qr_face\n"
+ " type(quadrature_edge_type), intent(in) :: qr_edge\n"
+ in code)
+ assert """
+ real(kind=r_def), allocatable :: basis_w1_qr_face(:,:,:,:)
+ real(kind=r_def), allocatable :: basis_w1_qr_edge(:,:,:,:)
+ real(kind=r_def), allocatable :: diff_basis_w2_qr_face(:,:,:,:)
+ real(kind=r_def), allocatable :: diff_basis_w2_qr_edge(:,:,:,:)
+ real(kind=r_def), allocatable :: basis_w3_qr_face(:,:,:,:)
+ real(kind=r_def), allocatable :: diff_basis_w3_qr_face(:,:,:,:)
+ real(kind=r_def), allocatable :: basis_w3_qr_edge(:,:,:,:)
+ real(kind=r_def), allocatable :: diff_basis_w3_qr_edge(:,:,:,:)
+""" in code
+ assert """
+ integer(kind=i_def) :: np_xyz_qr_face
+ integer(kind=i_def) :: nfaces_qr_face
+ real(kind=r_def), pointer, dimension(:,:) :: weights_xyz_qr_face => null()
+
+ type(quadrature_face_proxy_type) :: qr_face_proxy
+ integer(kind=i_def) :: np_xyz_qr_edge
+ integer(kind=i_def) :: nedges_qr_edge
+ real(kind=r_def), pointer, dimension(:,:) :: weights_xyz_qr_edge => null()
+
+ type(quadrature_edge_proxy_type) :: qr_edge_proxy
+ """ in code
# Allocation and computation of (some of) the basis functions
- assert (" ALLOCATE (basis_w3_qr_face(dim_w3, ndf_w3, np_xyz_qr_face,"
- " nfaces_qr_face))\n"
- " ALLOCATE (diff_basis_w3_qr_face(diff_dim_w3, ndf_w3, "
- "np_xyz_qr_face, nfaces_qr_face))\n"
- " ALLOCATE (basis_w3_qr_edge(dim_w3, ndf_w3, np_xyz_qr_edge, "
- "nedges_qr_edge))\n"
- " ALLOCATE (diff_basis_w3_qr_edge(diff_dim_w3, ndf_w3, "
- "np_xyz_qr_edge, nedges_qr_edge))\n" in gen_code)
- assert (" CALL qr_face%compute_function(BASIS, m2_proxy%vspace, "
+ assert """
+ ! Allocate basis/diff-basis arrays
+ dim_w1 = f1_proxy%vspace%get_dim_space()
+ diff_dim_w2 = f2_proxy%vspace%get_dim_space_diff()
+ dim_w3 = m2_proxy%vspace%get_dim_space()
+ diff_dim_w3 = m2_proxy%vspace%get_dim_space_diff()
+ ALLOCATE(basis_w1_qr_face(dim_w1,ndf_w1,np_xyz_qr_face,nfaces_qr_face))
+ ALLOCATE(basis_w1_qr_edge(dim_w1,ndf_w1,np_xyz_qr_edge,nedges_qr_edge))
+ ALLOCATE(diff_basis_w2_qr_face(diff_dim_w2,ndf_w2,np_xyz_qr_face,\
+nfaces_qr_face))
+ ALLOCATE(diff_basis_w2_qr_edge(diff_dim_w2,ndf_w2,np_xyz_qr_edge,\
+nedges_qr_edge))
+ ALLOCATE(basis_w3_qr_face(dim_w3,ndf_w3,np_xyz_qr_face,nfaces_qr_face))
+ ALLOCATE(diff_basis_w3_qr_face(diff_dim_w3,ndf_w3,np_xyz_qr_face,\
+nfaces_qr_face))
+ ALLOCATE(basis_w3_qr_edge(dim_w3,ndf_w3,np_xyz_qr_edge,nedges_qr_edge))
+ ALLOCATE(diff_basis_w3_qr_edge(diff_dim_w3,ndf_w3,np_xyz_qr_edge,\
+nedges_qr_edge))""" in code
+ assert (" call qr_face%compute_function(BASIS, m2_proxy%vspace, "
"dim_w3, ndf_w3, basis_w3_qr_face)\n"
- " CALL qr_face%compute_function(DIFF_BASIS, m2_proxy%vspace, "
+ " call qr_face%compute_function(DIFF_BASIS, m2_proxy%vspace, "
"diff_dim_w3, ndf_w3, diff_basis_w3_qr_face)\n"
- " CALL qr_edge%compute_function(BASIS, m2_proxy%vspace, "
+ " call qr_edge%compute_function(BASIS, m2_proxy%vspace, "
"dim_w3, ndf_w3, basis_w3_qr_edge)\n"
- " CALL qr_edge%compute_function(DIFF_BASIS, m2_proxy%vspace, "
- "diff_dim_w3, ndf_w3, diff_basis_w3_qr_edge)\n" in gen_code)
+ " call qr_edge%compute_function(DIFF_BASIS, m2_proxy%vspace, "
+ "diff_dim_w3, ndf_w3, diff_basis_w3_qr_edge)\n" in code)
# Check that the kernel call itself is correct
assert (
- "CALL testkern_2qr_code(nlayers_f1, f1_data, f2_data, "
+ "call testkern_2qr_code(nlayers_f1, f1_data, f2_data, "
"m1_data, m2_data, "
"ndf_w1, undf_w1, map_w1(:,cell), basis_w1_qr_face, basis_w1_qr_edge, "
"ndf_w2, undf_w2, map_w2(:,cell), diff_basis_w2_qr_face, "
@@ -509,7 +545,8 @@ def test_face_and_edge_qr(dist_mem, tmpdir):
"ndf_w3, undf_w3, map_w3(:,cell), basis_w3_qr_face, basis_w3_qr_edge, "
"diff_basis_w3_qr_face, diff_basis_w3_qr_edge, "
"nfaces_qr_face, np_xyz_qr_face, weights_xyz_qr_face, "
- "nedges_qr_edge, np_xyz_qr_edge, weights_xyz_qr_edge)" in gen_code)
+ "nedges_qr_edge, np_xyz_qr_edge, weights_xyz_qr_edge)" in code)
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_field_qr_deref(tmpdir):
@@ -527,9 +564,9 @@ def test_field_qr_deref(tmpdir):
gen = str(psy.gen)
assert (
- " SUBROUTINE invoke_0_testkern_qr_type(f1, f2, m1, a, m2, istp,"
+ " subroutine invoke_0_testkern_qr_type(f1, f2, m1, a, m2, istp,"
" unit_cube_qr_xyoz)\n" in gen)
- assert ("TYPE(quadrature_xyoz_type), intent(in) :: unit_cube_qr_xyoz"
+ assert ("type(quadrature_xyoz_type), intent(in) :: unit_cube_qr_xyoz"
in gen)
@@ -630,17 +667,21 @@ def test_dynbasisfns_initialise(monkeypatch):
api=API)
psy = PSyFactory(API, distributed_memory=False).create(invoke_info)
dinf = DynBasisFunctions(psy.invokes.invoke_list[0])
- mod = ModuleGen(name="testmodule")
+    # We need some pre-declared symbols in order to call initialise() directly
+ for name in ["quadrature_xyoz_proxy_type", "qr_proxy", "f1_proxy",
+ "f2_proxy", "m2_proxy"]:
+ psy.container.children[0].symbol_table.add(
+ DataSymbol(name, UnresolvedType()))
# Break the shape of the first basis function
dinf._basis_fns[0]["shape"] = "not-a-shape"
with pytest.raises(InternalError) as err:
- dinf.initialise(mod)
+ dinf.initialise(0)
assert ("Unrecognised evaluator shape: 'not-a-shape'. Should be "
"one of " in str(err.value))
# Break the internal list of basis functions
monkeypatch.setattr(dinf, "_basis_fns", [{'type': 'not-a-type'}])
with pytest.raises(InternalError) as err:
- dinf.initialise(mod)
+ dinf.initialise(0)
assert ("Unrecognised type of basis function: 'not-a-type'. Should be "
"either 'basis' or 'diff-basis'" in str(err.value))
@@ -654,18 +695,17 @@ def test_dynbasisfns_compute(monkeypatch):
api=API)
psy = PSyFactory(API, distributed_memory=False).create(invoke_info)
dinf = DynBasisFunctions(psy.invokes.invoke_list[0])
- mod = ModuleGen(name="testmodule")
# First supply an invalid shape for one of the basis functions
dinf._basis_fns[0]["shape"] = "not-a-shape"
with pytest.raises(InternalError) as err:
- dinf._compute_basis_fns(mod)
+ dinf._compute_basis_fns(0)
assert ("Unrecognised shape 'not-a-shape' specified for basis function. "
"Should be one of: ['gh_quadrature_xyoz', "
in str(err.value))
# Now supply an invalid type for one of the basis functions
monkeypatch.setattr(dinf, "_basis_fns", [{'type': 'not-a-type'}])
with pytest.raises(InternalError) as err:
- dinf._compute_basis_fns(mod)
+ dinf._compute_basis_fns(0)
assert ("Unrecognised type of basis function: 'not-a-type'. Expected "
"one of 'basis' or 'diff-basis'" in str(err.value))
@@ -682,11 +722,10 @@ def test_dynbasisfns_dealloc(monkeypatch):
call = sched.children[0].loop_body[0]
assert isinstance(call, LFRicKern)
dinf = DynBasisFunctions(psy.invokes.invoke_list[0])
- mod = ModuleGen(name="testmodule")
# Supply an invalid type for one of the basis functions
monkeypatch.setattr(dinf, "_basis_fns", [{'type': 'not-a-type'}])
with pytest.raises(InternalError) as err:
- dinf.deallocate(mod)
+ dinf.deallocate()
assert ("Unrecognised type of basis function: 'not-a-type'. Should be "
"one of 'basis' or 'diff-basis'" in str(err.value))
@@ -768,7 +807,7 @@ def test_lfrickern_setup(monkeypatch):
'''
-def test_qr_basis_stub():
+def test_qr_basis_stub(fortran_writer):
''' Test that basis functions for quadrature are handled correctly for
kernel stubs.
@@ -777,110 +816,111 @@ def test_qr_basis_stub():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- generated_code = str(kernel.gen_stub)
- output = (
- " MODULE dummy_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE dummy_code(cell, nlayers, field_1_w0, op_2_ncell_3d, "
- "op_2, field_3_w2, op_4_ncell_3d, op_4, field_5_wtheta, "
- "op_6_ncell_3d, op_6, field_7_w2v, op_8_ncell_3d, op_8, field_9_wchi, "
- "op_10_ncell_3d, op_10, field_11_w2htrace, op_12_ncell_3d, op_12, "
- "ndf_w0, undf_w0, map_w0, basis_w0_qr_xyoz, ndf_w1, basis_w1_qr_xyoz, "
- "ndf_w2, undf_w2, map_w2, basis_w2_qr_xyoz, ndf_w3, basis_w3_qr_xyoz, "
- "ndf_wtheta, undf_wtheta, map_wtheta, basis_wtheta_qr_xyoz, ndf_w2h, "
- "basis_w2h_qr_xyoz, ndf_w2v, undf_w2v, map_w2v, basis_w2v_qr_xyoz, "
- "ndf_w2broken, basis_w2broken_qr_xyoz, ndf_wchi, undf_wchi, map_wchi, "
- "basis_wchi_qr_xyoz, ndf_w2trace, basis_w2trace_qr_xyoz, "
- "ndf_w2htrace, undf_w2htrace, map_w2htrace, basis_w2htrace_qr_xyoz, "
- "ndf_w2vtrace, basis_w2vtrace_qr_xyoz, np_xy_qr_xyoz, np_z_qr_xyoz, "
- "weights_xy_qr_xyoz, weights_z_qr_xyoz)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w0\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w0) :: map_w0\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2) :: map_w2\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2htrace\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2htrace) "
- ":: map_w2htrace\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2v\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2v) "
- ":: map_w2v\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_wchi\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_wchi) "
- ":: map_wchi\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_wtheta\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_wtheta) "
- ":: map_wtheta\n"
- " INTEGER(KIND=i_def), intent(in) :: undf_w0, ndf_w1, undf_w2, "
- "ndf_w3, undf_wtheta, ndf_w2h, undf_w2v, ndf_w2broken, undf_wchi, "
- "ndf_w2trace, undf_w2htrace, ndf_w2vtrace\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w0) "
- ":: field_1_w0\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_w2) "
- ":: field_3_w2\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_wtheta) "
- ":: field_5_wtheta\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_w2v) "
- ":: field_7_w2v\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_wchi) "
- ":: field_9_wchi\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w2htrace) "
- ":: field_11_w2htrace\n"
- " INTEGER(KIND=i_def), intent(in) :: cell\n"
- " INTEGER(KIND=i_def), intent(in) :: op_2_ncell_3d\n"
- " REAL(KIND=r_def), intent(inout), dimension(op_2_ncell_3d,"
- "ndf_w1,ndf_w1) :: op_2\n"
- " INTEGER(KIND=i_def), intent(in) :: op_4_ncell_3d\n"
- " REAL(KIND=r_def), intent(inout), dimension(op_4_ncell_3d,"
- "ndf_w3,ndf_w3) :: op_4\n"
- " INTEGER(KIND=i_def), intent(in) :: op_6_ncell_3d\n"
- " REAL(KIND=r_def), intent(inout), dimension(op_6_ncell_3d,"
- "ndf_w2h,ndf_w2h) :: op_6\n"
- " INTEGER(KIND=i_def), intent(in) :: op_8_ncell_3d\n"
- " REAL(KIND=r_def), intent(inout), dimension(op_8_ncell_3d,"
- "ndf_w2broken,ndf_w2broken) :: op_8\n"
- " INTEGER(KIND=i_def), intent(in) :: op_10_ncell_3d\n"
- " REAL(KIND=r_def), intent(inout), dimension(op_10_ncell_3d,"
- "ndf_w2trace,ndf_w2trace) :: op_10\n"
- " INTEGER(KIND=i_def), intent(in) :: op_12_ncell_3d\n"
- " REAL(KIND=r_def), intent(in), dimension(op_12_ncell_3d,"
- "ndf_w2vtrace,ndf_w2vtrace) :: op_12\n"
- " INTEGER(KIND=i_def), intent(in) :: np_xy_qr_xyoz, "
- "np_z_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w0,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: basis_w0_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w1,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: basis_w1_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w2,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: basis_w2_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w3,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: basis_w3_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_wtheta,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: basis_wtheta_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w2h,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: basis_w2h_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w2v,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: basis_w2v_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w2broken,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: basis_w2broken_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_wchi,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: basis_wchi_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w2trace,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: basis_w2trace_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w2htrace,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: basis_w2htrace_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w2vtrace,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: basis_w2vtrace_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(np_xy_qr_xyoz) "
- ":: weights_xy_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(np_z_qr_xyoz) "
- ":: weights_z_qr_xyoz\n"
- " END SUBROUTINE dummy_code\n"
- " END MODULE dummy_mod")
- assert output in generated_code
+ generated_code = fortran_writer(kernel.gen_stub)
+ assert """\
+module dummy_mod
+ implicit none
+ public
+
+ contains
+ subroutine dummy_code(cell, nlayers, field_1_w0, op_2_ncell_3d, op_2, \
+field_3_w2, op_4_ncell_3d, op_4, field_5_wtheta, op_6_ncell_3d, op_6, \
+field_7_w2v, op_8_ncell_3d, op_8, field_9_wchi, op_10_ncell_3d, op_10, \
+field_11_w2htrace, op_12_ncell_3d, op_12, ndf_w0, undf_w0, map_w0, \
+basis_w0_qr_xyoz, ndf_w1, basis_w1_qr_xyoz, ndf_w2, undf_w2, map_w2, \
+basis_w2_qr_xyoz, ndf_w3, basis_w3_qr_xyoz, ndf_wtheta, undf_wtheta, \
+map_wtheta, basis_wtheta_qr_xyoz, ndf_w2h, basis_w2h_qr_xyoz, ndf_w2v, \
+undf_w2v, map_w2v, basis_w2v_qr_xyoz, ndf_w2broken, basis_w2broken_qr_xyoz, \
+ndf_wchi, undf_wchi, map_wchi, basis_wchi_qr_xyoz, ndf_w2trace, \
+basis_w2trace_qr_xyoz, ndf_w2htrace, undf_w2htrace, map_w2htrace, \
+basis_w2htrace_qr_xyoz, ndf_w2vtrace, basis_w2vtrace_qr_xyoz, np_xy_qr_xyoz, \
+np_z_qr_xyoz, weights_xy_qr_xyoz, weights_z_qr_xyoz)
+ use constants_mod
+ integer(kind=i_def), intent(in) :: nlayers
+ integer(kind=i_def), intent(in) :: ndf_w0
+ integer(kind=i_def), dimension(ndf_w0), intent(in) :: map_w0
+ integer(kind=i_def), intent(in) :: ndf_w2
+ integer(kind=i_def), dimension(ndf_w2), intent(in) :: map_w2
+ integer(kind=i_def), intent(in) :: ndf_w2htrace
+ integer(kind=i_def), dimension(ndf_w2htrace), intent(in) :: map_w2htrace
+ integer(kind=i_def), intent(in) :: ndf_w2v
+ integer(kind=i_def), dimension(ndf_w2v), intent(in) :: map_w2v
+ integer(kind=i_def), intent(in) :: ndf_wchi
+ integer(kind=i_def), dimension(ndf_wchi), intent(in) :: map_wchi
+ integer(kind=i_def), intent(in) :: ndf_wtheta
+ integer(kind=i_def), dimension(ndf_wtheta), intent(in) :: map_wtheta
+ integer(kind=i_def), intent(in) :: undf_w0
+ integer(kind=i_def), intent(in) :: ndf_w1
+ integer(kind=i_def), intent(in) :: undf_w2
+ integer(kind=i_def), intent(in) :: ndf_w3
+ integer(kind=i_def), intent(in) :: undf_wtheta
+ integer(kind=i_def), intent(in) :: ndf_w2h
+ integer(kind=i_def), intent(in) :: undf_w2v
+ integer(kind=i_def), intent(in) :: ndf_w2broken
+ integer(kind=i_def), intent(in) :: undf_wchi
+ integer(kind=i_def), intent(in) :: ndf_w2trace
+ integer(kind=i_def), intent(in) :: undf_w2htrace
+ integer(kind=i_def), intent(in) :: ndf_w2vtrace
+ real(kind=r_def), dimension(undf_w0), intent(inout) :: field_1_w0
+ real(kind=r_def), dimension(undf_w2), intent(in) :: field_3_w2
+ real(kind=r_def), dimension(undf_wtheta), intent(inout) :: field_5_wtheta
+ real(kind=r_def), dimension(undf_w2v), intent(in) :: field_7_w2v
+ real(kind=r_def), dimension(undf_wchi), intent(in) :: field_9_wchi
+ real(kind=r_def), dimension(undf_w2htrace), intent(inout) :: \
+field_11_w2htrace
+ integer(kind=i_def), intent(in) :: cell
+ integer(kind=i_def), intent(in) :: op_2_ncell_3d
+ real(kind=r_def), dimension(op_2_ncell_3d,ndf_w1,ndf_w1), intent(inout) \
+:: op_2
+ integer(kind=i_def), intent(in) :: op_4_ncell_3d
+ real(kind=r_def), dimension(op_4_ncell_3d,ndf_w3,ndf_w3), intent(inout) \
+:: op_4
+ integer(kind=i_def), intent(in) :: op_6_ncell_3d
+ real(kind=r_def), dimension(op_6_ncell_3d,ndf_w2h,ndf_w2h), intent(inout) \
+:: op_6
+ integer(kind=i_def), intent(in) :: op_8_ncell_3d
+ real(kind=r_def), dimension(op_8_ncell_3d,ndf_w2broken,ndf_w2broken), \
+intent(inout) :: op_8
+ integer(kind=i_def), intent(in) :: op_10_ncell_3d
+ real(kind=r_def), dimension(op_10_ncell_3d,ndf_w2trace,ndf_w2trace), \
+intent(inout) :: op_10
+ integer(kind=i_def), intent(in) :: op_12_ncell_3d
+ real(kind=r_def), dimension(op_12_ncell_3d,ndf_w2vtrace,ndf_w2vtrace), \
+intent(in) :: op_12
+ integer(kind=i_def), intent(in) :: np_xy_qr_xyoz
+ integer(kind=i_def), intent(in) :: np_z_qr_xyoz
+ real(kind=r_def), dimension(1,ndf_w0,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: basis_w0_qr_xyoz
+ real(kind=r_def), dimension(3,ndf_w1,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: basis_w1_qr_xyoz
+ real(kind=r_def), dimension(3,ndf_w2,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: basis_w2_qr_xyoz
+ real(kind=r_def), dimension(1,ndf_w3,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: basis_w3_qr_xyoz
+ real(kind=r_def), dimension(1,ndf_wtheta,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: basis_wtheta_qr_xyoz
+ real(kind=r_def), dimension(3,ndf_w2h,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: basis_w2h_qr_xyoz
+ real(kind=r_def), dimension(3,ndf_w2v,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: basis_w2v_qr_xyoz
+ real(kind=r_def), dimension(3,ndf_w2broken,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: basis_w2broken_qr_xyoz
+ real(kind=r_def), dimension(1,ndf_wchi,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: basis_wchi_qr_xyoz
+ real(kind=r_def), dimension(1,ndf_w2trace,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: basis_w2trace_qr_xyoz
+ real(kind=r_def), dimension(1,ndf_w2htrace,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: basis_w2htrace_qr_xyoz
+ real(kind=r_def), dimension(1,ndf_w2vtrace,np_xy_qr_xyoz,np_z_qr_xyoz), \
+intent(in) :: basis_w2vtrace_qr_xyoz
+ real(kind=r_def), dimension(np_xy_qr_xyoz), intent(in) :: \
+weights_xy_qr_xyoz
+ real(kind=r_def), dimension(np_z_qr_xyoz), intent(in) :: weights_z_qr_xyoz
+
+
+ end subroutine dummy_code
+
+end module dummy_mod\n""" == generated_code
def test_stub_basis_wrong_shape(monkeypatch):
diff --git a/src/psyclone/tests/dynamo0p3_stubgen_test.py b/src/psyclone/tests/dynamo0p3_stubgen_test.py
index 0c5482257b..5d6d9a2cd4 100644
--- a/src/psyclone/tests/dynamo0p3_stubgen_test.py
+++ b/src/psyclone/tests/dynamo0p3_stubgen_test.py
@@ -93,36 +93,40 @@ def test_stub_generate_with_anyw2():
"testkern_multi_anyw2_basis_mod.f90"),
api=TEST_API)
expected_output = (
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_any_w2,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: basis_any_w2_qr_xyoz\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_any_w2,"
- "np_xy_qr_xyoz,np_z_qr_xyoz) :: diff_basis_any_w2_qr_xyoz")
- assert expected_output in str(result)
+ " real(kind=r_def), dimension(3,ndf_any_w2,np_xy_qr_xyoz,"
+ "np_z_qr_xyoz), intent(in) :: basis_any_w2_qr_xyoz\n"
+ " real(kind=r_def), dimension(1,ndf_any_w2,np_xy_qr_xyoz,"
+ "np_z_qr_xyoz), intent(in) :: diff_basis_any_w2_qr_xyoz")
+ assert expected_output in result
SIMPLE = (
- " MODULE simple_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE simple_code(nlayers, field_1_w1, ndf_w1, undf_w1,"
+ "module simple_mod\n"
+ " implicit none\n"
+ " public\n"
+ "\n"
+ " contains\n"
+ " subroutine simple_code(nlayers, field_1_w1, ndf_w1, undf_w1,"
" map_w1)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w1\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w1) :: map_w1\n"
- " INTEGER(KIND=i_def), intent(in) :: undf_w1\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w1) :: "
+ " use constants_mod\n"
+ " integer(kind=i_def), intent(in) :: nlayers\n"
+ " integer(kind=i_def), intent(in) :: ndf_w1\n"
+ " integer(kind=i_def), dimension(ndf_w1), intent(in) :: map_w1\n"
+ " integer(kind=i_def), intent(in) :: undf_w1\n"
+ " real(kind=r_def), dimension(undf_w1), intent(inout) :: "
"field_1_w1\n"
- " END SUBROUTINE simple_code\n"
- " END MODULE simple_mod")
+ "\n"
+ "\n"
+ " end subroutine simple_code\n"
+ "\n"
+ "end module simple_mod\n")
def test_stub_generate_working():
''' Check that the stub generate produces the expected output '''
result = generate(os.path.join(BASE_PATH, "testkern_simple_mod.f90"),
api=TEST_API)
- assert SIMPLE in str(result)
+ assert SIMPLE == result
# Fields : intent
@@ -162,7 +166,7 @@ def test_load_meta_wrong_type():
f"'gh_hedge'" in str(excinfo.value))
-def test_intent():
+def test_intent(fortran_writer):
''' test that field intent is generated correctly for kernel stubs '''
ast = fpapi.parse(INTENT, ignore_comments=False)
metadata = LFRicKernMetadata(ast)
@@ -170,28 +174,33 @@ def test_intent():
kernel.load_meta(metadata)
generated_code = kernel.gen_stub
output = (
- " MODULE dummy_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE dummy_code(nlayers, field_1_w3, field_2_w1, "
+ "module dummy_mod\n"
+ " implicit none\n"
+ " public\n"
+ "\n"
+ " contains\n"
+ " subroutine dummy_code(nlayers, field_1_w3, field_2_w1, "
"field_3_w1, ndf_w3, undf_w3, map_w3, ndf_w1, undf_w1, map_w1)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w1\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w1) :: map_w1\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w3\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w3) :: map_w3\n"
- " INTEGER(KIND=i_def), intent(in) :: undf_w3, undf_w1\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w3) :: "
+ " use constants_mod\n"
+ " integer(kind=i_def), intent(in) :: nlayers\n"
+ " integer(kind=i_def), intent(in) :: ndf_w1\n"
+ " integer(kind=i_def), dimension(ndf_w1), intent(in) :: map_w1\n"
+ " integer(kind=i_def), intent(in) :: ndf_w3\n"
+ " integer(kind=i_def), dimension(ndf_w3), intent(in) :: map_w3\n"
+ " integer(kind=i_def), intent(in) :: undf_w3\n"
+ " integer(kind=i_def), intent(in) :: undf_w1\n"
+ " real(kind=r_def), dimension(undf_w3), intent(inout) :: "
"field_1_w3\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w1) :: "
+ " real(kind=r_def), dimension(undf_w1), intent(inout) :: "
"field_2_w1\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_w1) :: "
+ " real(kind=r_def), dimension(undf_w1), intent(in) :: "
"field_3_w1\n"
- " END SUBROUTINE dummy_code\n"
- " END MODULE dummy_mod")
- assert output in str(generated_code)
+ "\n"
+ "\n"
+ " end subroutine dummy_code\n"
+ "\n"
+ "end module dummy_mod\n")
+ assert output == fortran_writer(generated_code)
# Fields : spaces
@@ -223,7 +232,7 @@ def test_intent():
'''
-def test_spaces():
+def test_spaces(fortran_writer):
''' Test that field spaces are handled correctly for kernel stubs.
'''
@@ -231,12 +240,14 @@ def test_spaces():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- generated_code = str(kernel.gen_stub)
+ generated_code = fortran_writer(kernel.gen_stub)
output = (
- " MODULE dummy_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE dummy_code(nlayers, field_1_w0, field_2_w1, "
+ "module dummy_mod\n"
+ " implicit none\n"
+ " public\n"
+ "\n"
+ " contains\n"
+ " subroutine dummy_code(nlayers, field_1_w0, field_2_w1, "
"field_3_w2, field_4_w2broken, field_5_w2trace, field_6_w3, "
"field_7_wtheta, field_8_w2h, field_9_w2v, field_10_w2htrace, "
"field_11_w2vtrace, field_12_wchi, "
@@ -247,71 +258,82 @@ def test_spaces():
"ndf_w2v, undf_w2v, map_w2v, ndf_w2htrace, undf_w2htrace, "
"map_w2htrace, ndf_w2vtrace, undf_w2vtrace, map_w2vtrace, "
"ndf_wchi, undf_wchi, map_wchi)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w0\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w0) :: map_w0\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w1\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w1) :: map_w1\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2) :: map_w2\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2broken\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2broken) "
+ " use constants_mod\n"
+ " integer(kind=i_def), intent(in) :: nlayers\n"
+ " integer(kind=i_def), intent(in) :: ndf_w0\n"
+ " integer(kind=i_def), dimension(ndf_w0), intent(in) :: map_w0\n"
+ " integer(kind=i_def), intent(in) :: ndf_w1\n"
+ " integer(kind=i_def), dimension(ndf_w1), intent(in) :: map_w1\n"
+ " integer(kind=i_def), intent(in) :: ndf_w2\n"
+ " integer(kind=i_def), dimension(ndf_w2), intent(in) :: map_w2\n"
+ " integer(kind=i_def), intent(in) :: ndf_w2broken\n"
+ " integer(kind=i_def), dimension(ndf_w2broken), intent(in) "
":: map_w2broken\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2h\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2h) "
+ " integer(kind=i_def), intent(in) :: ndf_w2h\n"
+ " integer(kind=i_def), dimension(ndf_w2h), intent(in) "
":: map_w2h\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2htrace\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2htrace) "
+ " integer(kind=i_def), intent(in) :: ndf_w2htrace\n"
+ " integer(kind=i_def), dimension(ndf_w2htrace), intent(in) "
":: map_w2htrace\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2trace\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2trace) "
+ " integer(kind=i_def), intent(in) :: ndf_w2trace\n"
+ " integer(kind=i_def), dimension(ndf_w2trace), intent(in) "
":: map_w2trace\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2v\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2v) "
+ " integer(kind=i_def), intent(in) :: ndf_w2v\n"
+ " integer(kind=i_def), dimension(ndf_w2v), intent(in) "
":: map_w2v\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w2vtrace\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w2vtrace) "
+ " integer(kind=i_def), intent(in) :: ndf_w2vtrace\n"
+ " integer(kind=i_def), dimension(ndf_w2vtrace), intent(in) "
":: map_w2vtrace\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w3\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w3) :: map_w3\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_wchi\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_wchi) "
+ " integer(kind=i_def), intent(in) :: ndf_w3\n"
+ " integer(kind=i_def), dimension(ndf_w3), intent(in) :: map_w3\n"
+ " integer(kind=i_def), intent(in) :: ndf_wchi\n"
+ " integer(kind=i_def), dimension(ndf_wchi), intent(in) "
":: map_wchi\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_wtheta\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_wtheta) "
+ " integer(kind=i_def), intent(in) :: ndf_wtheta\n"
+ " integer(kind=i_def), dimension(ndf_wtheta), intent(in) "
":: map_wtheta\n"
- " INTEGER(KIND=i_def), intent(in) :: undf_w0, undf_w1, undf_w2, "
- "undf_w2broken, undf_w2trace, undf_w3, undf_wtheta, undf_w2h, "
- "undf_w2v, undf_w2htrace, undf_w2vtrace, undf_wchi\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w0) "
+ " integer(kind=i_def), intent(in) :: undf_w0\n"
+ " integer(kind=i_def), intent(in) :: undf_w1\n"
+ " integer(kind=i_def), intent(in) :: undf_w2\n"
+ " integer(kind=i_def), intent(in) :: undf_w2broken\n"
+ " integer(kind=i_def), intent(in) :: undf_w2trace\n"
+ " integer(kind=i_def), intent(in) :: undf_w3\n"
+ " integer(kind=i_def), intent(in) :: undf_wtheta\n"
+ " integer(kind=i_def), intent(in) :: undf_w2h\n"
+ " integer(kind=i_def), intent(in) :: undf_w2v\n"
+ " integer(kind=i_def), intent(in) :: undf_w2htrace\n"
+ " integer(kind=i_def), intent(in) :: undf_w2vtrace\n"
+ " integer(kind=i_def), intent(in) :: undf_wchi\n"
+ " real(kind=r_def), dimension(undf_w0), intent(inout) "
":: field_1_w0\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w1) "
+ " real(kind=r_def), dimension(undf_w1), intent(inout) "
":: field_2_w1\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w2) "
+ " real(kind=r_def), dimension(undf_w2), intent(inout) "
":: field_3_w2\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w2broken) "
+ " real(kind=r_def), dimension(undf_w2broken), intent(inout) "
":: field_4_w2broken\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w2trace) "
+ " real(kind=r_def), dimension(undf_w2trace), intent(inout) "
":: field_5_w2trace\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w3) "
+ " real(kind=r_def), dimension(undf_w3), intent(inout) "
":: field_6_w3\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_wtheta) "
+ " real(kind=r_def), dimension(undf_wtheta), intent(inout) "
":: field_7_wtheta\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w2h) "
+ " real(kind=r_def), dimension(undf_w2h), intent(inout) "
":: field_8_w2h\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w2v) "
+ " real(kind=r_def), dimension(undf_w2v), intent(inout) "
":: field_9_w2v\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w2htrace) "
+ " real(kind=r_def), dimension(undf_w2htrace), intent(inout) "
":: field_10_w2htrace\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w2vtrace) "
+ " real(kind=r_def), dimension(undf_w2vtrace), intent(inout) "
":: field_11_w2vtrace\n"
- " REAL(KIND=r_def), intent(in), dimension(undf_wchi) "
+ " real(kind=r_def), dimension(undf_wchi), intent(in) "
":: field_12_wchi\n"
- " END SUBROUTINE dummy_code\n"
- " END MODULE dummy_mod")
- assert output in generated_code
+ "\n"
+ "\n"
+ " end subroutine dummy_code\n"
+ "\n"
+ "end module dummy_mod\n")
+ assert output == generated_code
ANY_SPACES = '''
@@ -335,7 +357,7 @@ def test_spaces():
'''
-def test_any_spaces():
+def test_any_spaces(fortran_writer):
''' Test that any_space and any_discontinuous_space metadata are handled
correctly for kernel stubs.
@@ -344,39 +366,44 @@ def test_any_spaces():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- generated_code = str(kernel.gen_stub)
+ generated_code = fortran_writer(kernel.gen_stub)
output = (
- " MODULE dummy_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE dummy_code(nlayers, field_1_adspc1_field_1, "
+ "module dummy_mod\n"
+ " implicit none\n"
+ " public\n"
+ "\n"
+ " contains\n"
+ " subroutine dummy_code(nlayers, field_1_adspc1_field_1, "
"field_2_aspc7_field_2, field_3_adspc4_field_3, "
"ndf_adspc1_field_1, undf_adspc1_field_1, map_adspc1_field_1, "
"ndf_aspc7_field_2, undf_aspc7_field_2, map_aspc7_field_2, "
"ndf_adspc4_field_3, undf_adspc4_field_3, map_adspc4_field_3)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_adspc1_field_1\n"
- " INTEGER(KIND=i_def), intent(in), dimension("
- "ndf_adspc1_field_1) :: map_adspc1_field_1\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_adspc4_field_3\n"
- " INTEGER(KIND=i_def), intent(in), dimension("
- "ndf_adspc4_field_3) :: map_adspc4_field_3\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_aspc7_field_2\n"
- " INTEGER(KIND=i_def), intent(in), "
- "dimension(ndf_aspc7_field_2) :: map_aspc7_field_2\n"
- " INTEGER(KIND=i_def), intent(in) :: undf_adspc1_field_1, "
- "undf_aspc7_field_2, undf_adspc4_field_3\n"
- " REAL(KIND=r_def), intent(in), dimension"
- "(undf_adspc1_field_1) :: field_1_adspc1_field_1\n"
- " REAL(KIND=r_def), intent(inout), dimension"
- "(undf_aspc7_field_2) :: field_2_aspc7_field_2\n"
- " REAL(KIND=r_def), intent(inout), dimension"
- "(undf_adspc4_field_3) :: field_3_adspc4_field_3\n"
- " END SUBROUTINE dummy_code\n"
- " END MODULE dummy_mod")
- assert output in generated_code
+ " use constants_mod\n"
+ " integer(kind=i_def), intent(in) :: nlayers\n"
+ " integer(kind=i_def), intent(in) :: ndf_adspc1_field_1\n"
+ " integer(kind=i_def), dimension("
+ "ndf_adspc1_field_1), intent(in) :: map_adspc1_field_1\n"
+ " integer(kind=i_def), intent(in) :: ndf_adspc4_field_3\n"
+ " integer(kind=i_def), dimension("
+ "ndf_adspc4_field_3), intent(in) :: map_adspc4_field_3\n"
+ " integer(kind=i_def), intent(in) :: ndf_aspc7_field_2\n"
+ " integer(kind=i_def), "
+ "dimension(ndf_aspc7_field_2), intent(in) :: map_aspc7_field_2\n"
+ " integer(kind=i_def), intent(in) :: undf_adspc1_field_1\n"
+ " integer(kind=i_def), intent(in) :: undf_aspc7_field_2\n"
+ " integer(kind=i_def), intent(in) :: undf_adspc4_field_3\n"
+ " real(kind=r_def), dimension"
+ "(undf_adspc1_field_1), intent(in) :: field_1_adspc1_field_1\n"
+ " real(kind=r_def), dimension"
+ "(undf_aspc7_field_2), intent(inout) :: field_2_aspc7_field_2\n"
+ " real(kind=r_def), dimension"
+ "(undf_adspc4_field_3), intent(inout) :: field_3_adspc4_field_3\n"
+ "\n"
+ "\n"
+ " end subroutine dummy_code\n"
+ "\n"
+ "end module dummy_mod\n")
+ assert output == generated_code
# Fields : vectors
@@ -397,34 +424,38 @@ def test_any_spaces():
'''
-def test_vectors():
+def test_vectors(fortran_writer):
''' test that field vectors are handled correctly for kernel stubs '''
ast = fpapi.parse(VECTORS, ignore_comments=False)
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- generated_code = kernel.gen_stub
+ generated_code = fortran_writer(kernel.gen_stub)
output = (
- " MODULE dummy_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE dummy_code(nlayers, field_1_w0_v1, "
+ "module dummy_mod\n"
+ " implicit none\n"
+ " public\n"
+ "\n"
+ " contains\n"
+ " subroutine dummy_code(nlayers, field_1_w0_v1, "
"field_1_w0_v2, field_1_w0_v3, ndf_w0, undf_w0, map_w0)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w0\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w0) :: map_w0\n"
- " INTEGER(KIND=i_def), intent(in) :: undf_w0\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w0) :: "
+ " use constants_mod\n"
+ " integer(kind=i_def), intent(in) :: nlayers\n"
+ " integer(kind=i_def), intent(in) :: ndf_w0\n"
+ " integer(kind=i_def), dimension(ndf_w0), intent(in) :: map_w0\n"
+ " integer(kind=i_def), intent(in) :: undf_w0\n"
+ " real(kind=r_def), dimension(undf_w0), intent(inout) :: "
"field_1_w0_v1\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w0) :: "
+ " real(kind=r_def), dimension(undf_w0), intent(inout) :: "
"field_1_w0_v2\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w0) :: "
+ " real(kind=r_def), dimension(undf_w0), intent(inout) :: "
"field_1_w0_v3\n"
- " END SUBROUTINE dummy_code\n"
- " END MODULE dummy_mod")
- assert output in str(generated_code)
+ "\n"
+ "\n"
+ " end subroutine dummy_code\n"
+ "\n"
+ "end module dummy_mod\n")
+ assert output in generated_code
def test_arg_descriptor_vec_str():
@@ -444,7 +475,7 @@ def test_arg_descriptor_vec_str():
assert expected_output in result
-def test_enforce_bc_kernel_stub_gen():
+def test_enforce_bc_kernel_stub_gen(fortran_writer):
''' Test that the enforce_bc_kernel boundary layer argument modification
is handled correctly for kernel stubs.
@@ -454,31 +485,36 @@ def test_enforce_bc_kernel_stub_gen():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- generated_code = kernel.gen_stub
+ generated_code = fortran_writer(kernel.gen_stub)
output = (
- " MODULE enforce_bc_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE enforce_bc_code(nlayers, field_1_aspc1_field_1, "
+ "module enforce_bc_mod\n"
+ " implicit none\n"
+ " public\n"
+ "\n"
+ " contains\n"
+ " subroutine enforce_bc_code(nlayers, field_1_aspc1_field_1, "
"ndf_aspc1_field_1, undf_aspc1_field_1, map_aspc1_field_1, "
"boundary_dofs_field_1)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_aspc1_field_1\n"
- " INTEGER(KIND=i_def), intent(in), "
- "dimension(ndf_aspc1_field_1) :: map_aspc1_field_1\n"
- " INTEGER(KIND=i_def), intent(in) :: undf_aspc1_field_1\n"
- " REAL(KIND=r_def), intent(inout), "
- "dimension(undf_aspc1_field_1) :: field_1_aspc1_field_1\n"
- " INTEGER(KIND=i_def), intent(in), "
- "dimension(ndf_aspc1_field_1,2) :: boundary_dofs_field_1\n"
- " END SUBROUTINE enforce_bc_code\n"
- " END MODULE enforce_bc_mod")
- assert output in str(generated_code)
-
-
-def test_enforce_op_bc_kernel_stub_gen():
+ " use constants_mod\n"
+ " integer(kind=i_def), intent(in) :: nlayers\n"
+ " integer(kind=i_def), intent(in) :: ndf_aspc1_field_1\n"
+ " integer(kind=i_def), "
+ "dimension(ndf_aspc1_field_1), intent(in) :: map_aspc1_field_1\n"
+ " integer(kind=i_def), intent(in) :: undf_aspc1_field_1\n"
+ " real(kind=r_def), "
+ "dimension(undf_aspc1_field_1), intent(inout) :: field_1_aspc1_field_1"
+ "\n"
+ " integer(kind=i_def), "
+ "dimension(ndf_aspc1_field_1,2), intent(in) :: boundary_dofs_field_1\n"
+ "\n"
+ "\n"
+ " end subroutine enforce_bc_code\n"
+ "\n"
+ "end module enforce_bc_mod\n")
+ assert output == generated_code
+
+
+def test_enforce_op_bc_kernel_stub_gen(fortran_writer):
''' Test that the enforce_operator_bc_kernel boundary dofs argument
modification is handled correctly for kernel stubs.
@@ -489,31 +525,35 @@ def test_enforce_op_bc_kernel_stub_gen():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- generated_code = str(kernel.gen_stub)
+ generated_code = fortran_writer(kernel.gen_stub)
output = (
- " MODULE enforce_operator_bc_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE enforce_operator_bc_code(cell, nlayers, "
+ "module enforce_operator_bc_mod\n"
+ " implicit none\n"
+ " public\n"
+ "\n"
+ " contains\n"
+ " subroutine enforce_operator_bc_code(cell, nlayers, "
"op_1_ncell_3d, op_1, ndf_aspc1_op_1, ndf_aspc2_op_1, "
"boundary_dofs_op_1)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_aspc1_op_1, "
- "ndf_aspc2_op_1\n"
- " INTEGER(KIND=i_def), intent(in) :: cell\n"
- " INTEGER(KIND=i_def), intent(in) :: op_1_ncell_3d\n"
- " REAL(KIND=r_def), intent(inout), dimension("
- "op_1_ncell_3d,ndf_aspc1_op_1,ndf_aspc2_op_1) :: op_1\n"
- " INTEGER(KIND=i_def), intent(in), "
- "dimension(ndf_aspc1_op_1,2) :: boundary_dofs_op_1\n"
- " END SUBROUTINE enforce_operator_bc_code\n"
- " END MODULE enforce_operator_bc_mod")
- assert output in generated_code
-
-
-def test_multi_qr_stub_gen():
+ " use constants_mod\n"
+ " integer(kind=i_def), intent(in) :: nlayers\n"
+ " integer(kind=i_def), intent(in) :: ndf_aspc1_op_1\n"
+ " integer(kind=i_def), intent(in) :: ndf_aspc2_op_1\n"
+ " integer(kind=i_def), intent(in) :: cell\n"
+ " integer(kind=i_def), intent(in) :: op_1_ncell_3d\n"
+ " real(kind=r_def), dimension("
+ "op_1_ncell_3d,ndf_aspc1_op_1,ndf_aspc2_op_1), intent(inout) :: op_1\n"
+ " integer(kind=i_def), "
+ "dimension(ndf_aspc1_op_1,2), intent(in) :: boundary_dofs_op_1\n"
+ "\n"
+ "\n"
+ " end subroutine enforce_operator_bc_code\n"
+ "\n"
+ "end module enforce_operator_bc_mod\n")
+ assert output == generated_code
+
+
+def test_multi_qr_stub_gen(fortran_writer):
''' Test that the stub generator correctly handles a kernel requiring
more than one quadrature rule. '''
ast = fpapi.parse(os.path.join(BASE_PATH,
@@ -522,8 +562,8 @@ def test_multi_qr_stub_gen():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- generated_code = str(kernel.gen_stub)
- assert ("SUBROUTINE testkern_2qr_code(nlayers, field_1_w1, field_2_w2, "
+ generated_code = fortran_writer(kernel.gen_stub)
+ assert ("subroutine testkern_2qr_code(nlayers, field_1_w1, field_2_w2, "
"field_3_w2, field_4_w3, ndf_w1, undf_w1, map_w1, "
"basis_w1_qr_face, basis_w1_qr_edge, ndf_w2, undf_w2, map_w2, "
"diff_basis_w2_qr_face, diff_basis_w2_qr_edge, ndf_w3, undf_w3, "
@@ -531,33 +571,37 @@ def test_multi_qr_stub_gen():
"diff_basis_w3_qr_face, diff_basis_w3_qr_edge, nfaces_qr_face, "
"np_xyz_qr_face, weights_xyz_qr_face, nedges_qr_edge, "
"np_xyz_qr_edge, weights_xyz_qr_edge)" in generated_code)
- assert ("INTEGER(KIND=i_def), intent(in) :: np_xyz_qr_face, "
- "nfaces_qr_face, np_xyz_qr_edge, nedges_qr_edge" in generated_code)
+ assert (" integer(kind=i_def), intent(in) :: np_xyz_qr_face\n"
+ " integer(kind=i_def), intent(in) :: nfaces_qr_face\n"
+ " integer(kind=i_def), intent(in) :: np_xyz_qr_edge\n"
+ " integer(kind=i_def), intent(in) :: nedges_qr_edge\n"
+ in generated_code)
assert (
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w1,"
- "np_xyz_qr_face,nfaces_qr_face) :: basis_w1_qr_face\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w1,"
- "np_xyz_qr_edge,nedges_qr_edge) :: basis_w1_qr_edge\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w2,"
- "np_xyz_qr_face,nfaces_qr_face) :: diff_basis_w2_qr_face\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w2,"
- "np_xyz_qr_edge,nedges_qr_edge) :: diff_basis_w2_qr_edge\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w3,"
- "np_xyz_qr_face,nfaces_qr_face) :: basis_w3_qr_face\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w3,"
- "np_xyz_qr_face,nfaces_qr_face) :: diff_basis_w3_qr_face\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w3,"
- "np_xyz_qr_edge,nedges_qr_edge) :: basis_w3_qr_edge\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w3,"
- "np_xyz_qr_edge,nedges_qr_edge) :: diff_basis_w3_qr_edge"
+ " real(kind=r_def), dimension(3,ndf_w1,"
+ "np_xyz_qr_face,nfaces_qr_face), intent(in) :: basis_w1_qr_face\n"
+ " real(kind=r_def), dimension(3,ndf_w1,"
+ "np_xyz_qr_edge,nedges_qr_edge), intent(in) :: basis_w1_qr_edge\n"
+ " real(kind=r_def), dimension(1,ndf_w2,"
+ "np_xyz_qr_face,nfaces_qr_face), intent(in) :: diff_basis_w2_qr_face\n"
+ " real(kind=r_def), dimension(1,ndf_w2,"
+ "np_xyz_qr_edge,nedges_qr_edge), intent(in) :: diff_basis_w2_qr_edge\n"
+ " real(kind=r_def), dimension(1,ndf_w3,"
+ "np_xyz_qr_face,nfaces_qr_face), intent(in) :: basis_w3_qr_face\n"
+ " real(kind=r_def), dimension(3,ndf_w3,"
+ "np_xyz_qr_face,nfaces_qr_face), intent(in) :: diff_basis_w3_qr_face\n"
+ " real(kind=r_def), dimension(1,ndf_w3,"
+ "np_xyz_qr_edge,nedges_qr_edge), intent(in) :: basis_w3_qr_edge\n"
+ " real(kind=r_def), dimension(3,ndf_w3,"
+ "np_xyz_qr_edge,nedges_qr_edge), intent(in) :: diff_basis_w3_qr_edge"
in generated_code)
- assert (" REAL(KIND=r_def), intent(in), dimension(np_xyz_qr_face,"
- "nfaces_qr_face) :: weights_xyz_qr_face\n"
- " REAL(KIND=r_def), intent(in), dimension(np_xyz_qr_edge,"
- "nedges_qr_edge) :: weights_xyz_qr_edge\n" in generated_code)
+ assert (" real(kind=r_def), dimension(np_xyz_qr_face,"
+ "nfaces_qr_face), intent(in) :: weights_xyz_qr_face\n"
+ " real(kind=r_def), dimension(np_xyz_qr_edge,"
+ "nedges_qr_edge), intent(in) :: weights_xyz_qr_edge\n"
+ in generated_code)
-def test_qr_plus_eval_stub_gen():
+def test_qr_plus_eval_stub_gen(fortran_writer):
''' Test the stub generator for a kernel that requires both an evaluator
and quadrature. '''
ast = fpapi.parse(os.path.join(BASE_PATH,
@@ -566,36 +610,37 @@ def test_qr_plus_eval_stub_gen():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- gen_code = str(kernel.gen_stub)
+ code = fortran_writer(kernel.gen_stub)
assert (
- "SUBROUTINE testkern_qr_eval_code(nlayers, field_1_w1, field_2_w2,"
+ "subroutine testkern_qr_eval_code(nlayers, field_1_w1, field_2_w2,"
" field_3_w2, field_4_w3, ndf_w1, undf_w1, map_w1, basis_w1_qr_face, "
"basis_w1_on_w1, ndf_w2, undf_w2, map_w2, diff_basis_w2_qr_face, "
"diff_basis_w2_on_w1, ndf_w3, undf_w3, map_w3, basis_w3_qr_face, "
"basis_w3_on_w1, diff_basis_w3_qr_face, diff_basis_w3_on_w1, "
"nfaces_qr_face, np_xyz_qr_face, weights_xyz_qr_face)"
- in gen_code)
- assert ("INTEGER(KIND=i_def), intent(in) :: np_xyz_qr_face, nfaces_qr_face"
- in gen_code)
+ in code)
+ assert (" integer(kind=i_def), intent(in) :: np_xyz_qr_face\n"
+ " integer(kind=i_def), intent(in) :: nfaces_qr_face\n"
+ in code)
assert (
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w1,np_xyz_qr_face"
- ",nfaces_qr_face) :: basis_w1_qr_face\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w1,ndf_w1) :: "
+ " real(kind=r_def), dimension(3,ndf_w1,np_xyz_qr_face"
+ ",nfaces_qr_face), intent(in) :: basis_w1_qr_face\n"
+ " real(kind=r_def), dimension(3,ndf_w1,ndf_w1), intent(in) :: "
"basis_w1_on_w1\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w2,np_xyz_qr_face"
- ",nfaces_qr_face) :: diff_basis_w2_qr_face\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w2,ndf_w1) :: "
+ " real(kind=r_def), dimension(1,ndf_w2,np_xyz_qr_face"
+ ",nfaces_qr_face), intent(in) :: diff_basis_w2_qr_face\n"
+ " real(kind=r_def), dimension(1,ndf_w2,ndf_w1), intent(in) :: "
"diff_basis_w2_on_w1\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w3,np_xyz_qr_face"
- ",nfaces_qr_face) :: basis_w3_qr_face\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w3,np_xyz_qr_face"
- ",nfaces_qr_face) :: diff_basis_w3_qr_face\n"
- " REAL(KIND=r_def), intent(in), dimension(1,ndf_w3,ndf_w1) :: "
+ " real(kind=r_def), dimension(1,ndf_w3,np_xyz_qr_face"
+ ",nfaces_qr_face), intent(in) :: basis_w3_qr_face\n"
+ " real(kind=r_def), dimension(3,ndf_w3,np_xyz_qr_face"
+ ",nfaces_qr_face), intent(in) :: diff_basis_w3_qr_face\n"
+ " real(kind=r_def), dimension(1,ndf_w3,ndf_w1), intent(in) :: "
"basis_w3_on_w1\n"
- " REAL(KIND=r_def), intent(in), dimension(3,ndf_w3,ndf_w1) :: "
+ " real(kind=r_def), dimension(3,ndf_w3,ndf_w1), intent(in) :: "
"diff_basis_w3_on_w1\n"
- " REAL(KIND=r_def), intent(in), dimension(np_xyz_qr_face,"
- "nfaces_qr_face) :: weights_xyz_qr_face" in gen_code)
+ " real(kind=r_def), dimension(np_xyz_qr_face,"
+ "nfaces_qr_face), intent(in) :: weights_xyz_qr_face" in code)
SUB_NAME = '''
@@ -615,7 +660,7 @@ def test_qr_plus_eval_stub_gen():
'''
-def test_sub_name():
+def test_sub_name(fortran_writer):
''' test for expected behaviour when the kernel subroutine does
not conform to the convention of having "_code" at the end of its
name. In this case we append "_code" to the name and _mod to the
@@ -624,21 +669,25 @@ def test_sub_name():
metadata = LFRicKernMetadata(ast)
kernel = LFRicKern()
kernel.load_meta(metadata)
- generated_code = kernel.gen_stub
+ generated_code = fortran_writer(kernel.gen_stub)
output = (
- " MODULE dummy_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE dummy_code(nlayers, field_1_w1, "
+ "module dummy_mod\n"
+ " implicit none\n"
+ " public\n"
+ "\n"
+ " contains\n"
+ " subroutine dummy_code(nlayers, field_1_w1, "
"ndf_w1, undf_w1, map_w1)\n"
- " USE constants_mod\n"
- " IMPLICIT NONE\n"
- " INTEGER(KIND=i_def), intent(in) :: nlayers\n"
- " INTEGER(KIND=i_def), intent(in) :: ndf_w1\n"
- " INTEGER(KIND=i_def), intent(in), dimension(ndf_w1) :: map_w1\n"
- " INTEGER(KIND=i_def), intent(in) :: undf_w1\n"
- " REAL(KIND=r_def), intent(inout), dimension(undf_w1) :: "
+ " use constants_mod\n"
+ " integer(kind=i_def), intent(in) :: nlayers\n"
+ " integer(kind=i_def), intent(in) :: ndf_w1\n"
+ " integer(kind=i_def), dimension(ndf_w1), intent(in) :: map_w1\n"
+ " integer(kind=i_def), intent(in) :: undf_w1\n"
+ " real(kind=r_def), dimension(undf_w1), intent(inout) :: "
"field_1_w1\n"
- " END SUBROUTINE dummy_code\n"
- " END MODULE dummy_mod")
- assert output in str(generated_code)
+ "\n"
+ "\n"
+ " end subroutine dummy_code\n"
+ "\n"
+ "end module dummy_mod\n")
+ assert output == generated_code
diff --git a/src/psyclone/tests/dynamo0p3_test.py b/src/psyclone/tests/dynamo0p3_test.py
index 8ef6779c31..705e50b3cb 100644
--- a/src/psyclone/tests/dynamo0p3_test.py
+++ b/src/psyclone/tests/dynamo0p3_test.py
@@ -56,13 +56,12 @@
DynKernelArgument, DynKernelArguments, DynProxies, HaloReadAccess,
KernCallArgList)
from psyclone.errors import FieldNotFoundError, GenerationError, InternalError
-from psyclone.f2pygen import ModuleGen
from psyclone.gen_kernel_stub import generate
from psyclone.parse.algorithm import Arg, parse
from psyclone.parse.utils import ParseError
from psyclone.psyGen import PSyFactory, InvokeSchedule, HaloExchange, BuiltIn
from psyclone.psyir.nodes import (colored, BinaryOperation, UnaryOperation,
- Reference, Routine, Container)
+ Reference, Routine, Container, Schedule)
from psyclone.psyir.symbols import (ArrayType, ScalarType, DataTypeSymbol,
UnsupportedFortranType)
from psyclone.tests.lfric_build import LFRicBuild
@@ -320,7 +319,7 @@ def test_kernel_call_invalid_iteration_space():
# set iterates_over to something unsupported
kernel._iterates_over = "vampires"
with pytest.raises(GenerationError) as excinfo:
- _ = kernel.validate_global_constraints()
+ kernel.validate_global_constraints()
assert ("The LFRic API supports calls to user-supplied kernels that "
"operate on one of ['cell_column', 'domain', 'dof', "
"'halo_cell_column', 'owned_and_halo_cell_column'], but "
@@ -337,32 +336,30 @@ def test_any_space_1(tmpdir):
_, invoke_info = parse(os.path.join(BASE_PATH, "11_any_space.f90"),
api=TEST_API)
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
- generated_code = str(psy.gen)
+ code = str(psy.gen)
assert LFRicBuild(tmpdir).code_compiles(psy)
- assert ("INTEGER(KIND=i_def), pointer :: map_aspc1_a(:,:) => null(), "
- "map_aspc2_b(:,:) => null(), map_w0(:,:) => null()\n"
- in generated_code)
- assert ("REAL(KIND=r_def), allocatable :: basis_aspc1_a_qr(:,:,:,:),"
- " basis_aspc2_b_qr(:,:,:,:)" in generated_code)
- assert ("ALLOCATE (basis_aspc1_a_qr(dim_aspc1_a, ndf_aspc1_a, "
- "np_xy_qr, np_z_qr))" in generated_code)
- assert ("ALLOCATE (basis_aspc2_b_qr(dim_aspc2_b, ndf_aspc2_b, "
- "np_xy_qr, np_z_qr))" in generated_code)
- assert ("map_aspc1_a => a_proxy%vspace%get_whole_dofmap()" in
- generated_code)
- assert ("map_aspc2_b => b_proxy%vspace%get_whole_dofmap()" in
- generated_code)
- assert ("CALL testkern_any_space_1_code(nlayers_a, a_data, rdt, "
+ assert "integer(kind=i_def), pointer :: map_aspc1_a(:,:) => null()" in code
+ assert "integer(kind=i_def), pointer :: map_aspc2_b(:,:) => null()" in code
+ assert "integer(kind=i_def), pointer :: map_w0(:,:) => null()" in code
+ assert "real(kind=r_def), allocatable :: basis_aspc1_a_qr(:,:,:,:)" in code
+ assert "real(kind=r_def), allocatable :: basis_aspc2_b_qr(:,:,:,:)" in code
+ assert ("ALLOCATE(basis_aspc1_a_qr(dim_aspc1_a,ndf_aspc1_a,"
+ "np_xy_qr,np_z_qr))" in code)
+ assert ("ALLOCATE(basis_aspc2_b_qr(dim_aspc2_b,ndf_aspc2_b,"
+ "np_xy_qr,np_z_qr))" in code)
+ assert "map_aspc1_a => a_proxy%vspace%get_whole_dofmap()" in code
+ assert "map_aspc2_b => b_proxy%vspace%get_whole_dofmap()" in code
+ assert ("call testkern_any_space_1_code(nlayers_a, a_data, rdt, "
"b_data, c_1_data, c_2_data, c_3_data, "
"ndf_aspc1_a, undf_aspc1_a, map_aspc1_a(:,cell), "
"basis_aspc1_a_qr, ndf_aspc2_b, undf_aspc2_b, "
"map_aspc2_b(:,cell), basis_aspc2_b_qr, ndf_w0, undf_w0, "
"map_w0(:,cell), diff_basis_w0_qr, np_xy_qr, np_z_qr, "
- "weights_xy_qr, weights_z_qr)" in generated_code)
- assert ("DEALLOCATE (basis_aspc1_a_qr, basis_aspc2_b_qr, diff_basis_w0_qr)"
- in generated_code)
+ "weights_xy_qr, weights_z_qr)" in code)
+ assert ("DEALLOCATE(basis_aspc1_a_qr, basis_aspc2_b_qr, diff_basis_w0_qr)"
+ in code)
def test_any_space_2(tmpdir):
@@ -378,15 +375,16 @@ def test_any_space_2(tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
- assert "INTEGER(KIND=i_def), intent(in) :: istp" in generated_code
- assert ("INTEGER(KIND=i_def), pointer :: map_aspc1_a(:,:) => null()"
+ assert "integer(kind=i_def), intent(in) :: istp" in generated_code
+ assert ("integer(kind=i_def), pointer :: map_aspc1_a(:,:) => null()"
in generated_code)
- assert "INTEGER(KIND=i_def) ndf_aspc1_a, undf_aspc1_a" in generated_code
+ assert "integer(kind=i_def) :: ndf_aspc1_a" in generated_code
+ assert "integer(kind=i_def) :: undf_aspc1_a" in generated_code
assert "ndf_aspc1_a = a_proxy%vspace%get_ndf()" in generated_code
assert "undf_aspc1_a = a_proxy%vspace%get_undf()" in generated_code
assert ("map_aspc1_a => a_proxy%vspace%get_whole_dofmap()"
in generated_code)
- assert ("CALL testkern_any_space_2_code(cell, nlayers_a, a_data, "
+ assert ("call testkern_any_space_2_code(cell, nlayers_a, a_data, "
"b_data, c_proxy%ncell_3d, c_local_stencil, istp, "
"ndf_aspc1_a, undf_aspc1_a, map_aspc1_a(:,cell))"
in generated_code)
@@ -429,10 +427,10 @@ def test_op_any_space_different_space_2(tmpdir):
assert "dim_aspc4_d = d_proxy%fs_from%get_dim_space()" in generated_code
assert "ndf_aspc5_a = a_proxy%vspace%get_ndf()" in generated_code
assert "undf_aspc5_a = a_proxy%vspace%get_undf()" in generated_code
- assert "CALL qr%compute_function(BASIS, b_proxy%fs_to, " in generated_code
- assert ("CALL qr%compute_function(BASIS, d_proxy%fs_from, " in
+ assert "call qr%compute_function(BASIS, b_proxy%fs_to, " in generated_code
+ assert ("call qr%compute_function(BASIS, d_proxy%fs_from, " in
generated_code)
- assert ("CALL qr%compute_function(DIFF_BASIS, d_proxy%fs_from, " in
+ assert ("call qr%compute_function(DIFF_BASIS, d_proxy%fs_from, " in
generated_code)
assert "map_aspc5_a => a_proxy%vspace%get_whole_dofmap()" in generated_code
assert "map_aspc4_d => f_proxy%vspace%get_whole_dofmap()" in generated_code
@@ -453,18 +451,18 @@ def test_op_any_discontinuous_space_1(tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
- assert "REAL(KIND=r_def), intent(in) :: rdt" in generated_code
- assert ("INTEGER(KIND=i_def), pointer :: map_adspc1_f1(:,:) => null()"
- in generated_code)
- assert ("INTEGER(KIND=i_def) ndf_adspc1_f1, undf_adspc1_f1"
+ assert "real(kind=r_def), intent(in) :: rdt" in generated_code
+ assert ("integer(kind=i_def), pointer :: map_adspc1_f1(:,:) => null()"
in generated_code)
+ assert "integer(kind=i_def) :: ndf_adspc1_f1" in generated_code
+ assert "integer(kind=i_def) :: undf_adspc1_f1" in generated_code
assert "ndf_adspc1_f1 = f1_proxy(1)%vspace%get_ndf()" in generated_code
assert "undf_adspc1_f1 = f1_proxy(1)%vspace%get_undf()" in generated_code
assert ("map_adspc1_f1 => f1_proxy(1)%vspace%get_whole_dofmap()"
in generated_code)
assert "ndf_adspc3_op4 = op4_proxy%fs_to%get_ndf()" in generated_code
assert "ndf_adspc7_op4 = op4_proxy%fs_from%get_ndf()" in generated_code
- assert ("CALL testkern_any_discontinuous_space_op_1_code(cell, "
+ assert ("call testkern_any_discontinuous_space_op_1_code(cell, "
"nlayers_f1, f1_1_data, f1_2_data, f1_3_data, "
"f2_data, op3_proxy%ncell_3d, op3_local_stencil, "
"op4_proxy%ncell_3d, op4_local_stencil, rdt, "
@@ -496,13 +494,13 @@ def test_op_any_discontinuous_space_2(tmpdir):
assert "dim_adspc4_f1 = f1_proxy%vspace%get_dim_space()" in generated_code
assert ("diff_dim_adspc4_f1 = f1_proxy%vspace%get_dim_space_diff()"
in generated_code)
- assert ("ALLOCATE (basis_adspc1_op1_qr(dim_adspc1_op1, ndf_adspc1_op1"
+ assert ("ALLOCATE(basis_adspc1_op1_qr(dim_adspc1_op1,ndf_adspc1_op1"
in generated_code)
- assert ("ALLOCATE (diff_basis_adspc4_f1_qr(diff_dim_adspc4_f1, "
+ assert ("ALLOCATE(diff_basis_adspc4_f1_qr(diff_dim_adspc4_f1,"
"ndf_adspc4_f1" in generated_code)
- assert ("CALL qr%compute_function(BASIS, op1_proxy%fs_to, dim_adspc1_op1, "
+ assert ("call qr%compute_function(BASIS, op1_proxy%fs_to, dim_adspc1_op1, "
"ndf_adspc1_op1, basis_adspc1_op1_qr)" in generated_code)
- assert ("CALL qr%compute_function(DIFF_BASIS, f1_proxy%vspace, "
+ assert ("call qr%compute_function(DIFF_BASIS, f1_proxy%vspace, "
"diff_dim_adspc4_f1, ndf_adspc4_f1, diff_basis_adspc4_f1_qr)"
in generated_code)
@@ -714,27 +712,27 @@ def test_kernel_specific(tmpdir):
api=TEST_API)
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
generated_code = str(psy.gen)
- output0 = "USE enforce_bc_kernel_mod, ONLY: enforce_bc_code"
+ output0 = "use enforce_bc_kernel_mod, only : enforce_bc_code"
assert output0 not in generated_code
- output1 = "USE function_space_mod, ONLY: w1, w2, w2h, w2v\n"
+ output1 = "use function_space_mod, only : w1, w2, w2h, w2v\n"
assert output1 not in generated_code
- output2 = "INTEGER(KIND=i_def) fs"
+ output2 = "integer(kind=i_def) fs"
assert output2 not in generated_code
- output3 = "INTEGER(KIND=i_def), pointer :: boundary_dofs(:,:) => null()"
+ output3 = "integer(kind=i_def), pointer :: boundary_dofs(:,:) => null()"
assert output3 not in generated_code
output4 = "fs = f1%which_function_space()"
assert output4 not in generated_code
# We only call enforce_bc if the field is on a vector space
output5 = (
- "IF (fs == w1 .or. fs == w2 .or. fs == w2h .or. fs == w2v .or. "
- "fs == any_w2) THEN\n"
+ "if (fs == w1 .or. fs == w2 .or. fs == w2h .or. fs == w2v .or. "
+ "fs == any_w2) then\n"
" boundary_dofs => f1_proxy%vspace%get_boundary_dofs()\n"
- " END IF")
+ " end if")
assert output5 not in generated_code
output6 = (
- "IF (fs == w1 .or. fs == w2 .or. fs == w2h .or. fs == w2v .or. "
- "fs == any_w2) THEN\n"
- " CALL enforce_bc_code(nlayers, f1_proxy%data, "
+ "if (fs == w1 .or. fs == w2 .or. fs == w2h .or. fs == w2v .or. "
+ "fs == any_w2) then\n"
+ " call enforce_bc_code(nlayers, f1_proxy%data, "
"ndf_anyspc1_f1, undf_anyspc1_f1, map_anyspc1_f1(:,cell), "
"boundary_dofs)")
assert output6 not in generated_code
@@ -756,51 +754,51 @@ def test_multi_kernel_specific(tmpdir):
generated_code = str(psy.gen)
# Output must not contain any bc-related code
- output0 = "USE enforce_bc_kernel_mod, ONLY: enforce_bc_code"
+ output0 = "use enforce_bc_kernel_mod, only : enforce_bc_code"
assert generated_code.count(output0) == 0
- output1 = "USE function_space_mod, ONLY: w1, w2, w2h, w2v, any_w2\n"
+ output1 = "use function_space_mod, only : w1, w2, w2h, w2v, any_w2\n"
assert generated_code.count(output1) == 0
# first loop
- output1 = "INTEGER(KIND=i_def) fs\n"
+ output1 = "integer(kind=i_def) fs\n"
assert output1 not in generated_code
- output2 = "INTEGER(KIND=i_def), pointer :: boundary_dofs(:,:) => null()"
+ output2 = "integer(kind=i_def), pointer :: boundary_dofs(:,:) => null()"
assert output2 not in generated_code
output3 = "fs = f1%which_function_space()"
assert output3 not in generated_code
# We only call enforce_bc if the field is on a vector space
output4 = (
- "IF (fs == w1 .or. fs == w2 .or. fs == w2h .or. fs == w2v .or. "
- "fs == any_w2) THEN\n"
+ "if (fs == w1 .or. fs == w2 .or. fs == w2h .or. fs == w2v .or. "
+ "fs == any_w2) then\n"
" boundary_dofs => f1_proxy%vspace%get_boundary_dofs()\n"
- " END IF")
+ " end if")
assert output4 not in generated_code
output5 = (
- "IF (fs == w1 .or. fs == w2 .or. fs == w2h .or. fs == w2v .or. "
- "fs == any_w2) THEN\n"
- " CALL enforce_bc_code(nlayers, f1_proxy%data, "
+ "if (fs == w1 .or. fs == w2 .or. fs == w2h .or. fs == w2v .or. "
+ "fs == any_w2) then\n"
+ " call enforce_bc_code(nlayers, f1_proxy%data, "
"ndf_anyspc1_f1, undf_anyspc1_f1, map_anyspc1_f1(:,cell), "
"boundary_dofs)")
assert output5 not in generated_code
# second loop
- output6 = "INTEGER(KIND=i_def) fs_1\n"
+ output6 = "integer(kind=i_def) fs_1\n"
assert output6 not in generated_code
- output7 = "INTEGER(KIND=i_def), pointer :: boundary_dofs_1(:,:) => null()"
+ output7 = "integer(kind=i_def), pointer :: boundary_dofs_1(:,:) => null()"
assert output7 not in generated_code
output8 = "fs_1 = f1%which_function_space()"
assert output8 not in generated_code
output9 = (
- "IF (fs_1 == w1 .or. fs_1 == w2 .or. fs_1 == w2h .or. fs_1 == w2v "
+ "if (fs_1 == w1 .or. fs_1 == w2 .or. fs_1 == w2h .or. fs_1 == w2v "
".or. fs_1 == any_w2) "
- "THEN\n"
+ "then\n"
" boundary_dofs_1 => f1_proxy%vspace%get_boundary_dofs()\n"
- " END IF")
+ " end if")
assert output9 not in generated_code
output10 = (
- "IF (fs_1 == w1 .or. fs_1 == w2 .or. fs_1 == w2h .or. fs_1 == w2v "
- ".or. fs_1 == any_w2) THEN\n"
- " CALL enforce_bc_code(nlayers, f1_proxy%data, "
+ "if (fs_1 == w1 .or. fs_1 == w2 .or. fs_1 == w2h .or. fs_1 == w2v "
+ ".or. fs_1 == any_w2) then\n"
+ " call enforce_bc_code(nlayers, f1_proxy%data, "
"ndf_anyspc1_f1, undf_anyspc1_f1, map_anyspc1_f1(:,cell), "
"boundary_dofs_1)")
assert output10 not in generated_code
@@ -822,13 +820,13 @@ def test_field_bc_kernel(tmpdir):
"12.2_enforce_bc_kernel.f90"),
api=TEST_API)
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
- gen_code = str(psy.gen)
- assert ("INTEGER(KIND=i_def), pointer :: boundary_dofs_a(:,:) => "
- "null()" in gen_code)
- assert "boundary_dofs_a => a_proxy%vspace%get_boundary_dofs()" in gen_code
- assert ("CALL enforce_bc_code(nlayers_a, a_data, ndf_aspc1_a, "
+ code = str(psy.gen)
+ assert ("integer(kind=i_def), pointer :: boundary_dofs_a(:,:) => "
+ "null()" in code)
+ assert "boundary_dofs_a => a_proxy%vspace%get_boundary_dofs()" in code
+ assert ("call enforce_bc_code(nlayers_a, a_data, ndf_aspc1_a, "
"undf_aspc1_a, map_aspc1_a(:,cell), boundary_dofs_a)"
- in gen_code)
+ in code)
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -867,12 +865,12 @@ def test_bc_kernel_field_only(monkeypatch, annexed, dist_mem):
# function which we create using lambda.
monkeypatch.setattr(arg, "ref_name",
lambda function_space=None: "vspace")
- with pytest.raises(VisitorError) as excinfo:
+ with pytest.raises(VisitorError) as err:
_ = psy.gen
const = LFRicConstants()
assert (f"Expected an argument of {const.VALID_FIELD_NAMES} type "
f"from which to look-up boundary dofs for kernel "
- "enforce_bc_code but got 'gh_operator'" in str(excinfo.value))
+ "enforce_bc_code but got 'gh_operator'" in str(err.value))
def test_bc_kernel_anyspace1_only():
@@ -934,7 +932,7 @@ def test_multikernel_invoke_1(tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
# Check that argument names are not replicated
- assert "SUBROUTINE invoke_0(a, f1, f2, m1, m2)" in generated_code
+ assert "subroutine invoke_0(a, f1, f2, m1, m2)" in generated_code
# Check that only one proxy initialisation is produced
assert "f1_proxy = f1%get_proxy()" in generated_code
# Check that we only initialise dofmaps once
@@ -953,7 +951,7 @@ def test_multikernel_invoke_qr(tmpdir):
generated_code = psy.gen
# simple check that two kernel calls exist
- assert str(generated_code).count("CALL testkern_qr_code") == 2
+ assert str(generated_code).count("call testkern_qr_code") == 2
def test_multikern_invoke_oper():
@@ -965,9 +963,9 @@ def test_multikern_invoke_oper():
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
generated_code = str(psy.gen)
# 1st test for duplication of name vector-field declaration
- assert "TYPE(field_type), intent(in) :: f1(3), f1(3)" not in generated_code
+ assert "type(field_type), intent(in) :: f1(3), f1(3)" not in generated_code
# 2nd test for duplication of name vector-field declaration
- assert "TYPE(field_proxy_type) f1_proxy(3), f1_proxy(3)" not in \
+ assert "type(field_proxy_type) f1_proxy(3), f1_proxy(3)" not in \
generated_code
@@ -984,17 +982,17 @@ def test_2kern_invoke_any_space(tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
- assert ("INTEGER(KIND=i_def), pointer :: map_aspc1_f1(:,:) => null(), "
- "map_aspc1_f2(:,:) => null()\n" in gen)
+ assert "integer(kind=i_def), pointer :: map_aspc1_f1(:,:) => null()" in gen
+ assert "integer(kind=i_def), pointer :: map_aspc1_f2(:,:) => null()" in gen
assert "map_aspc1_f1 => f1_proxy%vspace%get_whole_dofmap()\n" in gen
assert "map_aspc1_f2 => f2_proxy%vspace%get_whole_dofmap()\n" in gen
assert (
- " CALL testkern_any_space_2_code(cell, nlayers_f1, f1_data,"
+ " call testkern_any_space_2_code(cell, nlayers_f1, f1_data,"
" f2_data, op_proxy%ncell_3d, op_local_stencil, scalar, "
"ndf_aspc1_f1, undf_aspc1_f1, map_aspc1_f1(:,cell))\n" in gen)
assert "map_aspc1_f2 => f2_proxy%vspace%get_whole_dofmap()\n" in gen
assert (
- " CALL testkern_any_space_2_code(cell, nlayers_f2, f2_data,"
+ " call testkern_any_space_2_code(cell, nlayers_f2, f2_data,"
" f1_data, op_proxy%ncell_3d, op_local_stencil, scalar, "
"ndf_aspc1_f2, undf_aspc1_f2, map_aspc1_f2(:,cell))\n" in gen)
@@ -1012,27 +1010,34 @@ def test_multikern_invoke_any_space(tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
- assert ("INTEGER(KIND=i_def), pointer :: map_aspc1_f1(:,:) => null(), "
- "map_aspc1_f2(:,:) => null(), map_aspc2_f1(:,:) => null(), "
- "map_aspc2_f2(:,:) => null(), map_w0(:,:) => null()" in gen)
+ assert "integer(kind=i_def), pointer :: map_aspc1_f1(:,:) => null()" in gen
+ assert "integer(kind=i_def), pointer :: map_aspc1_f2(:,:) => null()" in gen
+ assert "integer(kind=i_def), pointer :: map_aspc2_f1(:,:) => null()" in gen
+ assert "integer(kind=i_def), pointer :: map_w0(:,:) => null()" in gen
+ assert (
+ "real(kind=r_def), allocatable :: basis_aspc1_f1_qr(:,:,:,:)") in gen
+ assert (
+ "real(kind=r_def), allocatable :: basis_aspc1_f2_qr(:,:,:,:)") in gen
+ assert (
+ "real(kind=r_def), allocatable :: basis_aspc2_f1_qr(:,:,:,:)") in gen
+ assert (
+ "real(kind=r_def), allocatable :: basis_aspc2_f2_qr(:,:,:,:)") in gen
assert (
- "REAL(KIND=r_def), allocatable :: basis_aspc1_f1_qr(:,:,:,:), "
- "basis_aspc2_f2_qr(:,:,:,:), diff_basis_w0_qr(:,:,:,:), "
- "basis_aspc1_f2_qr(:,:,:,:), basis_aspc2_f1_qr(:,:,:,:)" in gen)
+ "real(kind=r_def), allocatable :: diff_basis_w0_qr(:,:,:,:)") in gen
assert "ndf_aspc1_f1 = f1_proxy%vspace%get_ndf()" in gen
assert "ndf_aspc2_f2 = f2_proxy%vspace%get_ndf()" in gen
assert "ndf_w0 = f3_proxy(1)%vspace%get_ndf()" in gen
assert "ndf_aspc1_f2 = f2_proxy%vspace%get_ndf()" in gen
- assert ("CALL qr%compute_function(BASIS, f2_proxy%vspace, "
+ assert ("call qr%compute_function(BASIS, f2_proxy%vspace, "
"dim_aspc1_f2, ndf_aspc1_f2, basis_aspc1_f2_qr)" in gen)
assert (
- " map_aspc1_f1 => f1_proxy%vspace%get_whole_dofmap()\n"
- " map_aspc2_f2 => f2_proxy%vspace%get_whole_dofmap()\n"
- " map_w0 => f3_proxy(1)%vspace%get_whole_dofmap()\n"
- " map_aspc1_f2 => f2_proxy%vspace%get_whole_dofmap()\n"
- " map_aspc2_f1 => f1_proxy%vspace%get_whole_dofmap()\n"
+ " map_aspc1_f1 => f1_proxy%vspace%get_whole_dofmap()\n"
+ " map_aspc2_f2 => f2_proxy%vspace%get_whole_dofmap()\n"
+ " map_w0 => f3_proxy(1)%vspace%get_whole_dofmap()\n"
+ " map_aspc1_f2 => f2_proxy%vspace%get_whole_dofmap()\n"
+ " map_aspc2_f1 => f1_proxy%vspace%get_whole_dofmap()\n"
in gen)
- assert ("CALL testkern_any_space_1_code(nlayers_f1, f1_data, rdt, "
+ assert ("call testkern_any_space_1_code(nlayers_f1, f1_data, rdt, "
"f2_data, f3_1_data, f3_2_data, "
"f3_3_data, ndf_aspc1_f1, undf_aspc1_f1, "
"map_aspc1_f1(:,cell), basis_aspc1_f1_qr, ndf_aspc2_f2, "
@@ -1055,10 +1060,10 @@ def test_mkern_invoke_multiple_any_spaces(tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
assert "ndf_aspc1_f1 = f1_proxy%vspace%get_ndf()" in gen
- assert ("CALL qr%compute_function(BASIS, f1_proxy%vspace, "
+ assert ("call qr%compute_function(BASIS, f1_proxy%vspace, "
"dim_aspc1_f1, ndf_aspc1_f1, basis_aspc1_f1_qr)" in gen)
assert "ndf_aspc2_f2 = f2_proxy%vspace%get_ndf()" in gen
- assert ("CALL qr%compute_function(BASIS, f2_proxy%vspace, "
+ assert ("call qr%compute_function(BASIS, f2_proxy%vspace, "
"dim_aspc2_f2, ndf_aspc2_f2, basis_aspc2_f2_qr)" in gen)
assert "ndf_aspc1_f2 = f2_proxy%vspace%get_ndf()" in gen
assert "ndf_aspc1_op = op_proxy%fs_to%get_ndf()" in gen
@@ -1071,19 +1076,19 @@ def test_mkern_invoke_multiple_any_spaces(tmpdir):
# testkern_any_space_1_type requires GH_BASIS on ANY_SPACE_1 and 2 and
# DIFF_BASIS on w0
# f1 is on ANY_SPACE_1 and f2 is on ANY_SPACE_2. f3 is on W0.
- assert ("CALL qr%compute_function(BASIS, f1_proxy%vspace, "
+ assert ("call qr%compute_function(BASIS, f1_proxy%vspace, "
"dim_aspc1_f1, ndf_aspc1_f1, basis_aspc1_f1_qr)" in gen)
- assert ("CALL qr%compute_function(BASIS, f2_proxy%vspace, "
+ assert ("call qr%compute_function(BASIS, f2_proxy%vspace, "
"dim_aspc2_f2, ndf_aspc2_f2, basis_aspc2_f2_qr)" in gen)
# testkern_any_space_4_type needs GH_BASIS on ANY_SPACE_1 which is the
# to-space of op2
- assert ("CALL qr%compute_function(BASIS, op2_proxy%fs_to, "
+ assert ("call qr%compute_function(BASIS, op2_proxy%fs_to, "
"dim_aspc1_op2, ndf_aspc1_op2, basis_aspc1_op2_qr)" in gen)
# Need GH_BASIS and DIFF_BASIS on ANY_SPACE_4 which is to/from-space
# of op4
- assert ("CALL qr%compute_function(BASIS, op4_proxy%fs_from, "
+ assert ("call qr%compute_function(BASIS, op4_proxy%fs_from, "
"dim_aspc4_op4, ndf_aspc4_op4, basis_aspc4_op4_qr)" in gen)
- assert ("CALL qr%compute_function(DIFF_BASIS, op4_proxy%fs_from, "
+ assert ("call qr%compute_function(DIFF_BASIS, op4_proxy%fs_from, "
"diff_dim_aspc4_op4, ndf_aspc4_op4, diff_basis_aspc4_op4_qr)"
in gen)
@@ -1107,7 +1112,7 @@ def test_loopfuse(dist_mem, tmpdir):
trans.apply(loop1, loop2)
generated_code = psy.gen
# only one loop
- assert str(generated_code).count("DO cell") == 1
+ assert str(generated_code).count("do cell") == 1
# only one map for each space
assert str(generated_code).count("map_w1 =>") == 1
assert str(generated_code).count("map_w2 =>") == 1
@@ -1115,11 +1120,11 @@ def test_loopfuse(dist_mem, tmpdir):
# kernel call tests
kern_idxs = []
for idx, line in enumerate(str(generated_code).split('\n')):
- if "DO cell" in line:
+ if "do cell" in line:
do_idx = idx
- if "CALL testkern_code(" in line:
+ if "call testkern_code(" in line:
kern_idxs.append(idx)
- if "END DO" in line:
+ if "enddo" in line:
enddo_idx = idx
# two kernel calls
assert len(kern_idxs) == 2
@@ -1138,12 +1143,12 @@ def test_named_psy_routine(dist_mem, tmpdir):
api=TEST_API)
psy = PSyFactory(TEST_API,
distributed_memory=dist_mem).create(invoke_info)
- gen_code = str(psy.gen)
+ code = str(psy.gen)
assert LFRicBuild(tmpdir).code_compiles(psy)
# Name should be all lower-case and with spaces replaced by underscores
- assert "SUBROUTINE invoke_important_invoke" in gen_code
+ assert "subroutine invoke_important_invoke" in code
# Tests for LFRic stub generator
@@ -1507,6 +1512,7 @@ def test_dynkernelargument_psyir_expression(monkeypatch):
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
first_invoke = psy.invokes.invoke_list[0]
kern = first_invoke.schedule.walk(LFRicKern)[0]
+ first_invoke.setup_psy_layer_symbols()
psyir = kern.arguments.args[1].psyir_expression()
assert isinstance(psyir, Reference)
assert psyir.symbol.name == "cma_op1_cma_matrix"
@@ -2379,11 +2385,11 @@ def test_halo_dirty_1():
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
generated_code = str(psy.gen)
expected = (
- " END DO\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the above loop\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n")
+ " enddo\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the above loop(s)"
+ "\n"
+ " call f1_proxy%set_dirty()\n")
assert expected in generated_code
@@ -2395,19 +2401,19 @@ def test_halo_dirty_2(tmpdir):
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
generated_code = str(psy.gen)
expected = (
- " END DO\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the above loop\n"
- " !\n"
- " CALL f1_proxy%set_dirty()\n"
- " CALL f1_proxy%set_clean(1)\n"
- " CALL f3_proxy%set_dirty()\n"
- " CALL f5_proxy%set_dirty()\n"
- " CALL f5_proxy%set_clean(1)\n"
- " CALL f6_proxy%set_dirty()\n"
- " CALL f6_proxy%set_clean(1)\n"
- " CALL f7_proxy%set_dirty()\n"
- " CALL f8_proxy%set_dirty()\n")
+ " enddo\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the above loop(s)"
+ "\n"
+ " call f1_proxy%set_dirty()\n"
+ " call f1_proxy%set_clean(1)\n"
+ " call f3_proxy%set_dirty()\n"
+ " call f5_proxy%set_dirty()\n"
+ " call f5_proxy%set_clean(1)\n"
+ " call f6_proxy%set_dirty()\n"
+ " call f6_proxy%set_clean(1)\n"
+ " call f7_proxy%set_dirty()\n"
+ " call f8_proxy%set_dirty()\n")
assert expected in generated_code
@@ -2421,7 +2427,7 @@ def test_halo_dirty_3():
api=TEST_API)
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
generated_code = psy.gen
- assert str(generated_code).count("CALL f1_proxy%set_dirty()") == 2
+ assert str(generated_code).count("call f1_proxy%set_dirty()") == 2
def test_halo_dirty_4():
@@ -2431,14 +2437,14 @@ def test_halo_dirty_4():
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
generated_code = str(psy.gen)
expected = (
- " END DO\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the above loop\n"
- " !\n"
- " CALL chi_proxy(1)%set_dirty()\n"
- " CALL chi_proxy(2)%set_dirty()\n"
- " CALL chi_proxy(3)%set_dirty()\n"
- " CALL f1_proxy%set_dirty()\n")
+ " enddo\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the above loop(s)"
+ "\n"
+ " call chi_proxy(1)%set_dirty()\n"
+ " call chi_proxy(2)%set_dirty()\n"
+ " call chi_proxy(3)%set_dirty()\n"
+ " call f1_proxy%set_dirty()\n")
assert expected in generated_code
@@ -2472,12 +2478,12 @@ def test_halo_exchange(tmpdir):
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
generated_code = str(psy.gen)
output1 = (
- " IF (f2_proxy%is_dirty(depth=f2_extent + 1)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=f2_extent + 1)\n"
- " END IF\n")
+ " if (f2_proxy%is_dirty(depth=f2_extent + 1)) then\n"
+ " call f2_proxy%halo_exchange(depth=f2_extent + 1)\n"
+ " end if\n")
assert output1 in generated_code
assert "loop0_stop = mesh%get_last_halo_cell(1)\n" in generated_code
- assert "DO cell = loop0_start, loop0_stop, 1\n" in generated_code
+ assert "do cell = loop0_start, loop0_stop, 1\n" in generated_code
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -2499,31 +2505,31 @@ def test_halo_exchange_inc(monkeypatch, annexed):
result = str(psy.gen)
output0 = (
- " IF (a_proxy%is_dirty(depth=1)) THEN\n"
- " CALL a_proxy%halo_exchange(depth=1)\n"
- " END IF\n")
+ " if (a_proxy%is_dirty(depth=1)) then\n"
+ " call a_proxy%halo_exchange(depth=1)\n"
+ " end if\n")
output1 = (
- " IF (b_proxy%is_dirty(depth=1)) THEN\n"
- " CALL b_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (d_proxy%is_dirty(depth=1)) THEN\n"
- " CALL d_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (e_proxy(1)%is_dirty(depth=1)) THEN\n"
- " CALL e_proxy(1)%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (e_proxy(2)%is_dirty(depth=1)) THEN\n"
- " CALL e_proxy(2)%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (e_proxy(3)%is_dirty(depth=1)) THEN\n"
- " CALL e_proxy(3)%halo_exchange(depth=1)\n"
- " END IF\n"
- " DO cell = loop0_start, loop0_stop, 1\n")
+ " if (b_proxy%is_dirty(depth=1)) then\n"
+ " call b_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (d_proxy%is_dirty(depth=1)) then\n"
+ " call d_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (e_proxy(1)%is_dirty(depth=1)) then\n"
+ " call e_proxy(1)%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (e_proxy(2)%is_dirty(depth=1)) then\n"
+ " call e_proxy(2)%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (e_proxy(3)%is_dirty(depth=1)) then\n"
+ " call e_proxy(3)%halo_exchange(depth=1)\n"
+ " end if\n"
+ " do cell = loop0_start, loop0_stop, 1\n")
output2 = (
- " IF (f_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " DO cell = loop1_start, loop1_stop, 1\n")
+ " if (f_proxy%is_dirty(depth=1)) then\n"
+ " call f_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " do cell = loop1_start, loop1_stop, 1\n")
assert "loop0_stop = mesh%get_last_halo_cell(1)\n" in result
assert "loop1_stop = mesh%get_last_halo_cell(1)\n" in result
assert output1 in result
@@ -2602,10 +2608,10 @@ def test_halo_exchange_vectors_1(monkeypatch, annexed, tmpdir):
for idx in range(1, 4):
assert "f1_proxy("+str(idx)+")%halo_exchange(depth=1)" in result
assert "loop0_stop = mesh%get_last_halo_cell(1)\n" in result
- expected = (" IF (f1_proxy(3)%is_dirty(depth=1)) THEN\n"
- " CALL f1_proxy(3)%halo_exchange(depth=1)\n"
- " END IF\n"
- " DO cell = loop0_start, loop0_stop, 1\n")
+ expected = (" if (f1_proxy(3)%is_dirty(depth=1)) then\n"
+ " call f1_proxy(3)%halo_exchange(depth=1)\n"
+ " end if\n"
+ " do cell = loop0_start, loop0_stop, 1\n")
assert expected in result
@@ -2633,11 +2639,10 @@ def test_halo_exchange_vectors(monkeypatch, annexed):
for idx in range(1, 4):
assert ("f2_proxy("+str(idx)+")%halo_exchange("
"depth=f2_extent + 1)" in result)
- expected = (" IF (f2_proxy(4)%is_dirty(depth=f2_extent + 1)) "
- "THEN\n"
- " CALL f2_proxy(4)%halo_exchange(depth=f2_extent + 1)\n"
- " END IF\n"
- " DO cell = loop0_start, loop0_stop, 1\n")
+ expected = (" if (f2_proxy(4)%is_dirty(depth=f2_extent + 1)) then\n"
+ " call f2_proxy(4)%halo_exchange(depth=f2_extent + 1)\n"
+ " end if\n"
+ " do cell = loop0_start, loop0_stop, 1\n")
assert expected in result
@@ -2654,16 +2659,16 @@ def test_halo_exchange_depths(tmpdir):
assert "loop0_stop = mesh%get_last_edge_cell()" in result
- expected = (" IF (f2_proxy%is_dirty(depth=extent)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=extent)\n"
- " END IF\n"
- " IF (f3_proxy%is_dirty(depth=extent)) THEN\n"
- " CALL f3_proxy%halo_exchange(depth=extent)\n"
- " END IF\n"
- " IF (f4_proxy%is_dirty(depth=extent)) THEN\n"
- " CALL f4_proxy%halo_exchange(depth=extent)\n"
- " END IF\n"
- " DO cell = loop0_start, loop0_stop, 1\n")
+ expected = (" if (f2_proxy%is_dirty(depth=extent)) then\n"
+ " call f2_proxy%halo_exchange(depth=extent)\n"
+ " end if\n"
+ " if (f3_proxy%is_dirty(depth=extent)) then\n"
+ " call f3_proxy%halo_exchange(depth=extent)\n"
+ " end if\n"
+ " if (f4_proxy%is_dirty(depth=extent)) then\n"
+ " call f4_proxy%halo_exchange(depth=extent)\n"
+ " end if\n"
+ " do cell = loop0_start, loop0_stop, 1\n")
assert expected in result
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -2688,20 +2693,20 @@ def test_halo_exchange_depths_gh_inc(tmpdir, monkeypatch, annexed):
result = str(psy.gen)
expected1 = (
- " IF (f1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f1_proxy%halo_exchange(depth=1)\n"
- " END IF\n")
+ " if (f1_proxy%is_dirty(depth=1)) then\n"
+ " call f1_proxy%halo_exchange(depth=1)\n"
+ " end if\n")
expected2 = (
- " IF (f2_proxy%is_dirty(depth=f2_extent + 1)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=f2_extent + 1)\n"
- " END IF\n"
- " IF (f3_proxy%is_dirty(depth=f3_extent + 1)) THEN\n"
- " CALL f3_proxy%halo_exchange(depth=f3_extent + 1)\n"
- " END IF\n"
- " IF (f4_proxy%is_dirty(depth=f4_extent + 1)) THEN\n"
- " CALL f4_proxy%halo_exchange(depth=f4_extent + 1)\n"
- " END IF\n"
- " DO cell = loop0_start, loop0_stop, 1\n")
+ " if (f2_proxy%is_dirty(depth=f2_extent + 1)) then\n"
+ " call f2_proxy%halo_exchange(depth=f2_extent + 1)\n"
+ " end if\n"
+ " if (f3_proxy%is_dirty(depth=f3_extent + 1)) then\n"
+ " call f3_proxy%halo_exchange(depth=f3_extent + 1)\n"
+ " end if\n"
+ " if (f4_proxy%is_dirty(depth=f4_extent + 1)) then\n"
+ " call f4_proxy%halo_exchange(depth=f4_extent + 1)\n"
+ " end if\n"
+ " do cell = loop0_start, loop0_stop, 1\n")
if not annexed:
assert expected1 in result
assert expected2 in result
@@ -2746,8 +2751,8 @@ def test_no_mesh_mod(tmpdir):
result = str(psy.gen)
assert LFRicBuild(tmpdir).code_compiles(psy)
- assert "USE mesh_mod, ONLY: mesh_type" not in result
- assert "TYPE(mesh_type), pointer :: mesh => null()" not in result
+ assert "use mesh_mod, only : mesh_type" not in result
+ assert "type(mesh_type), pointer :: mesh => null()" not in result
assert "mesh => a_proxy%vspace%get_mesh()" not in result
@@ -2761,12 +2766,11 @@ def test_mesh_mod(tmpdir):
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
result = str(psy.gen)
assert LFRicBuild(tmpdir).code_compiles(psy)
- assert "USE mesh_mod, ONLY: mesh_type" in result
- assert "TYPE(mesh_type), pointer :: mesh => null()" in result
- output = (" !\n"
- " ! Create a mesh object\n"
- " !\n"
- " mesh => a_proxy%vspace%get_mesh()\n")
+ assert "use mesh_mod, only : mesh_type" in result
+ assert "type(mesh_type), pointer :: mesh => null()" in result
+ output = ("\n"
+ " ! Create a mesh object\n"
+ " mesh => a_proxy%vspace%get_mesh()\n")
assert output in result
# When we add build tests we should test that we can we get the mesh
@@ -2806,32 +2810,32 @@ def test_derived_type_arg(dist_mem, tmpdir):
# Check the four integer variables are named and declared correctly
expected = (
- " SUBROUTINE invoke_0(f1, my_obj_iflag, f2, m1, m2, "
+ " subroutine invoke_0(f1, my_obj_iflag, f2, m1, m2, "
"my_obj_get_flag, my_obj_get_flag_1, my_obj_get_flag_2)\n")
assert expected in gen
- expected = (
- " INTEGER(KIND=i_def), intent(in) :: my_obj_iflag, "
- "my_obj_get_flag, my_obj_get_flag_1, my_obj_get_flag_2\n")
- assert expected in gen
+ assert "integer(kind=i_def), intent(in) :: my_obj_iflag" in gen
+ assert "integer(kind=i_def), intent(in) :: my_obj_get_flag" in gen
+ assert "integer(kind=i_def), intent(in) :: my_obj_get_flag_1" in gen
+ assert "integer(kind=i_def), intent(in) :: my_obj_get_flag_2" in gen
# Check that they are still named correctly when passed to the
# kernels
assert (
- "CALL testkern_one_int_scalar_code(nlayers_f1, f1_data, "
+ "call testkern_one_int_scalar_code(nlayers_f1, f1_data, "
"my_obj_iflag, f2_data, m1_data, m2_data, "
"ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), "
"ndf_w3, undf_w3, map_w3(:,cell))" in gen)
assert (
- "CALL testkern_one_int_scalar_code(nlayers_f1, f1_data, "
+ "call testkern_one_int_scalar_code(nlayers_f1, f1_data, "
"my_obj_get_flag, f2_data, m1_data, m2_data, "
"ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), "
"ndf_w3, undf_w3, map_w3(:,cell))" in gen)
assert (
- "CALL testkern_one_int_scalar_code(nlayers_f1, f1_data, "
+ "call testkern_one_int_scalar_code(nlayers_f1, f1_data, "
"my_obj_get_flag_1, f2_data, m1_data, m2_data, "
"ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), "
"ndf_w3, undf_w3, map_w3(:,cell))" in gen)
assert (
- "CALL testkern_one_int_scalar_code(nlayers_f1, f1_data, "
+ "call testkern_one_int_scalar_code(nlayers_f1, f1_data, "
"my_obj_get_flag_2, f2_data, m1_data, m2_data, "
"ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), "
"ndf_w3, undf_w3, map_w3(:,cell))" in gen)
@@ -2853,32 +2857,32 @@ def test_multiple_derived_type_args(dist_mem, tmpdir):
# Check the four integer variables are named and declared correctly
expected = (
- " SUBROUTINE invoke_0(f1, obj_a_iflag, f2, m1, m2, "
+ " subroutine invoke_0(f1, obj_a_iflag, f2, m1, m2, "
"obj_b_iflag, obj_a_obj_b_iflag, obj_b_obj_a_iflag)\n")
assert expected in gen
- expected = (
- " INTEGER(KIND=i_def), intent(in) :: obj_a_iflag, obj_b_iflag, "
- "obj_a_obj_b_iflag, obj_b_obj_a_iflag\n")
- assert expected in gen
+ assert "integer(kind=i_def), intent(in) :: obj_a_iflag" in gen
+ assert "integer(kind=i_def), intent(in) :: obj_b_iflag" in gen
+ assert "integer(kind=i_def), intent(in) :: obj_a_obj_b_iflag" in gen
+ assert "integer(kind=i_def), intent(in) :: obj_b_obj_a_iflag" in gen
# Check that they are still named correctly when passed to the
# kernels
assert (
- "CALL testkern_one_int_scalar_code(nlayers_f1, f1_data, "
+ "call testkern_one_int_scalar_code(nlayers_f1, f1_data, "
"obj_a_iflag, f2_data, m1_data, m2_data, ndf_w1, "
"undf_w1, map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, "
"undf_w3, map_w3(:,cell))" in gen)
assert (
- "CALL testkern_one_int_scalar_code(nlayers_f1, f1_data, "
+ "call testkern_one_int_scalar_code(nlayers_f1, f1_data, "
"obj_b_iflag, f2_data, m1_data, m2_data, ndf_w1, "
"undf_w1, map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, "
"undf_w3, map_w3(:,cell))" in gen)
assert (
- "CALL testkern_one_int_scalar_code(nlayers_f1, f1_data, "
+ "call testkern_one_int_scalar_code(nlayers_f1, f1_data, "
"obj_a_obj_b_iflag, f2_data, m1_data, m2_data, "
"ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), "
"ndf_w3, undf_w3, map_w3(:,cell))" in gen)
assert (
- "CALL testkern_one_int_scalar_code(nlayers_f1, f1_data, "
+ "call testkern_one_int_scalar_code(nlayers_f1, f1_data, "
"obj_b_obj_a_iflag, f2_data, m1_data, m2_data, "
"ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), "
"ndf_w3, undf_w3, map_w3(:,cell))" in gen)
@@ -3116,65 +3120,56 @@ def test_multi_anyw2(dist_mem, tmpdir):
if dist_mem:
output = (
- " ! Look-up dofmaps for each function space\n"
- " !\n"
- " map_any_w2 => f1_proxy%vspace%get_whole_dofmap()\n"
- " !\n"
- " ! Initialise number of DoFs for any_w2\n"
- " !\n"
- " ndf_any_w2 = f1_proxy%vspace%get_ndf()\n"
- " undf_any_w2 = f1_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Set-up all of the loop bounds\n"
- " !\n"
- " loop0_start = 1\n"
- " loop0_stop = mesh%get_last_halo_cell(1)\n"
- " !\n"
- " ! Call kernels and communication routines\n"
- " !\n"
- " IF (f1_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f1_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f2_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f2_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f3_proxy%is_dirty(depth=1)) THEN\n"
- " CALL f3_proxy%halo_exchange(depth=1)\n"
- " END IF\n"
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_multi_anyw2_code(nlayers_f1, "
+ " ! Look-up dofmaps for each function space\n"
+ " map_any_w2 => f1_proxy%vspace%get_whole_dofmap()\n"
+ "\n"
+ " ! Initialise number of DoFs for any_w2\n"
+ " ndf_any_w2 = f1_proxy%vspace%get_ndf()\n"
+ " undf_any_w2 = f1_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Set-up all of the loop bounds\n"
+ " loop0_start = 1\n"
+ " loop0_stop = mesh%get_last_halo_cell(1)\n"
+ "\n"
+ " ! Call kernels and communication routines\n"
+ " if (f1_proxy%is_dirty(depth=1)) then\n"
+ " call f1_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f2_proxy%is_dirty(depth=1)) then\n"
+ " call f2_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f3_proxy%is_dirty(depth=1)) then\n"
+ " call f3_proxy%halo_exchange(depth=1)\n"
+ " end if\n"
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_multi_anyw2_code(nlayers_f1, "
"f1_data, f2_data, f3_data, ndf_any_w2, "
"undf_any_w2, map_any_w2(:,cell))\n"
- " END DO\n"
- " !\n"
- " ! Set halos dirty/clean for fields modified in the "
- "above loop\n"
- " !\n"
- " CALL f1_proxy%set_dirty()")
+ " enddo\n"
+ "\n"
+ " ! Set halos dirty/clean for fields modified in the "
+ "above loop(s)\n"
+ " call f1_proxy%set_dirty()")
assert output in generated_code
else:
output = (
- " ! Look-up dofmaps for each function space\n"
- " !\n"
- " map_any_w2 => f1_proxy%vspace%get_whole_dofmap()\n"
- " !\n"
- " ! Initialise number of DoFs for any_w2\n"
- " !\n"
- " ndf_any_w2 = f1_proxy%vspace%get_ndf()\n"
- " undf_any_w2 = f1_proxy%vspace%get_undf()\n"
- " !\n"
- " ! Set-up all of the loop bounds\n"
- " !\n"
- " loop0_start = 1\n"
- " loop0_stop = f1_proxy%vspace%get_ncell()\n"
- " !\n"
- " ! Call our kernels\n"
- " !\n"
- " DO cell = loop0_start, loop0_stop, 1\n"
- " CALL testkern_multi_anyw2_code(nlayers_f1, "
+ " ! Look-up dofmaps for each function space\n"
+ " map_any_w2 => f1_proxy%vspace%get_whole_dofmap()\n"
+ "\n"
+ " ! Initialise number of DoFs for any_w2\n"
+ " ndf_any_w2 = f1_proxy%vspace%get_ndf()\n"
+ " undf_any_w2 = f1_proxy%vspace%get_undf()\n"
+ "\n"
+ " ! Set-up all of the loop bounds\n"
+ " loop0_start = 1\n"
+ " loop0_stop = f1_proxy%vspace%get_ncell()\n"
+ "\n"
+ " ! Call kernels\n"
+ " do cell = loop0_start, loop0_stop, 1\n"
+ " call testkern_multi_anyw2_code(nlayers_f1, "
"f1_data, f2_data, f3_data, ndf_any_w2, "
"undf_any_w2, map_any_w2(:,cell))\n"
- " END DO")
+ " enddo")
assert output in generated_code
@@ -3208,19 +3203,17 @@ def test_anyw2_operators(dist_mem, tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
output = (
- " ! Initialise number of DoFs for any_w2\n"
- " !\n"
- " ndf_any_w2 = mm_w2_proxy%fs_from%get_ndf()\n"
- " undf_any_w2 = mm_w2_proxy%fs_from%get_undf()\n")
+ " ! Initialise number of DoFs for any_w2\n"
+ " ndf_any_w2 = mm_w2_proxy%fs_from%get_ndf()\n"
+ " undf_any_w2 = mm_w2_proxy%fs_from%get_undf()\n")
assert output in generated_code
output = (
- " dim_any_w2 = mm_w2_proxy%fs_from%get_dim_space()\n"
- " ALLOCATE (basis_any_w2_qr(dim_any_w2, ndf_any_w2, "
- "np_xy_qr, np_z_qr))\n"
- " !\n"
- " ! Compute basis/diff-basis arrays\n"
- " !\n"
- " CALL qr%compute_function(BASIS, mm_w2_proxy%fs_from, "
+ " dim_any_w2 = mm_w2_proxy%fs_from%get_dim_space()\n"
+ " ALLOCATE(basis_any_w2_qr(dim_any_w2,ndf_any_w2,"
+ "np_xy_qr,np_z_qr))\n"
+ "\n"
+ " ! Compute basis/diff-basis arrays\n"
+ " call qr%compute_function(BASIS, mm_w2_proxy%fs_from, "
"dim_any_w2, ndf_any_w2, basis_any_w2_qr)")
assert output in generated_code
@@ -3238,13 +3231,12 @@ def test_anyw2_stencils(dist_mem, tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
output = (
- " ! Initialise stencil dofmaps\n"
- " !\n"
- " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap"
- "(STENCIL_CROSS,extent)\n"
- " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
- " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
- " !\n")
+ " ! Initialise stencil dofmaps\n"
+ " f2_stencil_map => f2_proxy%vspace%get_stencil_dofmap"
+ "(STENCIL_CROSS, extent)\n"
+ " f2_stencil_dofmap => f2_stencil_map%get_whole_dofmap()\n"
+ " f2_stencil_size => f2_stencil_map%get_stencil_sizes()\n"
+ "\n")
assert output in generated_code
@@ -3373,11 +3365,11 @@ def test_HaloReadAccess_input_field():
object as input. If this is not the case an exception is raised. This
test checks that this exception is raised correctly.'''
with pytest.raises(GenerationError) as excinfo:
- _ = HaloReadAccess(None, None)
+ _ = HaloReadAccess(None, Schedule())
assert (
f"Generation Error: HaloInfo class expects an argument of type "
f"DynArgument, or equivalent, on initialisation, but found, "
- f"'{type(None)}'" in str(excinfo.value))
+ f"'{type(None)}'" == str(excinfo.value))
def test_HaloReadAccess_field_in_call():
@@ -3394,7 +3386,7 @@ def test_HaloReadAccess_field_in_call():
halo_exchange = schedule.children[0]
field = halo_exchange.field
with pytest.raises(GenerationError) as excinfo:
- _ = HaloReadAccess(field, None)
+ _ = HaloReadAccess(field, Schedule())
assert ("field 'f1' should be from a call but found "
""
in str(excinfo.value))
@@ -3416,7 +3408,7 @@ def test_HaloReadAccess_field_not_reader():
kernel = loop.loop_body[0]
argument = kernel.arguments.args[0]
with pytest.raises(GenerationError) as excinfo:
- _ = HaloReadAccess(argument, None)
+ _ = HaloReadAccess(argument, Schedule())
assert (
"In HaloInfo class, field 'f1' should be one of ['gh_read', "
"'gh_readwrite', 'gh_inc', 'gh_readinc'], but found 'gh_write'"
@@ -3459,7 +3451,7 @@ def test_HaloReadAccess_discontinuous_field(tmpdir):
loop = schedule.children[0]
kernel = loop.loop_body[0]
arg = kernel.arguments.args[1]
- halo_access = HaloReadAccess(arg, schedule.symbol_table)
+ halo_access = HaloReadAccess(arg, schedule)
assert not halo_access.max_depth
assert halo_access.var_depth is None
assert halo_access.stencil_type is None
@@ -3628,11 +3620,11 @@ def test_no_halo_exchange_annex_dofs(tmpdir, monkeypatch,
assert LFRicBuild(tmpdir).code_compiles(psy)
if annexed:
- assert "CALL f1_proxy%halo_exchange" not in result
- assert "CALL f2_proxy%halo_exchange" not in result
+ assert "call f1_proxy%halo_exchange" not in result
+ assert "call f2_proxy%halo_exchange" not in result
else:
- assert "CALL f1_proxy%halo_exchange" in result
- assert "CALL f2_proxy%halo_exchange" in result
+ assert "call f1_proxy%halo_exchange" in result
+ assert "call f2_proxy%halo_exchange" in result
def test_annexed_default():
@@ -3683,23 +3675,6 @@ def test_lfriccollection_err1():
in str(err.value))
-def test_lfriccollection_err2(monkeypatch):
- ''' Check that the LFRicCollection constructor raises the expected
- error if it is not provided with an LFRicKern or LFRicInvoke. '''
-
- _, info = parse(os.path.join(BASE_PATH, "1_single_invoke.f90"),
- api=TEST_API)
- psy = PSyFactory(TEST_API, distributed_memory=True).create(info)
- invoke = psy.invokes.invoke_list[0]
- # Obtain a valid sub-class of a LFRicCollection
- proxies = invoke.proxies
- # Monkeypatch it to break internal state
- monkeypatch.setattr(proxies, "_invoke", None)
- with pytest.raises(InternalError) as err:
- proxies.declarations(ModuleGen(name="testmodule"))
- assert "LFRicCollection has neither a Kernel nor an Invoke" \
- in str(err.value)
-
# tests for class kerncallarglist position methods
@@ -3916,53 +3891,51 @@ def test_lfricinvoke_runtime(tmpdir, monkeypatch):
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
assert LFRicBuild(tmpdir).code_compiles(psy)
generated_code = str(psy.gen)
- expected1 = (
- " USE testkern_mod, ONLY: testkern_code\n"
- " USE log_mod, ONLY: log_event, LOG_LEVEL_ERROR\n"
- " USE fs_continuity_mod\n"
- " USE mesh_mod, ONLY: mesh_type\n")
- assert expected1 in generated_code
- expected2 = (
- " m2_proxy = m2%get_proxy()\n"
- " m2_data => m2_proxy%data\n"
- " !\n"
- " ! Perform run-time checks\n"
- " !\n"
- " ! Check field function space and kernel metadata function spac"
+ assert "use testkern_mod, only : testkern_code" in generated_code
+ assert "use log_mod, only : LOG_LEVEL_ERROR, log_event" in generated_code
+ assert "use fs_continuity_mod" in generated_code
+ assert "use mesh_mod, only : mesh_type" in generated_code
+ expected = (
+ " m2_proxy = m2%get_proxy()\n"
+ " m2_data => m2_proxy%data\n"
+ "\n"
+ " ! Perform run-time checks\n"
+ " ! Check field function space and kernel metadata function spac"
"es are compatible\n"
- " IF (f1%which_function_space() /= W1) THEN\n"
- " CALL log_event(\"In alg 'single_invoke' invoke 'invoke_0_tes"
+ " if (f1%which_function_space() /= W1) then\n"
+ " call log_event(\"In alg 'single_invoke' invoke 'invoke_0_tes"
"tkern_type', the field 'f1' is passed to kernel 'testkern_code' but "
"its function space is not compatible with the function space specifi"
"ed in the kernel metadata 'w1'.\", LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " IF (f2%which_function_space() /= W2) THEN\n"
- " CALL log_event(\"In alg 'single_invoke' invoke 'invoke_0_tes"
+ " end if\n"
+ " if (f2%which_function_space() /= W2) then\n"
+ " call log_event(\"In alg 'single_invoke' invoke 'invoke_0_tes"
"tkern_type', the field 'f2' is passed to kernel 'testkern_code' but "
"its function space is not compatible with the function space specifi"
"ed in the kernel metadata 'w2'.\", LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " IF (m1%which_function_space() /= W2) THEN\n"
- " CALL log_event(\"In alg 'single_invoke' invoke 'invoke_0_tes"
+ " end if\n"
+ " if (m1%which_function_space() /= W2) then\n"
+ " call log_event(\"In alg 'single_invoke' invoke 'invoke_0_tes"
"tkern_type', the field 'm1' is passed to kernel 'testkern_code' but "
"its function space is not compatible with the function space specifi"
"ed in the kernel metadata 'w2'.\", LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " IF (m2%which_function_space() /= W3) THEN\n"
- " CALL log_event(\"In alg 'single_invoke' invoke 'invoke_0_tes"
+ " end if\n"
+ " if (m2%which_function_space() /= W3) then\n"
+ " call log_event(\"In alg 'single_invoke' invoke 'invoke_0_tes"
"tkern_type', the field 'm2' is passed to kernel 'testkern_code' but "
"its function space is not compatible with the function space specifi"
"ed in the kernel metadata 'w3'.\", LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " ! Check that read-only fields are not modified\n"
- " IF (f1_proxy%vspace%is_readonly()) THEN\n"
- " CALL log_event(\"In alg 'single_invoke' invoke 'invoke_0_tes"
+ " end if\n"
+ "\n"
+ " ! Check that read-only fields are not modified\n"
+ " if (f1_proxy%vspace%is_readonly()) then\n"
+ " call log_event(\"In alg 'single_invoke' invoke 'invoke_0_tes"
"tkern_type', field 'f1' is on a read-only function space but is modi"
"fied by kernel 'testkern_code'.\", LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " !\n"
- " ! Initialise number of layers\n")
- assert expected2 in generated_code
+ " end if\n"
+ "\n"
+ " ! Initialise number of layers\n")
+ assert expected in generated_code
def test_dynruntimechecks_anyspace(tmpdir, monkeypatch):
@@ -3979,36 +3952,34 @@ def test_dynruntimechecks_anyspace(tmpdir, monkeypatch):
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
assert LFRicBuild(tmpdir).code_compiles(psy)
generated_code = str(psy.gen)
- expected1 = (
- " USE function_space_mod, ONLY: BASIS, DIFF_BASIS\n"
- " USE log_mod, ONLY: log_event, LOG_LEVEL_ERROR\n"
- " USE fs_continuity_mod\n"
- " USE mesh_mod, ONLY: mesh_type")
- assert expected1 in generated_code
+ assert "use function_space_mod, only : BASIS, DIFF_BASIS" in generated_code
+ assert "use log_mod, only : LOG_LEVEL_ERROR, log_event" in generated_code
+ assert "use fs_continuity_mod, only : W0\n" in generated_code
+ assert "use mesh_mod, only : mesh_type" in generated_code
expected2 = (
- " c_proxy(3) = c(3)%get_proxy()\n"
- " c_3_data => c_proxy(3)%data\n"
- " !\n"
- " ! Perform run-time checks\n"
- " !\n"
- " ! Check field function space and kernel metadata function spac"
+ " c_proxy(3) = c(3)%get_proxy()\n"
+ " c_3_data => c_proxy(3)%data\n"
+ "\n"
+ " ! Perform run-time checks\n"
+ " ! Check field function space and kernel metadata function spac"
"es are compatible\n"
- " IF (c(1)%which_function_space() /= W0) THEN\n"
- " CALL log_event(\"In alg 'any_space_example' invoke 'invoke_0"
+ " if (c(1)%which_function_space() /= W0) then\n"
+ " call log_event(\"In alg 'any_space_example' invoke 'invoke_0"
"_testkern_any_space_1_type', the field 'c' is passed to kernel 'test"
"kern_any_space_1_code' but its function space is not compatible with"
" the function space specified in the kernel metadata 'w0'.\", LOG_LE"
"VEL_ERROR)\n"
- " END IF\n"
- " ! Check that read-only fields are not modified\n"
- " IF (a_proxy%vspace%is_readonly()) THEN\n"
- " CALL log_event(\"In alg 'any_space_example' invoke 'invoke_0"
+ " end if\n"
+ "\n"
+ " ! Check that read-only fields are not modified\n"
+ " if (a_proxy%vspace%is_readonly()) then\n"
+ " call log_event(\"In alg 'any_space_example' invoke 'invoke_0"
"_testkern_any_space_1_type', field 'a' is on a read-only function sp"
"ace but is modified by kernel 'testkern_any_space_1_code'.\", LOG_LE"
"VEL_ERROR)\n"
- " END IF\n"
- " !\n"
- " ! Initialise number of layers\n")
+ " end if\n"
+ "\n"
+ " ! Initialise number of layers\n")
assert expected2 in generated_code
@@ -4025,49 +3996,48 @@ def test_dynruntimechecks_vector(tmpdir, monkeypatch):
assert LFRicBuild(tmpdir).code_compiles(psy)
generated_code = str(psy.gen)
- expected1 = (
- " USE testkern_coord_w0_2_mod, ONLY: testkern_coord_w0_2_code\n"
- " USE log_mod, ONLY: log_event, LOG_LEVEL_ERROR\n"
- " USE fs_continuity_mod\n"
- " USE mesh_mod, ONLY: mesh_type\n")
- assert expected1 in generated_code
+ assert ("use testkern_coord_w0_2_mod, only : testkern_coord_w0_2_code"
+ in generated_code)
+ assert "use log_mod, only : LOG_LEVEL_ERROR, log_event" in generated_code
+ assert "use fs_continuity_mod, only : W0\n" in generated_code
+ assert "use mesh_mod, only : mesh_type" in generated_code
expected2 = (
- " f1_proxy = f1%get_proxy()\n"
- " f1_data => f1_proxy%data\n"
- " !\n"
- " ! Perform run-time checks\n"
- " !\n"
- " ! Check field function space and kernel metadata function spac"
+ " f1_proxy = f1%get_proxy()\n"
+ " f1_data => f1_proxy%data\n"
+ "\n"
+ " ! Perform run-time checks\n"
+ " ! Check field function space and kernel metadata function spac"
"es are compatible\n"
- " IF (chi(1)%which_function_space() /= W0) THEN\n"
- " CALL log_event(\"In alg 'vector_field' invoke 'invoke_0_test"
+ " if (chi(1)%which_function_space() /= W0) then\n"
+ " call log_event(\"In alg 'vector_field' invoke 'invoke_0_test"
"kern_coord_w0_2_type', the field 'chi' is passed to kernel 'testkern"
"_coord_w0_2_code' but its function space is not compatible with the "
"function space specified in the kernel metadata 'w0'.\", "
"LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " IF (f1%which_function_space() /= W0) THEN\n"
- " CALL log_event(\"In alg 'vector_field' invoke 'invoke_0_test"
+ " end if\n"
+ " if (f1%which_function_space() /= W0) then\n"
+ " call log_event(\"In alg 'vector_field' invoke 'invoke_0_test"
"kern_coord_w0_2_type', the field 'f1' is passed to kernel 'testkern_"
"coord_w0_2_code' but its function space is not compatible with the "
"function space specified in the kernel metadata 'w0'.\", "
"LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " ! Check that read-only fields are not modified\n"
- " IF (chi_proxy(1)%vspace%is_readonly()) THEN\n"
- " CALL log_event(\"In alg 'vector_field' invoke 'invoke_0_test"
+ " end if\n"
+ "\n"
+ " ! Check that read-only fields are not modified\n"
+ " if (chi_proxy(1)%vspace%is_readonly()) then\n"
+ " call log_event(\"In alg 'vector_field' invoke 'invoke_0_test"
"kern_coord_w0_2_type', field 'chi' is on a read-only function space "
"but is modified by kernel 'testkern_coord_w0_2_code'.\", "
"LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " IF (f1_proxy%vspace%is_readonly()) THEN\n"
- " CALL log_event(\"In alg 'vector_field' invoke 'invoke_0_test"
+ " end if\n"
+ " if (f1_proxy%vspace%is_readonly()) then\n"
+ " call log_event(\"In alg 'vector_field' invoke 'invoke_0_test"
"kern_coord_w0_2_type', field 'f1' is on a read-only function space "
"but is modified by kernel 'testkern_coord_w0_2_code'.\", "
"LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " !\n"
- " ! Initialise number of layers\n")
+ " end if\n"
+ "\n"
+ " ! Initialise number of layers\n")
assert expected2 in generated_code
@@ -4087,70 +4057,68 @@ def test_dynruntimechecks_multikern(tmpdir, monkeypatch):
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
assert LFRicBuild(tmpdir).code_compiles(psy)
generated_code = str(psy.gen)
- expected1 = (
- " USE testkern_mod, ONLY: testkern_code\n"
- " USE log_mod, ONLY: log_event, LOG_LEVEL_ERROR\n"
- " USE fs_continuity_mod\n"
- " USE mesh_mod, ONLY: mesh_type\n")
- assert expected1 in generated_code
+ assert "use testkern_mod, only : testkern_code" in generated_code
+ assert "use log_mod, only : LOG_LEVEL_ERROR, log_event" in generated_code
+ assert "use fs_continuity_mod"
+ assert "use mesh_mod, only : mesh_type" in generated_code
expected2 = (
- " f3_proxy = f3%get_proxy()\n"
- " f3_data => f3_proxy%data\n"
- " !\n"
- " ! Perform run-time checks\n"
- " !\n"
- " ! Check field function space and kernel metadata function spac"
+ " f3_proxy = f3%get_proxy()\n"
+ " f3_data => f3_proxy%data\n"
+ "\n"
+ " ! Perform run-time checks\n"
+ " ! Check field function space and kernel metadata function spac"
"es are compatible\n"
- " IF (f1%which_function_space() /= W1) THEN\n"
- " CALL log_event(\"In alg 'multi_invoke' invoke 'invoke_0', th"
+ " if (f1%which_function_space() /= W1) then\n"
+ " call log_event(\"In alg 'multi_invoke' invoke 'invoke_0', th"
"e field 'f1' is passed to kernel 'testkern_code' but its function sp"
"ace is not compatible with the function space specified in the kerne"
"l metadata 'w1'.\", LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " IF (f2%which_function_space() /= W2) THEN\n"
- " CALL log_event(\"In alg 'multi_invoke' invoke 'invoke_0', th"
+ " end if\n"
+ " if (f2%which_function_space() /= W2) then\n"
+ " call log_event(\"In alg 'multi_invoke' invoke 'invoke_0', th"
"e field 'f2' is passed to kernel 'testkern_code' but its function sp"
"ace is not compatible with the function space specified in the kerne"
"l metadata 'w2'.\", LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " IF (m1%which_function_space() /= W2) THEN\n"
- " CALL log_event(\"In alg 'multi_invoke' invoke 'invoke_0', th"
+ " end if\n"
+ " if (m1%which_function_space() /= W2) then\n"
+ " call log_event(\"In alg 'multi_invoke' invoke 'invoke_0', th"
"e field 'm1' is passed to kernel 'testkern_code' but its function sp"
"ace is not compatible with the function space specified in the kerne"
"l metadata 'w2'.\", LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " IF (m2%which_function_space() /= W3) THEN\n"
- " CALL log_event(\"In alg 'multi_invoke' invoke 'invoke_0', th"
+ " end if\n"
+ " if (m2%which_function_space() /= W3) then\n"
+ " call log_event(\"In alg 'multi_invoke' invoke 'invoke_0', th"
"e field 'm2' is passed to kernel 'testkern_code' but its function sp"
"ace is not compatible with the function space specified in the kerne"
"l metadata 'w3'.\", LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " IF (f3%which_function_space() /= W2) THEN\n"
- " CALL log_event(\"In alg 'multi_invoke' invoke 'invoke_0', th"
+ " end if\n"
+ " if (f3%which_function_space() /= W2) then\n"
+ " call log_event(\"In alg 'multi_invoke' invoke 'invoke_0', th"
"e field 'f3' is passed to kernel 'testkern_code' but its function sp"
"ace is not compatible with the function space specified in the kerne"
"l metadata 'w2'.\", LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " IF (m2%which_function_space() /= W2) THEN\n"
- " CALL log_event(\"In alg 'multi_invoke' invoke 'invoke_0', th"
+ " end if\n"
+ " if (m2%which_function_space() /= W2) then\n"
+ " call log_event(\"In alg 'multi_invoke' invoke 'invoke_0', th"
"e field 'm2' is passed to kernel 'testkern_code' but its function sp"
"ace is not compatible with the function space specified in the kerne"
"l metadata 'w2'.\", LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " IF (m1%which_function_space() /= W3) THEN\n"
- " CALL log_event(\"In alg 'multi_invoke' invoke 'invoke_0', th"
+ " end if\n"
+ " if (m1%which_function_space() /= W3) then\n"
+ " call log_event(\"In alg 'multi_invoke' invoke 'invoke_0', th"
"e field 'm1' is passed to kernel 'testkern_code' but its function sp"
"ace is not compatible with the function space specified in the kerne"
"l metadata 'w3'.\", LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " ! Check that read-only fields are not modified\n"
- " IF (f1_proxy%vspace%is_readonly()) THEN\n"
- " CALL log_event(\"In alg 'multi_invoke' invoke 'invoke_0', fi"
+ " end if\n"
+ "\n"
+ " ! Check that read-only fields are not modified\n"
+ " if (f1_proxy%vspace%is_readonly()) then\n"
+ " call log_event(\"In alg 'multi_invoke' invoke 'invoke_0', fi"
"eld 'f1' is on a read-only function space but is modified by kernel "
"'testkern_code'.\", LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " !\n"
- " ! Initialise number of layers\n")
+ " end if\n"
+ "\n"
+ " ! Initialise number of layers\n")
assert expected2 in generated_code
@@ -4166,27 +4134,23 @@ def test_dynruntimechecks_builtins(tmpdir, monkeypatch):
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
assert LFRicBuild(tmpdir).code_compiles(psy)
generated_code = str(psy.gen)
- expected_code1 = (
- " USE log_mod, ONLY: log_event, LOG_LEVEL_ERROR\n"
- " USE fs_continuity_mod\n"
- " USE mesh_mod, ONLY: mesh_type\n"
- " TYPE(field_type), intent(in) :: f3, f1, f2\n")
- assert expected_code1 in generated_code
+ assert "use log_mod, only : LOG_LEVEL_ERROR, log_event" in generated_code
+ assert "use fs_continuity_mod\n"
+ assert "use mesh_mod, only : mesh_type" in generated_code
+ assert "type(field_type), intent(in) :: f3" in generated_code
expected_code2 = (
- " f2_proxy = f2%get_proxy()\n"
- " f2_data => f2_proxy%data\n"
- " !\n"
- " ! Perform run-time checks\n"
- " !\n"
- " ! Check field function space and kernel metadata function spac"
- "es are compatible\n"
- " ! Check that read-only fields are not modified\n"
- " IF (f3_proxy%vspace%is_readonly()) THEN\n"
- " CALL log_event(\"In alg 'single_invoke' invoke 'invoke_0', f"
+ " f2_proxy = f2%get_proxy()\n"
+ " f2_data => f2_proxy%data\n"
+ "\n"
+ " ! Perform run-time checks\n"
+ " ! Check that read-only fields are not modified\n"
+ " if (f3_proxy%vspace%is_readonly()) then\n"
+ " call log_event(\"In alg 'single_invoke' invoke 'invoke_0', f"
"ield 'f3' is on a read-only function space but is modified by kernel"
- " 'x_plus_y'.\", LOG_LEVEL_ERROR)\n" " END IF\n"
- " !\n"
- " ! Create a mesh object\n")
+ " 'x_plus_y'.\", LOG_LEVEL_ERROR)\n"
+ " end if\n"
+ "\n"
+ " ! Create a mesh object\n")
assert expected_code2 in generated_code
@@ -4206,52 +4170,49 @@ def test_dynruntimechecks_anydiscontinuous(tmpdir, monkeypatch):
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
assert LFRicBuild(tmpdir).code_compiles(psy)
generated_code = str(psy.gen)
- expected1 = (
- " USE testkern_any_discontinuous_space_op_1_mod, ONLY: testkern_"
- "any_discontinuous_space_op_1_code\n"
- " USE log_mod, ONLY: log_event, LOG_LEVEL_ERROR\n"
- " USE fs_continuity_mod\n"
- " USE mesh_mod, ONLY: mesh_type\n")
- assert expected1 in generated_code
+ assert ("use testkern_any_discontinuous_space_op_1_mod, only : testkern_"
+ "any_discontinuous_space_op_1_code") in generated_code
+ assert "use log_mod, only : LOG_LEVEL_ERROR, log_event" in generated_code
+ assert "use mesh_mod, only : mesh_type" in generated_code
expected2 = (
- " op4_proxy = op4%get_proxy()\n"
- " op4_local_stencil => op4_proxy%local_stencil\n"
- " !\n"
- " ! Perform run-time checks\n"
- " !\n"
- " ! Check field function space and kernel metadata function spac"
+ " op4_proxy = op4%get_proxy()\n"
+ " op4_local_stencil => op4_proxy%local_stencil\n"
+ "\n"
+ " ! Perform run-time checks\n"
+ " ! Check field function space and kernel metadata function spac"
"es are compatible\n"
- " IF (f1(1)%which_function_space() /= W3 .and. f1(1)%which_funct"
- "ion_space() /= WTHETA .and. f1(1)%which_function_space() /= W2V .and"
- ". f1(1)%which_function_space() /= W2VTRACE .and. f1(1)%which_funct"
- "ion_space() /= W2BROKEN) THEN\n"
- " CALL log_event(\"In alg 'any_discontinuous_space_op_example_"
+ " if (f1(1)%which_function_space() /= W3 .AND. f1(1)%which_funct"
+ "ion_space() /= WTHETA .AND. f1(1)%which_function_space() /= W2V .AND"
+ ". f1(1)%which_function_space() /= W2VTRACE .AND. f1(1)%which_funct"
+ "ion_space() /= W2BROKEN) then\n"
+ " call log_event(\"In alg 'any_discontinuous_space_op_example_"
"1' invoke 'invoke_0_testkern_any_discontinuous_space_op_1_type', the"
" field 'f1' is passed to kernel 'testkern_any_discontinuous_space_op"
"_1_code' but its function space is not compatible with the function "
"space specified in the kernel metadata 'any_discontinuous_space_1'."
"\", LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " IF (f2%which_function_space() /= W3 .and. f2%which_function_sp"
- "ace() /= WTHETA .and. f2%which_function_space() /= W2V .and. f2%whic"
- "h_function_space() /= W2VTRACE .and. f2%which_function_space() /= "
- "W2BROKEN) THEN\n"
- " CALL log_event(\"In alg 'any_discontinuous_space_op_example_"
+ " end if\n"
+ " if (f2%which_function_space() /= W3 .AND. f2%which_function_sp"
+ "ace() /= WTHETA .AND. f2%which_function_space() /= W2V .AND. f2%whic"
+ "h_function_space() /= W2VTRACE .AND. f2%which_function_space() /= "
+ "W2BROKEN) then\n"
+ " call log_event(\"In alg 'any_discontinuous_space_op_example_"
"1' invoke 'invoke_0_testkern_any_discontinuous_space_op_1_type', the"
" field 'f2' is passed to kernel 'testkern_any_discontinuous_space_op"
"_1_code' but its function space is not compatible with the function "
"space specified in the kernel metadata 'any_discontinuous_space_2'."
"\", LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " ! Check that read-only fields are not modified\n"
- " IF (f2_proxy%vspace%is_readonly()) THEN\n"
- " CALL log_event(\"In alg 'any_discontinuous_space_op_example_"
+ " end if\n"
+ "\n"
+ " ! Check that read-only fields are not modified\n"
+ " if (f2_proxy%vspace%is_readonly()) then\n"
+ " call log_event(\"In alg 'any_discontinuous_space_op_example_"
"1' invoke 'invoke_0_testkern_any_discontinuous_space_op_1_type', fie"
"ld 'f2' is on a read-only function space but is modified by kernel '"
"testkern_any_discontinuous_space_op_1_code'.\", LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " !\n"
- " ! Initialise number of layers\n")
+ " end if\n"
+ "\n"
+ " ! Initialise number of layers\n")
assert expected2 in generated_code
@@ -4271,54 +4232,51 @@ def test_dynruntimechecks_anyw2(tmpdir, monkeypatch):
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
assert LFRicBuild(tmpdir).code_compiles(psy)
generated_code = str(psy.gen)
- expected1 = (
- " USE testkern_multi_anyw2_mod, ONLY: testkern_multi_anyw2_code\n"
- " USE log_mod, ONLY: log_event, LOG_LEVEL_ERROR\n"
- " USE fs_continuity_mod\n"
- " USE mesh_mod, ONLY: mesh_type\n")
- assert expected1 in generated_code
+ assert ("use testkern_multi_anyw2_mod, only : testkern_multi_anyw2_code\n"
+ in generated_code)
+ assert "use log_mod, only : LOG_LEVEL_ERROR, log_event" in generated_code
expected2 = (
- " !\n"
- " ! Perform run-time checks\n"
- " !\n"
- " ! Check field function space and kernel metadata function spac"
+ "\n"
+ " ! Perform run-time checks\n"
+ " ! Check field function space and kernel metadata function spac"
"es are compatible\n"
- " IF (f1%which_function_space() /= W2 .and. f1%which_function_sp"
- "ace() /= W2H .and. f1%which_function_space() /= W2V .and. f1%which_f"
- "unction_space() /= W2BROKEN) THEN\n"
- " CALL log_event(\"In alg 'single_invoke_multi_anyw2' invoke '"
+ " if (f1%which_function_space() /= W2 .AND. f1%which_function_sp"
+ "ace() /= W2H .AND. f1%which_function_space() /= W2V .AND. f1%which_f"
+ "unction_space() /= W2BROKEN) then\n"
+ " call log_event(\"In alg 'single_invoke_multi_anyw2' invoke '"
"invoke_0_testkern_multi_anyw2_type', the field 'f1' is passed to ker"
"nel 'testkern_multi_anyw2_code' but its function space is not compat"
"ible with the function space specified in the kernel metadata 'any_w"
"2'.\", LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " IF (f2%which_function_space() /= W2 .and. f2%which_function_sp"
- "ace() /= W2H .and. f2%which_function_space() /= W2V .and. f2%which_f"
- "unction_space() /= W2BROKEN) THEN\n"
- " CALL log_event(\"In alg 'single_invoke_multi_anyw2' invoke '"
+ " end if\n"
+ " if (f2%which_function_space() /= W2 .AND. f2%which_function_sp"
+ "ace() /= W2H .AND. f2%which_function_space() /= W2V .AND. f2%which_f"
+ "unction_space() /= W2BROKEN) then\n"
+ " call log_event(\"In alg 'single_invoke_multi_anyw2' invoke '"
"invoke_0_testkern_multi_anyw2_type', the field 'f2' is passed to ker"
"nel 'testkern_multi_anyw2_code' but its function space is not compat"
"ible with the function space specified in the kernel metadata 'any_w"
"2'.\", LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " IF (f3%which_function_space() /= W2 .and. f3%which_function_sp"
- "ace() /= W2H .and. f3%which_function_space() /= W2V .and. f3%which_f"
- "unction_space() /= W2BROKEN) THEN\n"
- " CALL log_event(\"In alg 'single_invoke_multi_anyw2' invoke '"
+ " end if\n"
+ " if (f3%which_function_space() /= W2 .AND. f3%which_function_sp"
+ "ace() /= W2H .AND. f3%which_function_space() /= W2V .AND. f3%which_f"
+ "unction_space() /= W2BROKEN) then\n"
+ " call log_event(\"In alg 'single_invoke_multi_anyw2' invoke '"
"invoke_0_testkern_multi_anyw2_type', the field 'f3' is passed to ker"
"nel 'testkern_multi_anyw2_code' but its function space is not compat"
"ible with the function space specified in the kernel metadata 'any_w"
"2'.\", LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " ! Check that read-only fields are not modified\n"
- " IF (f1_proxy%vspace%is_readonly()) THEN\n"
- " CALL log_event(\"In alg 'single_invoke_multi_anyw2' invoke '"
+ " end if\n"
+ "\n"
+ " ! Check that read-only fields are not modified\n"
+ " if (f1_proxy%vspace%is_readonly()) then\n"
+ " call log_event(\"In alg 'single_invoke_multi_anyw2' invoke '"
"invoke_0_testkern_multi_anyw2_type', field 'f1' is on a read-only fu"
"nction space but is modified by kernel 'testkern_multi_anyw2_code'."
"\", LOG_LEVEL_ERROR)\n"
- " END IF\n"
- " !\n"
- " ! Initialise number of layers\n")
+ " end if\n"
+ "\n"
+ " ! Initialise number of layers\n")
assert expected2 in generated_code
@@ -4332,15 +4290,15 @@ def test_read_only_fields_hex(tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
generated_code = str(psy.gen)
expected = (
- " IF (f2_proxy(1)%is_dirty(depth=1)) THEN\n"
- " CALL f2_proxy(1)%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f2_proxy(2)%is_dirty(depth=1)) THEN\n"
- " CALL f2_proxy(2)%halo_exchange(depth=1)\n"
- " END IF\n"
- " IF (f2_proxy(3)%is_dirty(depth=1)) THEN\n"
- " CALL f2_proxy(3)%halo_exchange(depth=1)\n"
- " END IF\n")
+ " if (f2_proxy(1)%is_dirty(depth=1)) then\n"
+ " call f2_proxy(1)%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f2_proxy(2)%is_dirty(depth=1)) then\n"
+ " call f2_proxy(2)%halo_exchange(depth=1)\n"
+ " end if\n"
+ " if (f2_proxy(3)%is_dirty(depth=1)) then\n"
+ " call f2_proxy(3)%halo_exchange(depth=1)\n"
+ " end if\n")
assert expected in generated_code
@@ -4357,81 +4315,83 @@ def test_mixed_precision_args(tmpdir):
psy = PSyFactory(TEST_API, distributed_memory=True).create(invoke_info)
generated_code = str(psy.gen)
- expected = (
- " USE constants_mod, ONLY: r_tran, r_solver, r_phys, r_def, "
- "r_bl, i_def\n"
- " USE field_mod, ONLY: field_type, field_proxy_type\n"
- " USE r_solver_field_mod, ONLY: r_solver_field_type, "
- "r_solver_field_proxy_type\n"
- " USE r_tran_field_mod, ONLY: r_tran_field_type, "
- "r_tran_field_proxy_type\n"
- " USE r_bl_field_mod, ONLY: r_bl_field_type, "
- "r_bl_field_proxy_type\n"
- " USE r_phys_field_mod, ONLY: r_phys_field_type, "
- "r_phys_field_proxy_type\n"
- " USE operator_mod, ONLY: operator_type, operator_proxy_type\n"
- " USE r_solver_operator_mod, ONLY: r_solver_operator_type, "
- "r_solver_operator_proxy_type\n"
- " USE r_tran_operator_mod, ONLY: r_tran_operator_type, "
- "r_tran_operator_proxy_type\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0(scalar_r_def, field_r_def, operator_r_def, "
- "scalar_r_solver, field_r_solver, operator_r_solver, scalar_r_tran, "
- "field_r_tran, operator_r_tran, scalar_r_bl, field_r_bl, "
- "scalar_r_phys, field_r_phys)\n"
- " USE mixed_kernel_mod, ONLY: mixed_code\n"
- " USE mesh_mod, ONLY: mesh_type\n"
- " REAL(KIND=r_def), intent(in) :: scalar_r_def\n"
- " REAL(KIND=r_solver), intent(in) :: scalar_r_solver\n"
- " REAL(KIND=r_tran), intent(in) :: scalar_r_tran\n"
- " REAL(KIND=r_bl), intent(in) :: scalar_r_bl\n"
- " REAL(KIND=r_phys), intent(in) :: scalar_r_phys\n"
- " TYPE(field_type), intent(in) :: field_r_def\n"
- " TYPE(r_solver_field_type), intent(in) :: field_r_solver\n"
- " TYPE(r_tran_field_type), intent(in) :: field_r_tran\n"
- " TYPE(r_bl_field_type), intent(in) :: field_r_bl\n"
- " TYPE(r_phys_field_type), intent(in) :: field_r_phys\n"
- " TYPE(operator_type), intent(in) :: operator_r_def\n"
- " TYPE(r_solver_operator_type), intent(in) :: operator_r_solver\n"
- " TYPE(r_tran_operator_type), intent(in) :: operator_r_tran\n"
- " INTEGER(KIND=i_def) cell\n"
- " INTEGER(KIND=i_def) loop4_start, loop4_stop\n"
- " INTEGER(KIND=i_def) loop3_start, loop3_stop\n"
- " INTEGER(KIND=i_def) loop2_start, loop2_stop\n"
- " INTEGER(KIND=i_def) loop1_start, loop1_stop\n"
- " INTEGER(KIND=i_def) loop0_start, loop0_stop\n"
- " INTEGER(KIND=i_def) nlayers_field_r_bl, nlayers_field_r_def, "
- "nlayers_field_r_phys, nlayers_field_r_solver, nlayers_field_r_tran\n"
- " REAL(KIND=r_tran), pointer, dimension(:,:,:) :: "
- "operator_r_tran_local_stencil => null()\n"
- " TYPE(r_tran_operator_proxy_type) operator_r_tran_proxy\n"
- " REAL(KIND=r_solver), pointer, dimension(:,:,:) :: "
- "operator_r_solver_local_stencil => null()\n"
- " TYPE(r_solver_operator_proxy_type) operator_r_solver_proxy\n"
- " REAL(KIND=r_def), pointer, dimension(:,:,:) :: "
- "operator_r_def_local_stencil => null()\n"
- " TYPE(operator_proxy_type) operator_r_def_proxy\n"
- " REAL(KIND=r_phys), pointer, dimension(:) :: field_r_phys_data "
- "=> null()\n"
- " TYPE(r_phys_field_proxy_type) field_r_phys_proxy\n"
- " REAL(KIND=r_bl), pointer, dimension(:) :: field_r_bl_data => "
- "null()\n"
- " TYPE(r_bl_field_proxy_type) field_r_bl_proxy\n"
- " REAL(KIND=r_tran), pointer, dimension(:) :: field_r_tran_data "
- "=> null()\n"
- " TYPE(r_tran_field_proxy_type) field_r_tran_proxy\n"
- " REAL(KIND=r_solver), pointer, dimension(:) :: "
- "field_r_solver_data => null()\n"
- " TYPE(r_solver_field_proxy_type) field_r_solver_proxy\n"
- " REAL(KIND=r_def), pointer, dimension(:) :: field_r_def_data "
- "=> null()\n"
- " TYPE(field_proxy_type) field_r_def_proxy\n"
- " INTEGER(KIND=i_def), pointer :: map_w3(:,:) => null()\n"
- " INTEGER(KIND=i_def) ndf_w3, undf_w3, ndf_w0\n"
- " INTEGER(KIND=i_def) max_halo_depth_mesh\n"
- " TYPE(mesh_type), pointer :: mesh => null()\n")
- assert expected in generated_code
+ assert "use constants_mod\n" in generated_code
+ assert """
+ use field_mod, only : field_proxy_type, field_type
+ use operator_mod, only : operator_proxy_type, operator_type
+ use r_solver_field_mod, only : r_solver_field_proxy_type, r_solver_field_type
+ use r_solver_operator_mod, only : r_solver_operator_proxy_type, \
+r_solver_operator_type
+ use r_tran_field_mod, only : r_tran_field_proxy_type, r_tran_field_type
+ use r_tran_operator_mod, only : r_tran_operator_proxy_type, \
+r_tran_operator_type
+ use r_bl_field_mod, only : r_bl_field_proxy_type, r_bl_field_type
+ use r_phys_field_mod, only : r_phys_field_proxy_type, r_phys_field_type
+ implicit none
+ public
+
+ contains
+ subroutine invoke_0(scalar_r_def, field_r_def, operator_r_def, \
+scalar_r_solver, field_r_solver, operator_r_solver, scalar_r_tran, \
+field_r_tran, operator_r_tran, scalar_r_bl, field_r_bl, scalar_r_phys, \
+field_r_phys)
+ use mesh_mod, only : mesh_type
+ use mixed_kernel_mod, only : mixed_code
+ real(kind=r_def), intent(in) :: scalar_r_def
+ type(field_type), intent(in) :: field_r_def
+ type(operator_type), intent(in) :: operator_r_def
+ real(kind=r_solver), intent(in) :: scalar_r_solver
+ type(r_solver_field_type), intent(in) :: field_r_solver
+ type(r_solver_operator_type), intent(in) :: operator_r_solver
+ real(kind=r_tran), intent(in) :: scalar_r_tran
+ type(r_tran_field_type), intent(in) :: field_r_tran
+ type(r_tran_operator_type), intent(in) :: operator_r_tran
+ real(kind=r_bl), intent(in) :: scalar_r_bl
+ type(r_bl_field_type), intent(in) :: field_r_bl
+ real(kind=r_phys), intent(in) :: scalar_r_phys
+ type(r_phys_field_type), intent(in) :: field_r_phys
+ integer(kind=i_def) :: cell
+ integer(kind=i_def) :: loop0_start
+ integer(kind=i_def) :: loop0_stop
+ integer(kind=i_def) :: loop1_start
+ integer(kind=i_def) :: loop1_stop
+ integer(kind=i_def) :: loop2_start
+ integer(kind=i_def) :: loop2_stop
+ integer(kind=i_def) :: loop3_start
+ integer(kind=i_def) :: loop3_stop
+ integer(kind=i_def) :: loop4_start
+ integer(kind=i_def) :: loop4_stop
+ type(mesh_type), pointer :: mesh => null()
+ integer(kind=i_def) :: max_halo_depth_mesh
+ real(kind=r_def), pointer, dimension(:) :: field_r_def_data => null()
+ real(kind=r_solver), pointer, dimension(:) :: field_r_solver_data => null()
+ real(kind=r_tran), pointer, dimension(:) :: field_r_tran_data => null()
+ real(kind=r_bl), pointer, dimension(:) :: field_r_bl_data => null()
+ real(kind=r_phys), pointer, dimension(:) :: field_r_phys_data => null()
+ real(kind=r_def), pointer, dimension(:,:,:) :: \
+operator_r_def_local_stencil => null()
+ real(kind=r_solver), pointer, dimension(:,:,:) :: \
+operator_r_solver_local_stencil => null()
+ real(kind=r_tran), pointer, dimension(:,:,:) :: \
+operator_r_tran_local_stencil => null()
+ integer(kind=i_def) :: nlayers_field_r_def
+ integer(kind=i_def) :: nlayers_field_r_solver
+ integer(kind=i_def) :: nlayers_field_r_tran
+ integer(kind=i_def) :: nlayers_field_r_bl
+ integer(kind=i_def) :: nlayers_field_r_phys
+ integer(kind=i_def) :: ndf_w3
+ integer(kind=i_def) :: undf_w3
+ integer(kind=i_def) :: ndf_w0
+ integer(kind=i_def), pointer :: map_w3(:,:) => null()
+ type(field_proxy_type) :: field_r_def_proxy
+ type(r_solver_field_proxy_type) :: field_r_solver_proxy
+ type(r_tran_field_proxy_type) :: field_r_tran_proxy
+ type(r_bl_field_proxy_type) :: field_r_bl_proxy
+ type(r_phys_field_proxy_type) :: field_r_phys_proxy
+ type(operator_proxy_type) :: operator_r_def_proxy
+ type(r_solver_operator_proxy_type) :: operator_r_solver_proxy
+ type(r_tran_operator_proxy_type) :: operator_r_tran_proxy
+""" in generated_code
# Test compilation
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -4452,13 +4412,13 @@ def test_dynpsy_gen_container_routines(tmpdir):
# Search for the routine in the code_gen output
generated_code = str(psy.gen)
- searchstring = "SUBROUTINE new_routine("
+ searchstring = "subroutine new_routine("
assert generated_code.count(searchstring) == 1
# Make sure that, after a second code generation call, the routine still
# appears only once in the output
generated_code = str(psy.gen)
- searchstring = "SUBROUTINE new_routine("
+ searchstring = "subroutine new_routine("
assert generated_code.count(searchstring) == 1
# Test compilation
diff --git a/src/psyclone/tests/errors_test.py b/src/psyclone/tests/errors_test.py
index 373fab3274..c67e95a884 100644
--- a/src/psyclone/tests/errors_test.py
+++ b/src/psyclone/tests/errors_test.py
@@ -36,7 +36,6 @@
'''pytest tests for the errors module.'''
-from __future__ import absolute_import
import pytest
from psyclone.errors import LazyString, PSycloneError
diff --git a/src/psyclone/tests/exceptions_test.py b/src/psyclone/tests/exceptions_test.py
index db33751d0e..aca38a537e 100644
--- a/src/psyclone/tests/exceptions_test.py
+++ b/src/psyclone/tests/exceptions_test.py
@@ -35,8 +35,6 @@
''' Test exception classes to ensure consistent __repr__ & __str__ methods. '''
-from __future__ import absolute_import
-
import pkgutil
import inspect
import importlib
diff --git a/src/psyclone/tests/f2pygen_test.py b/src/psyclone/tests/f2pygen_test.py
deleted file mode 100644
index ab6a38f52e..0000000000
--- a/src/psyclone/tests/f2pygen_test.py
+++ /dev/null
@@ -1,1548 +0,0 @@
-# -----------------------------------------------------------------------------
-# BSD 3-Clause License
-#
-# Copyright (c) 2017-2025, Science and Technology Facilities Council
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are met:
-#
-# * Redistributions of source code must retain the above copyright notice, this
-# list of conditions and the following disclaimer.
-#
-# * Redistributions in binary form must reproduce the above copyright notice,
-# this list of conditions and the following disclaimer in the documentation
-# and/or other materials provided with the distribution.
-#
-# * Neither the name of the copyright holder nor the names of its
-# contributors may be used to endorse or promote products derived from
-# this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
-# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
-# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
-# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
-# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
-# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
-# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
-# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
-# POSSIBILITY OF SUCH DAMAGE.
-# -----------------------------------------------------------------------------
-# Authors: R. W. Ford, A. R. Porter, S. Siso and N. Nobre, STFC Daresbury Lab
-
-''' Tests for the f2pygen module of PSyclone '''
-
-import pytest
-from psyclone.configuration import Config
-from psyclone.f2pygen import (
- adduse, AssignGen, AllocateGen, BaseGen, CallGen, CharDeclGen, CommentGen,
- DeallocateGen, DeclGen, DirectiveGen, DoGen, IfThenGen, ImplicitNoneGen,
- ModuleGen, PSyIRGen, SelectionGen, SubroutineGen, TypeDeclGen, UseGen)
-from psyclone.errors import GenerationError, InternalError
-from psyclone.psyir.nodes import Node, Return
-from psyclone.tests.utilities import Compile, count_lines, line_number
-
-# Fortran we have to add to some of the generated code in order to
-# perform compilation checks.
-TYPEDECL = '''\
-type :: field_type
- integer :: halo_dirty
-end type field_type
-'''
-
-
-def test_decl_no_replication_scalars():
- '''Check that the same scalar variable will only get declared once in
- a module and a subroutine.
-
- '''
- variable_name = "arg_name"
- for datatype in DeclGen.SUPPORTED_TYPES:
- module = ModuleGen(name="testmodule")
- module.add(DeclGen(module, datatype=datatype,
- entity_decls=[variable_name]))
- module.add(DeclGen(module, datatype=datatype,
- entity_decls=[variable_name]))
- subroutine = SubroutineGen(module, name="testsubroutine")
- module.add(subroutine)
- subroutine.add(DeclGen(subroutine, datatype=datatype,
- entity_decls=[variable_name]))
- subroutine.add(DeclGen(subroutine, datatype=datatype,
- entity_decls=[variable_name]))
- generated_code = str(module.root)
- assert generated_code.count(variable_name) == 2
-
-
-def test_decl_no_replication_types():
- '''Check that the same derived-type variable will only get declared
- once in a module and a subroutine.
-
- '''
- variable_name = "arg_name"
- datatype = "field_type"
- module = ModuleGen(name="testmodule")
- module.add(TypeDeclGen(module, datatype=datatype,
- entity_decls=[variable_name]))
- module.add(TypeDeclGen(module, datatype=datatype,
- entity_decls=[variable_name]))
- subroutine = SubroutineGen(module, name="testsubroutine")
- module.add(subroutine)
- subroutine.add(TypeDeclGen(subroutine, datatype=datatype,
- entity_decls=[variable_name]))
- subroutine.add(TypeDeclGen(subroutine, datatype=datatype,
- entity_decls=[variable_name]))
- generated_code = str(module.root)
- assert generated_code.count(variable_name) == 2
-
-
-def test_decl_no_replication_char():
- '''Check that the character variable will only get declared once in a
- module and a subroutine.
-
- '''
- variable_name = "arg_name"
- module = ModuleGen(name="testmodule")
- module.add(CharDeclGen(module, entity_decls=[variable_name]))
- module.add(CharDeclGen(module, entity_decls=[variable_name]))
- subroutine = SubroutineGen(module, name="testsubroutine")
- module.add(subroutine)
- subroutine.add(CharDeclGen(subroutine, entity_decls=[variable_name]))
- subroutine.add(CharDeclGen(subroutine, entity_decls=[variable_name]))
- generated_code = str(module.root)
- assert generated_code.count(variable_name) == 2
-
-
-def test_subroutine_var_with_implicit_none():
- ''' test that a variable is added after an implicit none
- statement in a subroutine'''
- module = ModuleGen(name="testmodule")
- subroutine = SubroutineGen(module, name="testsubroutine",
- implicitnone=True)
- module.add(subroutine)
- subroutine.add(DeclGen(subroutine, datatype="integer",
- entity_decls=["var1"]))
- idx_var = line_number(subroutine.root, "INTEGER var1")
- idx_imp_none = line_number(subroutine.root, "IMPLICIT NONE")
- print(str(module.root))
-    assert idx_var - idx_imp_none == 1, \
-        "variable declaration must be after implicit none"
-
-
-def test_subroutine_var_intent_in_with_directive():
- ''' test that a variable declared as intent in is added before
- a directive in a subroutine'''
- module = ModuleGen(name="testmodule")
- subroutine = SubroutineGen(module, name="testsubroutine",
- implicitnone=False)
- module.add(subroutine)
- subroutine.add(DirectiveGen(subroutine, "omp", "begin",
- "parallel", ""))
- subroutine.add(DeclGen(subroutine, datatype="integer",
- intent="in", entity_decls=["var1"]))
- idx_par = line_number(subroutine.root, "!$omp parallel")
- idx_var = line_number(subroutine.root, "INTEGER, intent(in) :: var1")
- assert idx_par - idx_var == 1, \
- "variable declaration must be before directive"
-
-
-def test_if():
-    ''' Check that an if gets created successfully. '''
- module = ModuleGen(name="testmodule")
- clause = "a < b"
- fortran_if = IfThenGen(module, clause)
- module.add(fortran_if)
- lines = str(module.root).splitlines()
- assert "IF (" + clause + ") THEN" in lines[3]
- assert "END IF" in lines[4]
-
-
-def test_if_content():
- ''' Check that the content of an if gets created successfully. '''
- module = ModuleGen(name="testmodule")
- clause = "a < b"
- if_statement = IfThenGen(module, clause)
- if_statement.add(CommentGen(if_statement, "HELLO"))
- module.add(if_statement)
- lines = str(module.root).splitlines()
- assert "IF (" + clause + ") THEN" in lines[3]
- assert "!HELLO" in lines[4]
- assert "END IF" in lines[5]
-
-
-def test_if_with_position_before():
- ''' Check that IfThenGen.add() correctly uses the position
- argument if supplied. '''
- module = ModuleGen(name="testmodule")
- clause = "a < b"
- if_statement = IfThenGen(module, clause)
- com1 = CommentGen(if_statement, "HELLO")
- if_statement.add(com1)
- if_statement.add(CommentGen(if_statement, "GOODBYE"),
- position=["before", com1.root])
- module.add(if_statement)
- lines = str(module.root).splitlines()
- assert "IF (" + clause + ") THEN" in lines[3]
- assert "!GOODBYE" in lines[4]
- assert "!HELLO" in lines[5]
- assert "END IF" in lines[6]
-
-
-def test_if_with_position_append():
- ''' Check that IfThenGen.add() correctly uses the position
- argument when *append* is specified. '''
- module = ModuleGen(name="testmodule")
- clause = "a < b"
- if_statement = IfThenGen(module, clause)
- com1 = CommentGen(if_statement, "HELLO")
- if_statement.add(com1)
- if_statement.add(CommentGen(if_statement, "GOODBYE"),
- position=["append"])
- module.add(if_statement)
- print(str(module.root))
- lines = str(module.root).splitlines()
- assert "IF (" + clause + ") THEN" in lines[3]
- assert "!HELLO" in lines[4]
- assert "!GOODBYE" in lines[5]
- assert "END IF" in lines[6]
-
-
-def test_if_add_use():
- ''' Check that IfThenGen.add() correctly handles the case
- when it is passed a UseGen object '''
- module = ModuleGen(name="testmodule")
- clause = "a < b"
- if_statement = IfThenGen(module, clause)
- if_statement.add(CommentGen(if_statement, "GOODBYE"))
- if_statement.add(UseGen(if_statement, name="dibna"))
- module.add(if_statement)
- print(str(module.root))
- use_line = line_number(module.root, "USE dibna")
- if_line = line_number(module.root, "IF (" + clause + ") THEN")
- # The use statement must come before the if..then block
- assert use_line < if_line
-
-
-def test_comment():
-    ''' check that a comment gets created successfully. '''
- module = ModuleGen(name="testmodule")
- content = "HELLO"
- comment = CommentGen(module, content)
- module.add(comment)
- lines = str(module.root).splitlines()
- assert "!" + content in lines[3]
-
-
-def test_add_before():
- ''' add the new code before a particular object '''
- module = ModuleGen(name="testmodule")
- subroutine = SubroutineGen(module, name="testsubroutine")
- module.add(subroutine)
- loop = DoGen(subroutine, "it", "1", "10")
- subroutine.add(loop)
- call = CallGen(subroutine, "testcall")
- subroutine.add(call, position=["before", loop.root])
- lines = str(module.root).splitlines()
- # the call should be inserted before the loop
- print(lines)
- assert "SUBROUTINE testsubroutine" in lines[3]
- assert "CALL testcall" in lines[4]
- assert "DO it=1,10" in lines[5]
-
-
-def test_mod_vanilla():
- ''' Check that we can create a basic, vanilla module '''
- module = ModuleGen()
- lines = str(module.root).splitlines()
- assert "MODULE" in lines[0]
- assert "IMPLICIT NONE" in lines[1]
- assert "CONTAINS" in lines[2]
- assert "END MODULE" in lines[3]
-
-
-def test_mod_name():
- ''' Check that we can create a module with a specified name '''
- name = "test"
- module = ModuleGen(name=name)
- assert "MODULE " + name in str(module.root)
-
-
-def test_mod_no_contains():
-    ''' Check that we can switch off the generation of a CONTAINS
- statement within a module '''
- module = ModuleGen(name="test", contains=False)
- assert "CONTAINS" not in str(module.root)
-
-
-def test_mod_no_implicit_none():
- ''' Check that we can switch off the generation of IMPLICIT NONE
- within a module '''
- module = ModuleGen(name="test", implicitnone=False)
- assert "IMPLICIT NONE" not in str(module.root)
-
-
-def test_invalid_add_raw_subroutine_argument():
- ''' test that an error is thrown if the wrong type of object
- is passed to the add_raw_subroutine method '''
- module = ModuleGen(name="test")
- invalid_type = "string"
- with pytest.raises(Exception):
- module.add_raw_subroutine(invalid_type)
-
-
-def test_allocate_arg_str():
-    '''check that an allocate gets created successfully with content being
- a string.'''
- module = ModuleGen(name="testmodule")
- content = "hello"
- allocate = AllocateGen(module, content)
- module.add(allocate)
- lines = str(module.root).splitlines()
- assert "ALLOCATE (" + content + ")" in lines[3]
-
-
-def test_allocate_mold():
-    '''check that an allocate gets created successfully with a
- mold parameter.'''
- module = ModuleGen(name="testmodule")
- allocate = AllocateGen(module, "hello", mold="abc")
- module.add(allocate)
- lines = str(module.root).splitlines()
- assert "ALLOCATE (hello, mold=abc)" in lines[3]
-
-
-def test_allocate_arg_list():
-    '''check that an allocate gets created successfully with content being
- a list.'''
- module = ModuleGen(name="testmodule")
- content = ["hello", "how", "are", "you"]
- content_str = ""
- for idx, name in enumerate(content):
- content_str += name
- if idx+1 < len(content):
- content_str += ", "
- allocate = AllocateGen(module, content)
- module.add(allocate)
- lines = str(module.root).splitlines()
- assert "ALLOCATE (" + content_str + ")" in lines[3]
-
-
-def test_allocate_incorrect_arg_type():
- '''check that an allocate raises an error if an unknown type is
- passed.'''
- module = ModuleGen(name="testmodule")
- content = 3
- with pytest.raises(RuntimeError):
- _ = AllocateGen(module, content)
-
-
-def test_deallocate_arg_str():
-    '''check that a deallocate gets created successfully with content
- being a str.'''
- module = ModuleGen(name="testmodule")
- content = "goodbye"
- deallocate = DeallocateGen(module, content)
- module.add(deallocate)
- lines = str(module.root).splitlines()
- assert "DEALLOCATE (" + content + ")" in lines[3]
-
-
-def test_deallocate_arg_list():
-    '''check that a deallocate gets created successfully with content
- being a list.'''
- module = ModuleGen(name="testmodule")
- content = ["and", "now", "the", "end", "is", "near"]
- content_str = ""
- for idx, name in enumerate(content):
- content_str += name
- if idx+1 < len(content):
- content_str += ", "
- deallocate = DeallocateGen(module, content)
- module.add(deallocate)
- lines = str(module.root).splitlines()
- assert "DEALLOCATE (" + content_str + ")" in lines[3]
-
-
-def test_deallocate_incorrect_arg_type():
- '''check that a deallocate raises an error if an unknown type is
- passed.'''
- module = ModuleGen(name="testmodule")
- content = 3
- with pytest.raises(RuntimeError):
- _ = DeallocateGen(module, content)
-
-
-def test_imp_none_in_module():
- ''' test that implicit none can be added to a module in the
- correct location'''
- module = ModuleGen(name="testmodule", implicitnone=False)
- module.add(ImplicitNoneGen(module))
- in_idx = line_number(module.root, "IMPLICIT NONE")
- cont_idx = line_number(module.root, "CONTAINS")
- assert in_idx > -1, "IMPLICIT NONE not found"
- assert cont_idx > -1, "CONTAINS not found"
- assert cont_idx - in_idx == 1, "CONTAINS is not on the line after" +\
- " IMPLICIT NONE"
-
-
-def test_imp_none_in_module_with_decs():
- ''' test that implicit none is added before any declaration
- statements in a module when auto (the default) is used for
- insertion '''
- module = ModuleGen(name="testmodule", implicitnone=False)
- module.add(DeclGen(module, datatype="integer",
- entity_decls=["var1"]))
- module.add(TypeDeclGen(module, datatype="my_type",
- entity_decls=["type1"]))
- module.add(ImplicitNoneGen(module))
- in_idx = line_number(module.root, "IMPLICIT NONE")
- assert in_idx == 1
-
-
-def test_imp_none_in_module_with_use_and_decs():
- ''' test that implicit none is added after any use statements
- and before any declarations in a module when auto (the
- default) is used for insertion'''
- module = ModuleGen(name="testmodule", implicitnone=False)
- module.add(DeclGen(module, datatype="integer",
- entity_decls=["var1"]))
- module.add(TypeDeclGen(module, datatype="my_type",
- entity_decls=["type1"]))
- module.add(UseGen(module, "fred"))
- module.add(ImplicitNoneGen(module))
- in_idx = line_number(module.root, "IMPLICIT NONE")
- assert in_idx == 2
-
-
-def test_imp_none_in_module_with_use_and_decs_and_comments():
- ''' test that implicit none is added after any use statements
- and before any declarations in a module in the presence of
- comments when auto (the default) is used for insertion'''
- module = ModuleGen(name="testmodule", implicitnone=False)
- module.add(DeclGen(module, datatype="integer",
- entity_decls=["var1"]))
- module.add(TypeDeclGen(module, datatype="my_type",
- entity_decls=["type1"]))
- module.add(UseGen(module, "fred"))
- for idx in [0, 1, 2, 3]:
- module.add(CommentGen(module, " hello "+str(idx)),
- position=["before_index", 2*idx])
- module.add(ImplicitNoneGen(module))
- in_idx = line_number(module.root, "IMPLICIT NONE")
- assert in_idx == 3
-
-
-def test_imp_none_in_module_already_exists():
- ''' test that implicit none is not added to a module when one
- already exists'''
- module = ModuleGen(name="testmodule", implicitnone=True)
- module.add(ImplicitNoneGen(module))
- count = count_lines(module.root, "IMPLICIT NONE")
- print(str(module.root))
- assert count == 1, \
- "There should only be one instance of IMPLICIT NONE"
-
-
-def test_imp_none_in_subroutine():
- ''' test that implicit none can be added to a subroutine '''
- module = ModuleGen(name="testmodule")
- subroutine = SubroutineGen(module, name="testsubroutine")
- module.add(subroutine)
- subroutine.add(ImplicitNoneGen(subroutine))
- assert 'IMPLICIT NONE' in str(subroutine.root)
-
-
-def test_imp_none_in_subroutine_with_decs():
- ''' test that implicit none is added before any declaration
- statements in a subroutine when auto (the default) is used for
- insertion '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- sub.add(DeclGen(sub, datatype="integer",
- entity_decls=["var1"]))
- sub.add(TypeDeclGen(sub, datatype="my_type",
- entity_decls=["type1"]))
- sub.add(ImplicitNoneGen(module))
- in_idx = line_number(sub.root, "IMPLICIT NONE")
- assert in_idx == 1
-
-
-def test_imp_none_in_subroutine_with_use_and_decs():
- ''' test that implicit none is added after any use statements
- and before any declarations in a subroutine when auto (the
- default) is used for insertion'''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- sub.add(DeclGen(sub, datatype="integer",
- entity_decls=["var1"]))
- sub.add(TypeDeclGen(sub, datatype="my_type",
- entity_decls=["type1"]))
- sub.add(UseGen(sub, "fred"))
- sub.add(ImplicitNoneGen(sub))
- in_idx = line_number(sub.root, "IMPLICIT NONE")
- assert in_idx == 2
-
-
-def test_imp_none_in_subroutine_with_use_and_decs_and_comments():
- ''' test that implicit none is added after any use statements
- and before any declarations in a subroutine in the presence of
- comments when auto (the default) is used for insertion'''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- sub.add(DeclGen(sub, datatype="integer",
- entity_decls=["var1"]))
- sub.add(TypeDeclGen(sub, datatype="my_type",
- entity_decls=["type1"]))
- sub.add(UseGen(sub, "fred"))
- for idx in [0, 1, 2, 3]:
- sub.add(CommentGen(sub, " hello "+str(idx)),
- position=["before_index", 2*idx])
- sub.add(ImplicitNoneGen(sub))
- in_idx = line_number(sub.root, "IMPLICIT NONE")
- assert in_idx == 3
-
-
-def test_imp_none_in_subroutine_already_exists():
- ''' test that implicit none is not added to a subroutine when
- one already exists'''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine", implicitnone=True)
- module.add(sub)
- sub.add(ImplicitNoneGen(sub))
- count = count_lines(sub.root, "IMPLICIT NONE")
- assert count == 1, \
- "There should only be one instance of IMPLICIT NONE"
-
-
-def test_imp_none_exception_if_wrong_parent():
- ''' test that an exception is thrown if implicit none is added
- and the parent is not a module or a subroutine '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- dogen = DoGen(sub, "i", "1", "10")
- sub.add(dogen)
- with pytest.raises(Exception):
- dogen.add(ImplicitNoneGen(dogen))
-
-
-def test_subgen_implicit_none_false():
- ''' test that implicit none is not added to the subroutine if
- not requested '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine", implicitnone=False)
- module.add(sub)
- count = count_lines(sub.root, "IMPLICIT NONE")
- assert count == 0, "IMPLICIT NONE SHOULD NOT EXIST"
-
-
-def test_subgen_implicit_none_true():
- ''' test that implicit none is added to the subroutine if
- requested '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine", implicitnone=True)
- module.add(sub)
- count = count_lines(sub.root, "IMPLICIT NONE")
- assert count == 1, "IMPLICIT NONE SHOULD EXIST"
-
-
-def test_subgen_implicit_none_default():
- ''' test that implicit none is not added to the subroutine by
- default '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- count = count_lines(sub.root, "IMPLICIT NONE")
- assert count == 0, "IMPLICIT NONE SHOULD NOT EXIST BY DEFAULT"
-
-
-def test_subgen_args():
- ''' Test that the args property works as expected '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine",
- args=["arg1", "arg2"])
- my_args = sub.args
- assert len(my_args) == 2
-
-
-def test_directive_wrong_type():
- ''' Check that we raise an error if we request a Directive of
- unrecognised type '''
- parent = Node()
- with pytest.raises(RuntimeError) as err:
- _ = DirectiveGen(parent,
- "some_dir_type", "begin", "do",
- "schedule(static)")
- assert "unsupported directive language" in str(err.value)
-
-
-def test_ompdirective_wrong():
- ''' Check that we raise an error if we request an OMP Directive of
- unrecognised type '''
- parent = Node()
- with pytest.raises(RuntimeError) as err:
- _ = DirectiveGen(parent,
- "omp", "begin", "dosomething",
- "schedule(static)")
- assert "unrecognised directive type" in str(err.value)
-
-
-def test_ompdirective_wrong_posn():
- ''' Check that we raise an error if we request an OMP Directive with
- an invalid position '''
- parent = Node()
- with pytest.raises(RuntimeError) as err:
- _ = DirectiveGen(parent,
- "omp", "start", "do",
- "schedule(static)")
- assert "unrecognised position 'start'" in str(err.value)
-
-
-def test_ompdirective_type():
- ''' Check that we can query the type of an OMP Directive '''
- parent = Node()
- dirgen = DirectiveGen(parent,
- "omp", "begin", "do",
- "schedule(static)")
- ompdir = dirgen.root
- assert ompdir.type == "do"
-
-
-def test_basegen_add_auto():
- ''' Check that attempting to call add on BaseGen raises an error if
- position is "auto"'''
- parent = Node()
- bgen = BaseGen(parent, parent)
- obj = Node()
- with pytest.raises(Exception) as err:
- bgen.add(obj, position=['auto'])
- assert "auto option must be implemented by the sub" in str(err.value)
-
-
-def test_basegen_add_invalid_posn():
- '''Check that attempting to call add on BaseGen with an invalid
- position argument raises an error'''
- parent = Node()
- bgen = BaseGen(parent, parent)
- obj = Node()
- with pytest.raises(Exception) as err:
- bgen.add(obj, position=['wrong'])
- assert "supported positions are ['append', 'first'" in str(err.value)
-
-
-def test_basegen_append():
- '''Check that we can append an object to the tree'''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- sub.add(DeclGen(sub, datatype="integer",
- entity_decls=["var1"]))
- sub.add(CommentGen(sub, " hello"), position=["append"])
- cindex = line_number(sub.root, "hello")
- assert cindex == 3
-
-
-def test_basegen_append_default():
-    ''' Check that if no position argument is supplied to BaseGen.add()
-    then it defaults to appending '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- BaseGen.add(sub, DeclGen(sub, datatype="integer",
- entity_decls=["var1"]))
- BaseGen.add(sub, CommentGen(sub, " hello"))
- cindex = line_number(sub.root, "hello")
- assert cindex == 3
-
-
-def test_basegen_first():
- '''Check that we can insert an object as the first child'''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- sub.add(DeclGen(sub, datatype="integer",
- entity_decls=["var1"]))
- sub.add(CommentGen(sub, " hello"), position=["first"])
- cindex = line_number(sub.root, "hello")
- assert cindex == 1
-
-
-def test_basegen_after_index():
- '''Check that we can insert an object using "after_index"'''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- sub.add(DeclGen(sub, datatype="integer",
- entity_decls=["var1"]))
- sub.add(DeclGen(sub, datatype="integer",
- entity_decls=["var2"]))
- sub.add(CommentGen(sub, " hello"), position=["after_index", 1])
- # The code checked by line_number() *includes* the SUBROUTINE
- # statement (which is obviously not a child of the SubroutineGen
- # object) and therefore the index it returns is 1 greater than we
- # might expect.
- assert line_number(sub.root, "hello") == 3
-
-
-def test_basegen_before_error():
- '''Check that we raise an error when attempting to insert an object
- before another object that is not present in the tree'''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- sub.add(DeclGen(sub, datatype="integer",
- entity_decls=["var1"]))
- sub.add(DeclGen(sub, datatype="integer",
- entity_decls=["var2"]))
- # Create an object but do not add it as a child of sub
- dgen = DeclGen(sub, datatype="real",
- entity_decls=["rvar1"])
- # Try to add an object before the orphan dgen
- with pytest.raises(RuntimeError) as err:
- sub.add(CommentGen(sub, " hello"), position=["before", dgen])
- assert "Failed to find supplied object" in str(err.value)
-
-
-def test_basegen_last_declaration_no_vars():
- '''Check that we raise an error when requesting the position of the
- last variable declaration if we don't have any variables'''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- # Request the position of the last variable declaration
- # even though we haven't got any
- with pytest.raises(RuntimeError) as err:
- sub.last_declaration()
- assert "no variable declarations found" in str(err.value)
-
-
-def test_basegen_start_parent_loop_dbg(capsys):
- '''Check the debug option to the start_parent_loop method'''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- loop = DoGen(sub, "it", "1", "10")
- sub.add(loop)
- call = CallGen(loop, "testcall")
- loop.add(call)
- call.start_parent_loop(debug=True)
- out, _ = capsys.readouterr()
- print(out)
- expected = ("Parent is a do loop so moving to the parent\n"
- "The type of the current node is now \n"
- "The type of parent is \n"
- "Finding the loops position in its parent ...\n"
- "The loop's index is 0\n")
- assert expected in out
-
-
-def test_basegen_start_parent_loop_not_first_child_dbg(capsys):
- '''Check the debug option to the start_parent_loop method when the loop
- is not the first child of the subroutine'''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- call0 = CallGen(sub, "testcall")
- sub.add(call0)
- loop = DoGen(sub, "it", "1", "10")
- sub.add(loop)
- call = CallGen(loop, "testcall")
- loop.add(call)
- call.start_parent_loop(debug=True)
- out, _ = capsys.readouterr()
- print(out)
- expected = ("Parent is a do loop so moving to the parent\n"
- "The type of the current node is now \n"
- "The type of parent is \n"
- "Finding the loops position in its parent ...\n"
- "The loop's index is 1\n")
- assert expected in out
-
-
-def test_basegen_start_parent_loop_omp_begin_dbg(capsys):
- '''Check the debug option to the start_parent_loop method when we have
- an OpenMP begin directive'''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- dgen = DirectiveGen(sub, "omp", "begin", "do", "schedule(static)")
- sub.add(dgen)
- loop = DoGen(sub, "it", "1", "10")
- sub.add(loop)
- call = CallGen(loop, "testcall")
- loop.add(call)
- call.start_parent_loop(debug=True)
- out, _ = capsys.readouterr()
- print(out)
- expected = ("Parent is a do loop so moving to the parent\n"
- "The type of the current node is now \n"
- "The type of parent is \n"
- "Finding the loops position in its parent ...\n"
- "The loop's index is 1\n"
- "The type of the object at the index is \n"
- "If preceding node is a directive then move back one\n"
- "preceding node is a directive so find out what type ...\n")
- assert expected in out
-
-
-def test_basegen_start_parent_loop_omp_end_dbg(capsys):
- '''Check the debug option to the start_parent_loop method when we have
- an OpenMP end directive'''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- dgen = DirectiveGen(sub, "omp", "end", "do", "")
- sub.add(dgen)
- loop = DoGen(sub, "it", "1", "10")
- sub.add(loop)
- call = CallGen(loop, "testcall")
- loop.add(call)
- call.start_parent_loop(debug=True)
- out, _ = capsys.readouterr()
- print(out)
- expected = ("Parent is a do loop so moving to the parent\n"
- "The type of the current node is now \n"
- "The type of parent is \n"
- "Finding the loops position in its parent ...\n"
- "The loop's index is 1\n"
- "The type of the object at the index is \n"
- "If preceding node is a directive then move back one\n"
- "preceding node is a directive so find out what type ...\n")
-
- assert expected in out
-
-
-def test_basegen_start_parent_loop_no_loop_dbg():
- '''Check the debug option to the start_parent_loop method when we have
- no loop'''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- dgen = DirectiveGen(sub, "omp", "end", "do", "")
- sub.add(dgen)
- call = CallGen(sub, name="testcall", args=["a", "b"])
- sub.add(call)
- with pytest.raises(RuntimeError) as err:
- call.start_parent_loop(debug=True)
- assert "This node has no enclosing Do loop" in str(err.value)
-
-
-def test_progunitgen_multiple_generic_use():
- '''Check that we correctly handle the case where duplicate use statements
- are added'''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- sub.add(UseGen(sub, name="fred"))
- sub.add(UseGen(sub, name="fred"))
- assert count_lines(sub.root, "USE fred") == 1
-
-
-def test_progunitgen_multiple_use1():
- '''Check that we correctly handle the case where duplicate use statements
- are added but one is specific'''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- sub.add(UseGen(sub, name="fred"))
- sub.add(UseGen(sub, name="fred", only=True, funcnames=["astaire"]))
- assert count_lines(sub.root, "USE fred") == 1
-
-
-def test_progunitgen_multiple_use2():
- '''Check that we correctly handle the case where the same module
- appears in two use statements but, because the first use is
- specific, the second, generic use is included.
-
- '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- sub.add(UseGen(sub, name="fred", only=True, funcnames=["astaire"]))
- sub.add(UseGen(sub, name="fred"))
- assert count_lines(sub.root, "USE fred") == 2
-
-
-def test_progunit_multiple_use3():
- '''Check that we correctly handle the case where the same module is
-    specified in two UseGen objects, both of which are
- specific and they have overlapping variable names.
-
- '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- funcnames = ["a", "b", "c"]
- sub.add(UseGen(sub, name="fred", only=True, funcnames=funcnames))
- funcnames = ["c", "d"]
- sub.add(UseGen(sub, name="fred", only=True, funcnames=funcnames))
- gen = str(sub.root)
- expected = (
- " USE fred, ONLY: d\n"
- " USE fred, ONLY: a, b, c")
- assert expected in gen
- assert count_lines(sub.root, "USE fred") == 2
- # ensure that the input list does not get modified
- assert funcnames == ["c", "d"]
-
-
-def test_adduse_empty_only():
- ''' Test that the adduse module method works correctly when we specify
- that we want it to be specific but then don't provide a list of
- entities for the only qualifier '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- # Add a use statement with only=True but an empty list of entities
- adduse("fred", sub.root, only=True, funcnames=[])
- assert count_lines(sub.root, "USE fred") == 1
- assert count_lines(sub.root, "USE fred, only") == 0
-
-
-def test_adduse():
- ''' Test that the adduse module method works correctly when we use a
- call object as our starting point '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- call = CallGen(sub, name="testcall", args=["a", "b"])
- sub.add(call)
- adduse("fred", call.root, only=True, funcnames=["astaire"])
- gen = str(sub.root)
- expected = (" SUBROUTINE testsubroutine()\n"
- " USE fred, ONLY: astaire\n")
- assert expected in gen
-
-
-def test_adduse_default_funcnames():
- ''' Test that the adduse module method works correctly when we do
- not specify a list of funcnames '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- call = CallGen(sub, name="testcall", args=["a", "b"])
- sub.add(call)
- adduse("fred", call.root)
- gen = str(sub.root)
- expected = (" SUBROUTINE testsubroutine()\n"
- " USE fred\n")
- assert expected in gen
-
-
-def test_basedecl_errors():
- ''' Check that the BaseDeclGen class raises the correct errors if
- invalid combinations are requested. '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- with pytest.raises(RuntimeError) as err:
- sub.add(DeclGen(sub, datatype="integer", allocatable=True,
- entity_decls=["my_int"], initial_values=["1"]))
- assert ("Cannot specify initial values for variable(s) [\'my_int\'] "
- "because they have the \'allocatable\' attribute"
- in str(err.value))
- with pytest.raises(NotImplementedError) as err:
- sub.add(DeclGen(sub, datatype="integer", dimension="10",
- entity_decls=["my_int"], initial_values=["1"]))
- assert ("Specifying initial values for array declarations is not "
- "currently supported" in str(err.value))
- with pytest.raises(RuntimeError) as err:
- sub.add(DeclGen(sub, datatype="integer", intent="iN",
- entity_decls=["my_int"], initial_values=["1"]))
- assert ("Cannot assign (initial) values to variable(s) [\'my_int\'] as "
- "they have INTENT(in)" in str(err.value))
-
-
-def test_decl_logical(tmpdir):
- ''' Check that we can create a declaration for a logical variable '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- sub.add(DeclGen(sub, datatype="logical", entity_decls=["first_time"]))
- gen = str(sub.root).lower()
- assert "logical first_time" in gen
- # Add a second logical variable. Note that "first_time" will be ignored
- # since it has already been declared.
- sub.add(DeclGen(sub, datatype="logical", entity_decls=["first_time",
- "var2"]))
- gen = str(sub.root).lower()
- assert "logical var2" in gen
- assert gen.count("logical first_time") == 1
- # Check that the generated code compiles (if enabled)
- assert Compile(tmpdir).string_compiles(gen)
-
-
-def test_decl_char(tmpdir):
- ''' Check that we can create a declaration for a character variable '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- sub.add(CharDeclGen(sub, entity_decls=["my_string"]))
- # This time specifying a length
- sub.add(CharDeclGen(sub, length="28",
- entity_decls=["my_string2"]))
- # This time specifying a length and an initial value
- sub.add(CharDeclGen(sub, length="28",
- entity_decls=["my_string3"],
- initial_values=["\'this is a string\'"]))
- gen = str(sub.root).lower()
- assert "character my_string" in gen
- assert "character(len=28) my_string2" in gen
- assert "character(len=28) :: my_string3='this is a string'" in gen
- # Check that the generated Fortran compiles (if compilation testing is
- # enabled)
- assert Compile(tmpdir).string_compiles(gen)
- # Finally, check initialisation using a variable name. Since this
- # variable isn't declared, we can't include it in the compilation test.
- sub.add(CharDeclGen(sub, length="my_len",
- entity_decls=["my_string4"],
- initial_values=["some_variable"]))
- gen = str(sub.root).lower()
- assert "character(len=my_len) :: my_string4=some_variable" in gen
-
-
-def test_decl_save(tmpdir):
- ''' Check that we can declare variables with the save attribute '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- for idx, dtype in enumerate(DeclGen.SUPPORTED_TYPES):
- sub.add(DeclGen(sub, datatype=dtype, save=True,
- entity_decls=["var"+str(idx)]))
- sub.add(CharDeclGen(sub, save=True, length="10",
- entity_decls=["varchar"]))
- sub.add(TypeDeclGen(sub, save=True, datatype="field_type",
- entity_decls=["ufld"]))
- gen = str(module.root).lower()
- for dtype in DeclGen.SUPPORTED_TYPES:
- assert f"{dtype.lower()}, save :: var" in gen
- assert "character(len=10), save :: varchar" in gen
- assert "type(field_type), save :: ufld" in gen
- # Check that the generated code compiles (if enabled). We have to
- # manually add a declaration for "field_type".
- parts = gen.split("implicit none")
- gen = parts[0] + "implicit none\n" + TYPEDECL + parts[1]
- assert Compile(tmpdir).string_compiles(gen)
-
-
-def test_decl_target(tmpdir):
- ''' Check that we can declare variables with the target attribute '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- for idx, dtype in enumerate(DeclGen.SUPPORTED_TYPES):
- sub.add(DeclGen(sub, datatype=dtype, target=True,
- entity_decls=["var"+str(idx)]))
- sub.add(CharDeclGen(sub, target=True, length="10",
- entity_decls=["varchar"]))
- sub.add(TypeDeclGen(sub, target=True, datatype="field_type",
- entity_decls=["ufld"]))
- gen = str(module.root).lower()
- for dtype in DeclGen.SUPPORTED_TYPES:
- assert f"{dtype.lower()}, target :: var" in gen
- assert "character(len=10), target :: varchar" in gen
- assert "type(field_type), target :: ufld" in gen
- # Check that the generated code compiles (if enabled). We
- # must manually add a definition for the derived type.
- parts = gen.split("implicit none")
- gen = parts[0] + "implicit none\n" + TYPEDECL + parts[1]
- assert Compile(tmpdir).string_compiles(gen)
-
-
-def test_decl_private(tmpdir):
- ''' Check that we can declare variables with the 'private' attribute. '''
- module = ModuleGen(name="testmodule")
- for idx, dtype in enumerate(DeclGen.SUPPORTED_TYPES):
- module.add(DeclGen(module, datatype=dtype, private=True,
- entity_decls=["var"+str(idx)]))
- module.add(CharDeclGen(module, private=True, length="10",
- entity_decls=["varchar"]))
- module.add(TypeDeclGen(module, private=True, datatype="field_type",
- entity_decls=["ufld"]))
- gen = str(module.root).lower()
- for dtype in DeclGen.SUPPORTED_TYPES:
- assert f"{dtype.lower()}, private :: var" in gen
- assert "character(len=10), private :: varchar" in gen
- assert "type(field_type), private :: ufld" in gen
- # Check that the generated code compiles (if enabled). We
- # must manually add a definition for the derived type.
- parts = gen.split("implicit none")
- gen = parts[0] + "implicit none\n" + TYPEDECL + parts[1]
- assert Compile(tmpdir).string_compiles(gen)
-
-
-def test_decl_initial_vals(tmpdir):
- ''' Check that we can specify initial values for a declaration '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- # Check that we raise the correct error if the wrong number of
- # initial values is supplied
- with pytest.raises(RuntimeError) as err:
- sub.add(DeclGen(sub, datatype="real", entity_decls=["r1", "r2"],
- initial_values=["1.0"]))
- assert ("number of initial values supplied (1) does not match the number "
- "of variables to be declared (2: ['r1', 'r2'])" in str(err.value))
-
- # Single variables
- sub.add(DeclGen(sub, datatype="integer", save=True,
- entity_decls=["ivar"], initial_values=["1"]))
- sub.add(DeclGen(sub, datatype="real", save=True,
- entity_decls=["var"], initial_values=["1.0"]))
- sub.add(DeclGen(sub, datatype="logical", save=True,
- entity_decls=["lvar"], initial_values=[".false."]))
- gen = str(sub.root).lower()
- assert "logical, save :: lvar=.false." in gen
- assert "integer, save :: ivar=1" in gen
- assert "real, save :: var=1.0" in gen
- # Check that the generated code compiles (if enabled)
- _compile = Compile(tmpdir)
- assert _compile.string_compiles(gen)
-
- # Multiple variables
- sub.add(DeclGen(sub, datatype="integer", save=True,
- entity_decls=["ivar1", "ivar2"],
- initial_values=["1", "2"]))
- sub.add(DeclGen(sub, datatype="real", save=True,
- entity_decls=["var1", "var2"],
- initial_values=["1.0", "-1.0"]))
- sub.add(DeclGen(sub, datatype="logical", save=True,
- entity_decls=["lvar1", "lvar2"],
- initial_values=[".false.", ".true."]))
- gen = str(sub.root).lower()
- assert "logical, save :: lvar1=.false., lvar2=.true." in gen
- assert "integer, save :: ivar1=1, ivar2=2" in gen
- assert "real, save :: var1=1.0, var2=-1.0" in gen
- # Check that the generated code compiles (if enabled)
- assert _compile.string_compiles(gen)
-
-
-def test_declgen_invalid_vals():
- ''' Check that we raise the expected error if we attempt to create a
- DeclGen with an initial value that is inconsistent with the type of
- the variable '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- with pytest.raises(RuntimeError) as err:
- _ = DeclGen(sub, datatype="integer",
- entity_decls=["ival1", "ival2", "ival3"],
- initial_values=["good", "1", "-0.35"])
- assert ("Initial value of '-0.35' for an integer "
- "variable is invalid or unsupported" in str(err.value))
- with pytest.raises(RuntimeError) as err:
- _ = DeclGen(sub, datatype="real",
- entity_decls=["val1", "val2", "val3"],
- initial_values=["good", "1.0", "35"])
- assert ("Initial value of '35' for a real "
- "variable is invalid or unsupported" in str(err.value))
- with pytest.raises(RuntimeError) as err:
- _ = DeclGen(sub, datatype="logical",
- entity_decls=["val1", "val2", "val3"],
- initial_values=["good", ".fAlse.", "35"])
- assert ("Initial value of '35' for a logical variable is invalid or "
- "unsupported" in str(err.value))
- with pytest.raises(RuntimeError) as err:
- CharDeclGen(sub, entity_decls=["val1", "val2"],
- initial_values=["good", ".fAlse."])
- assert ("Initial value of \'.fAlse.' for a character variable"
- in str(err.value))
- with pytest.raises(RuntimeError) as err:
- CharDeclGen(sub, entity_decls=["val1"], initial_values=["35"])
- assert "Initial value of \'35\' for a character variable" in str(err.value)
-
-
-def test_declgen_wrong_type(monkeypatch):
- ''' Check that we raise an appropriate error if we attempt to create
- a DeclGen for an unsupported type '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- with pytest.raises(RuntimeError) as err:
- _ = DeclGen(sub, datatype="complex",
- entity_decls=["rvar1"])
- assert ("Only ['integer', 'real', 'logical'] types are "
- "currently supported" in str(err.value))
- # Check the internal error is raised within the validation routine if
- # an unsupported type is specified
- dgen = DeclGen(sub, datatype="integer", entity_decls=["my_int"])
- with pytest.raises(InternalError) as err:
- dgen._check_initial_values("complex", ["1"])
- assert (f"internal error: unsupported type 'complex' - should be one "
- f"of {dgen.SUPPORTED_TYPES}" in str(err.value))
- # Check that we get an internal error if the supplied type is in the
- # list of those supported but has not actually been implemented.
- # We have to monkeypatch the list of supported types...
- monkeypatch.setattr(DeclGen, "SUPPORTED_TYPES", value=["complex"])
- with pytest.raises(InternalError) as err:
- _ = DeclGen(sub, datatype="complex",
- entity_decls=["rvar1"])
- assert ("internal error: Type 'complex' is in DeclGen.SUPPORTED_TYPES "
- "but not handled by constructor" in str(err.value))
-
-
-def test_declgen_missing_names():
- ''' Check that we raise an error if we attempt to create a DeclGen
- without naming the variable(s) '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- with pytest.raises(RuntimeError) as err:
- _ = DeclGen(sub, datatype="integer")
- assert ("Cannot create a variable declaration without specifying "
- "the name" in str(err.value))
-
-
-def test_typedeclgen_names():
- ''' Check that the names method of TypeDeclGen works as expected '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- dgen = TypeDeclGen(sub, datatype="my_type",
- entity_decls=["type1"])
- sub.add(dgen)
- names = dgen.names
- assert len(names) == 1
- assert names[0] == "type1"
-
-
-def test_typedeclgen_missing_names():
- ''' Check that we raise an error if we attempt to create TypeDeclGen
- without naming the variables '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- with pytest.raises(RuntimeError) as err:
- _ = TypeDeclGen(sub, datatype="my_type")
- assert ("Cannot create a variable declaration without specifying"
- in str(err.value))
-
-
-def test_typedeclgen_values_error():
- ''' Check that we reject attempts to create a TypeDeclGen with
- initial values. '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- decl = TypeDeclGen(sub, datatype="my_type", entity_decls=["field1"])
- with pytest.raises(InternalError) as err:
- decl._check_initial_values("my_type", ["1.0"])
- assert ("This method should not have been called because initial values "
- "for derived-type declarations are not supported"
- in str(err.value))
-
-
-def test_typedeclgen_multiple_use():
- '''Check that we correctly handle the case where data of the same type
- has already been declared. '''
-
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- # first declaration
- datanames = ["type1"]
- sub.add(TypeDeclGen(sub, datatype="my_type",
- entity_decls=datanames))
- gen = str(sub.root)
- # second declaration
- datanames = ["type1", "type2"]
- sub.add(TypeDeclGen(sub, datatype="my_type",
- entity_decls=datanames))
- gen = str(sub.root)
- print(gen)
- expected = (
- " TYPE(my_type) type2\n"
- " TYPE(my_type) type1")
- assert expected in gen
- # check input data is not modified
- assert datanames == ["type1", "type2"]
-
-
-def test_typedeclgen_multiple_use2():
- '''Check that we do not correctly handle the case where data of a
- different type with the same name has already been declared.'''
-
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- # first declaration
- datanames = ["type1"]
- sub.add(TypeDeclGen(sub, datatype="my_type",
- entity_decls=datanames))
- gen = str(sub.root)
- # second declaration
- datanames = ["type1", "type2"]
- sub.add(TypeDeclGen(sub, datatype="my_type2",
- entity_decls=datanames))
- gen = str(sub.root)
- print(gen)
- expected = (
- " TYPE(my_type2) type1, type2\n"
- " TYPE(my_type) type1")
- assert expected in gen
- # check input data is not modified
- assert datanames == ["type1", "type2"]
-
-
-def test_declgen_multiple_use():
- '''Check that we correctly handle the case where data of the same type
-    has already been declared.'''
-
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- # first declaration
- datanames = ["i1"]
- sub.add(DeclGen(sub, datatype="integer",
- entity_decls=datanames))
- gen = str(sub.root)
- # second declaration
- datanames = ["i1", "i2"]
- sub.add(DeclGen(sub, datatype="integer",
- entity_decls=datanames))
- gen = str(sub.root)
- print(gen)
- expected = (
- " INTEGER i2\n"
- " INTEGER i1")
- assert expected in gen
- # check input data is not modified
- assert datanames == ["i1", "i2"]
-
-
-def test_declgen_multiple_use2():
- '''Check that we don't correctly handle the case where data of a
-    different type has already been declared. '''
-
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- # first declaration
- datanames = ["data1"]
- sub.add(DeclGen(sub, datatype="real",
- entity_decls=datanames))
- gen = str(sub.root)
- # second declaration
- datanames = ["data1", "data2"]
- sub.add(DeclGen(sub, datatype="integer",
- entity_decls=datanames))
- gen = str(sub.root)
- print(gen)
- expected = (
- " INTEGER data1, data2\n"
- " REAL data1")
- assert expected in gen
- # check input data is not modified
- assert datanames == ["data1", "data2"]
-
-
-@pytest.mark.xfail(reason="No way to add body of DEFAULT clause")
-def test_selectiongen():
- ''' Check that SelectionGen works as expected '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- sgen = SelectionGen(sub, expr="my_var")
- sub.add(sgen)
- agen = AssignGen(sgen, lhs="happy", rhs=".TRUE.")
- sgen.addcase("1", [agen])
- # TODO how do we specify what happens in the default case?
- sgen.adddefault()
- gen = str(sub.root)
- print(gen)
- expected = ("SELECT CASE ( my_var )\n"
- "CASE ( 1 )\n"
- " happy = .TRUE.\n"
- "CASE DEFAULT\n"
- " END SELECT")
- assert expected in gen
- assert False
-
-
-def test_selectiongen_addcase():
- ''' Check that SelectionGen.addcase() works as expected when no
- content is supplied'''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- sgen = SelectionGen(sub, expr="my_var")
- sub.add(sgen)
- sgen.addcase("1")
- gen = str(sub.root)
- print(gen)
- expected = (" SELECT CASE ( my_var )\n"
- " CASE ( 1 )\n"
- " END SELECT")
- assert expected in gen
-
-
-@pytest.mark.xfail(reason="Adding a CASE to a SELECT TYPE does not work")
-def test_typeselectiongen():
- ''' Check that SelectionGen works as expected for a type '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- sgen = SelectionGen(sub, expr="my_var=>another_var", typeselect=True)
- sub.add(sgen)
- agen = AssignGen(sgen, lhs="happy", rhs=".TRUE.")
- sgen.addcase("fspace", [agen])
- sgen.adddefault()
- gen = str(sub.root)
- print(gen)
- assert "SELECT TYPE ( my_var=>another_var )" in gen
- assert "TYPE IS ( fspace )" in gen
-
-
-def test_modulegen_add_wrong_parent():
- ''' Check that attempting to add an object to a ModuleGen fails
- if the object's parent is not that ModuleGen '''
- module = ModuleGen(name="testmodule")
- module_wrong = ModuleGen(name="another_module")
- sub = SubroutineGen(module_wrong, name="testsubroutine")
- with pytest.raises(RuntimeError) as err:
- module.add(sub)
- assert ("because it is not a descendant of it or of any of"
- in str(err.value))
-
-
-def test_do_loop_with_increment():
- ''' Test that we correctly generate code for a do loop with
- non-unit increment '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsub")
- module.add(sub)
- dogen = DoGen(sub, "it", "1", "10", step="2")
- sub.add(dogen)
- count = count_lines(sub.root, "DO it=1,10,2")
- assert count == 1
-
-
-def test_do_loop_add_after():
- ''' Test that we correctly generate code for a do loop when adding a
- child to it with position *after* '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsub")
- module.add(sub)
- dogen = DoGen(sub, "it", "1", "10", step="2")
- sub.add(dogen)
- assign1 = AssignGen(dogen, lhs="happy", rhs=".TRUE.")
- dogen.add(assign1)
- assign2 = AssignGen(dogen, lhs="sad", rhs=".FALSE.")
- dogen.add(assign2, position=["before", assign1.root])
- a1_line = line_number(sub.root, "happy = ")
- a2_line = line_number(sub.root, "sad = ")
- assert a1_line > a2_line
-
-
-def test_basegen_previous_loop_no_loop():
- '''Check that we raise an error when requesting the position of the
- previous loop if we don't have a loop '''
- module = ModuleGen(name="testmodule")
- sub = SubroutineGen(module, name="testsubroutine")
- module.add(sub)
- # Request the position of the last loop
- # even though we haven't got one
- with pytest.raises(RuntimeError) as err:
- sub.previous_loop()
- assert "no loop found - there is no previous loop" in str(err.value)
-
-
-def test_psyirgen_node():
- '''Check that the PSyIRGen prints the content of the provided PSyIR
- node inside the f2pygen node.
- '''
- module = ModuleGen(name="testmodule")
- subroutine = SubroutineGen(module, name="testsubroutine")
- module.add(subroutine)
-
- # Now add a PSyIR node inside the f2pygen tree
- node = Return()
- subroutine.add(PSyIRGen(subroutine, node))
-
- generated_code = str(module.root)
- expected = '''\
- MODULE testmodule
- IMPLICIT NONE
- CONTAINS
- SUBROUTINE testsubroutine()
- RETURN
- END SUBROUTINE testsubroutine
- END MODULE testmodule'''
-
- assert generated_code == expected
-
-
-def test_psyirgen_multiple_fparser_nodes():
- '''Check that the PSyIRGen prints the content of the provided PSyIR
- node inside the f2pygen node when the PSyIR node maps to more than
- one fparser nodes.
- '''
- module = ModuleGen(name="testmodule")
- subroutine = SubroutineGen(module, name="testsubroutine")
- module.add(subroutine)
-
-    # Create a single PSyIR node that maps to two fparser nodes: a comment
- # statement and a return statement.
- node = Return()
- node.preceding_comment = "Comment statement"
-
- subroutine.add(PSyIRGen(subroutine, node))
-
- generated_code = str(module.root)
- expected = '''\
- MODULE testmodule
- IMPLICIT NONE
- CONTAINS
- SUBROUTINE testsubroutine()
- ! Comment statement
- RETURN
- END SUBROUTINE testsubroutine
- END MODULE testmodule'''
-
- assert generated_code == expected
-
-
-def test_psyirgen_backendchecks(monkeypatch):
- '''Check that PSyIRGen uses the configuration object to determine
- whether or not to disable checks in the PSyIR backend.
- '''
- config = Config.get()
-
- module = ModuleGen(name="testmodule")
- subroutine = SubroutineGen(module, name="testsubroutine")
- module.add(subroutine)
- node = Return()
-
- # monkeypatch the `validate_global_constraints` method of the Return node
- # so that it always raises an error.
- def fake_validate():
- raise GenerationError("This is just a test")
-
- monkeypatch.setattr(node, "validate_global_constraints", fake_validate)
-
- # monkeypatch Config to turn off validation checks.
- monkeypatch.setattr(config, "_backend_checks_enabled", False)
-    # Constructing the PSyIRGen node should succeed.
- pgen = PSyIRGen(subroutine, node)
- assert isinstance(pgen, PSyIRGen)
- # monkeypatch Config to turn on validation checks.
- monkeypatch.setattr(config, "_backend_checks_enabled", True)
- # Construction should now fail.
- with pytest.raises(GenerationError) as err:
- PSyIRGen(subroutine, node)
- assert "This is just a test" in str(err.value)
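The removals above drop the f2pygen-based generation tests; elsewhere in this changeset, code generation is exercised through the PSyIR frontend and backend instead (the fortran_reader/fortran_writer fixtures). A minimal sketch of that route, assuming only the public FortranReader and FortranWriter classes; the sample source and names are illustrative:

from psyclone.psyir.frontend.fortran import FortranReader
from psyclone.psyir.backend.fortran import FortranWriter

SOURCE = '''
program demo
  integer :: it
  do it = 1, 10, 2
  end do
end program demo
'''

# Parse the Fortran into PSyIR and write it back out. The backend emits
# lower-case Fortran with explicit loop steps and "enddo", which matches
# the case changes in the expected strings later in this diff.
psyir = FortranReader().psyir_from_source(SOURCE)
print(FortranWriter()(psyir))
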
diff --git a/src/psyclone/tests/gen_kernel_stub_test.py b/src/psyclone/tests/gen_kernel_stub_test.py
index 9365b8de3e..b3da33caf4 100644
--- a/src/psyclone/tests/gen_kernel_stub_test.py
+++ b/src/psyclone/tests/gen_kernel_stub_test.py
@@ -39,8 +39,6 @@
import os
import pytest
-import fparser
-
from psyclone.errors import GenerationError
from psyclone.gen_kernel_stub import generate
from psyclone.parse.algorithm import ParseError
@@ -72,6 +70,6 @@ def test_gen_success():
''' Test for successful completion of the generate() function. '''
base_path = os.path.join(os.path.dirname(os.path.abspath(__file__)),
"test_files", "dynamo0p3")
- ptree = generate(os.path.join(base_path, "testkern_mod.F90"),
- api="lfric")
- assert isinstance(ptree, fparser.one.block_statements.Module)
+ stub_string = generate(os.path.join(base_path, "testkern_mod.F90"),
+ api="lfric")
+ assert isinstance(stub_string, str)
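For reference, the updated generate() entry point returns the kernel-stub source as a string rather than an fparser parse tree, so callers can write it straight to a file. A sketch under that assumption; the kernel file name and output path are illustrative only:

from psyclone.gen_kernel_stub import generate

# 'testkern_mod.F90' stands in for any LFRic kernel with valid metadata;
# the returned value is plain Fortran source.
stub_source = generate("testkern_mod.F90", api="lfric")
assert isinstance(stub_source, str)
with open("testkern_stub.f90", "w", encoding="utf-8") as ffile:
    ffile.write(stub_source)
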
diff --git a/src/psyclone/tests/generator_test.py b/src/psyclone/tests/generator_test.py
index ed6184b6c3..95567b2ff2 100644
--- a/src/psyclone/tests/generator_test.py
+++ b/src/psyclone/tests/generator_test.py
@@ -424,7 +424,7 @@ def test_no_script_gocean():
os.path.join(BASE_PATH, "gocean1p0", "single_invoke.f90"),
api="gocean")
assert "program single_invoke_test" in alg
- assert "MODULE psy_single_invoke_test" in str(psy)
+ assert "module psy_single_invoke_test" in str(psy)
def test_script_gocean(script_factory):
@@ -1016,7 +1016,7 @@ def test_generate_trans_error(tmpdir, capsys, monkeypatch):
"contains\n"
"subroutine setval_c()\n"
" use psyclone_builtins\n"
- " use constants_mod, only: r_def\n"
+ " use constants_mod\n"
" use field_mod, only : field_type\n"
" type(field_type) :: field\n"
" real(kind=r_def) :: value\n"
@@ -1580,7 +1580,7 @@ def test_generate_unresolved_container_lfric(invoke, tmpdir, monkeypatch):
f" use testkern_mod, only: testkern_type\n"
f"end subroutine dummy_kernel\n"
f"subroutine some_kernel()\n"
- f" use constants_mod, only: r_def\n"
+ f" use constants_mod\n"
f" use field_mod, only : field_type\n"
f" type(field_type) :: field1, field2, field3, field4\n"
f" real(kind=r_def) :: scalar\n"
diff --git a/src/psyclone/tests/gocean1p0_config_test.py b/src/psyclone/tests/gocean1p0_config_test.py
index c55e072bea..b42cdc700e 100644
--- a/src/psyclone/tests/gocean1p0_config_test.py
+++ b/src/psyclone/tests/gocean1p0_config_test.py
@@ -357,28 +357,28 @@ def test_valid_config_files():
gen = str(psy.gen)
new_loop1 = '''\
- DO j = 1, 2, 1
- DO i = 3, 4, 1
- CALL compute_kern1_code(i, j, cu_fld%data, p_fld%data, u_fld%data)
- END DO
- END DO'''
+ do j = 1, 2, 1
+ do i = 3, 4, 1
+ call compute_kern1_code(i, j, cu_fld%data, p_fld%data, u_fld%data)
+ enddo
+ enddo'''
assert new_loop1 in gen
new_loop2 = '''\
- DO j = 2, jstop, 1
- DO i = 1, istop + 1, 1
- CALL compute_kern2_code(i, j, cu_fld%data, p_fld%data, u_fld%data)
- END DO
- END DO'''
+ do j = 2, jstop, 1
+ do i = 1, istop + 1, 1
+ call compute_kern2_code(i, j, cu_fld%data, p_fld%data, u_fld%data)
+ enddo
+ enddo'''
assert new_loop2 in gen
# The third kernel tests {start} and {stop}
new_loop3 = '''\
- DO j = 2 - 2, 1, 1
- DO i = istop, istop + 1, 1
- CALL compute_kern3_code(i, j, cu_fld%data, p_fld%data, u_fld%data)
- END DO
- END DO'''
+ do j = 2 - 2, 1, 1
+ do i = istop, istop + 1, 1
+ call compute_kern3_code(i, j, cu_fld%data, p_fld%data, u_fld%data)
+ enddo
+ enddo'''
assert new_loop3 in gen
# Note that this file can not be compiled, since the new iteration space
diff --git a/src/psyclone/tests/gocean1p0_stencil_test.py b/src/psyclone/tests/gocean1p0_stencil_test.py
index 978da7c157..18481193fb 100644
--- a/src/psyclone/tests/gocean1p0_stencil_test.py
+++ b/src/psyclone/tests/gocean1p0_stencil_test.py
@@ -38,7 +38,6 @@
'''Stencil tests for PSy-layer code generation that are specific to the
GOcean 1.0 API.'''
-from __future__ import absolute_import
import os
import pytest
diff --git a/src/psyclone/tests/gocean1p0_test.py b/src/psyclone/tests/gocean1p0_test.py
index a04e7a25df..ff9f5276b2 100644
--- a/src/psyclone/tests/gocean1p0_test.py
+++ b/src/psyclone/tests/gocean1p0_test.py
@@ -79,33 +79,34 @@ def test_field(tmpdir, dist_mem):
generated_code = str(psy.gen)
before_kernel = (
- " MODULE psy_single_invoke_test\n"
- " USE field_mod\n"
- " USE kind_params_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_compute_cu(cu_fld, p_fld, u_fld)\n"
- " USE compute_cu_mod, ONLY: compute_cu_code\n"
- " TYPE(r2d_field), intent(inout) :: cu_fld\n"
- " TYPE(r2d_field), intent(inout) :: p_fld\n"
- " TYPE(r2d_field), intent(inout) :: u_fld\n"
- " INTEGER j\n"
- " INTEGER i\n\n")
+ "module psy_single_invoke_test\n"
+ " use field_mod\n"
+ " use kind_params_mod\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine invoke_0_compute_cu(cu_fld, p_fld, u_fld)\n"
+ " use compute_cu_mod, only : compute_cu_code\n"
+ " type(r2d_field), intent(inout) :: cu_fld\n"
+ " type(r2d_field), intent(inout) :: p_fld\n"
+ " type(r2d_field), intent(inout) :: u_fld\n"
+ " integer :: j\n"
+ " integer :: i\n\n")
remaining_code = (
- " DO j = cu_fld%internal%ystart, cu_fld%internal%ystop, 1\n"
- " DO i = cu_fld%internal%xstart, cu_fld%internal%xstop, 1\n"
- " CALL compute_cu_code(i, j, cu_fld%data, p_fld%data, "
+ " do j = cu_fld%internal%ystart, cu_fld%internal%ystop, 1\n"
+ " do i = cu_fld%internal%xstart, cu_fld%internal%xstop, 1\n"
+ " call compute_cu_code(i, j, cu_fld%data, p_fld%data, "
"u_fld%data)\n"
- " END DO\n"
- " END DO\n\n"
- " END SUBROUTINE invoke_0_compute_cu\n"
- " END MODULE psy_single_invoke_test")
+ " enddo\n"
+ " enddo\n\n"
+ " end subroutine invoke_0_compute_cu\n\n"
+ "end module psy_single_invoke_test\n")
if dist_mem:
# Fields that read and have a stencil access insert a halo exchange,
# in this case p_fld is a stencil but u_fld is pointwise.
halo_exchange_code = (
- " CALL p_fld%halo_exchange(1)\n")
+ " call p_fld%halo_exchange(1)\n")
expected_output = before_kernel + halo_exchange_code + remaining_code
else:
expected_output = before_kernel + remaining_code
@@ -127,42 +128,43 @@ def test_two_kernels(tmpdir, dist_mem):
generated_code = psy.gen
before_kernels = (
- " MODULE psy_single_invoke_two_kernels\n"
- " USE field_mod\n"
- " USE kind_params_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0(cu_fld, p_fld, u_fld, unew_fld, "
+ "module psy_single_invoke_two_kernels\n"
+ " use field_mod\n"
+ " use kind_params_mod\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine invoke_0(cu_fld, p_fld, u_fld, unew_fld, "
"uold_fld)\n"
- " USE compute_cu_mod, ONLY: compute_cu_code\n"
- " USE time_smooth_mod, ONLY: time_smooth_code\n"
- " TYPE(r2d_field), intent(inout) :: cu_fld\n"
- " TYPE(r2d_field), intent(inout) :: p_fld\n"
- " TYPE(r2d_field), intent(inout) :: u_fld\n"
- " TYPE(r2d_field), intent(inout) :: unew_fld\n"
- " TYPE(r2d_field), intent(inout) :: uold_fld\n"
- " INTEGER j\n"
- " INTEGER i\n\n")
+ " use compute_cu_mod, only : compute_cu_code\n"
+ " use time_smooth_mod, only : time_smooth_code\n"
+ " type(r2d_field), intent(inout) :: cu_fld\n"
+ " type(r2d_field), intent(inout) :: p_fld\n"
+ " type(r2d_field), intent(inout) :: u_fld\n"
+ " type(r2d_field), intent(inout) :: unew_fld\n"
+ " type(r2d_field), intent(inout) :: uold_fld\n"
+ " integer :: j\n"
+ " integer :: i\n\n")
first_kernel = (
- " DO j = cu_fld%internal%ystart, cu_fld%internal%ystop, 1\n"
- " DO i = cu_fld%internal%xstart, cu_fld%internal%xstop, 1\n"
- " CALL compute_cu_code(i, j, cu_fld%data, p_fld%data, "
+ " do j = cu_fld%internal%ystart, cu_fld%internal%ystop, 1\n"
+ " do i = cu_fld%internal%xstart, cu_fld%internal%xstop, 1\n"
+ " call compute_cu_code(i, j, cu_fld%data, p_fld%data, "
"u_fld%data)\n"
- " END DO\n"
- " END DO\n")
+ " enddo\n"
+ " enddo\n")
second_kernel = (
- " DO j = 1, SIZE(uold_fld%data, 2), 1\n"
- " DO i = 1, SIZE(uold_fld%data, 1), 1\n"
- " CALL time_smooth_code(i, j, u_fld%data, unew_fld%data, "
+ " do j = 1, SIZE(uold_fld%data, 2), 1\n"
+ " do i = 1, SIZE(uold_fld%data, 1), 1\n"
+ " call time_smooth_code(i, j, u_fld%data, unew_fld%data, "
"uold_fld%data)\n"
- " END DO\n"
- " END DO\n\n"
- " END SUBROUTINE invoke_0\n"
- " END MODULE psy_single_invoke_two_kernels")
+ " enddo\n"
+ " enddo\n\n"
+ " end subroutine invoke_0\n\n"
+ "end module psy_single_invoke_two_kernels\n")
if dist_mem:
# In this case the second kernel just has pointwise accesses, so it
# doesn't add any halo exchange.
- halos_first_kernel = " CALL p_fld%halo_exchange(1)\n"
+ halos_first_kernel = " call p_fld%halo_exchange(1)\n"
expected_output = before_kernels + halos_first_kernel + first_kernel \
+ second_kernel
else:
@@ -182,41 +184,42 @@ def test_two_kernels_with_dependencies(tmpdir, dist_mem):
generated_code = psy.gen
before_kernels = (
- " MODULE psy_single_invoke_two_kernels\n"
- " USE field_mod\n"
- " USE kind_params_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0(cu_fld, p_fld, u_fld)\n"
- " USE compute_cu_mod, ONLY: compute_cu_code\n"
- " TYPE(r2d_field), intent(inout) :: cu_fld\n"
- " TYPE(r2d_field), intent(inout) :: p_fld\n"
- " TYPE(r2d_field), intent(inout) :: u_fld\n"
- " INTEGER j\n"
- " INTEGER i\n\n")
+ "module psy_single_invoke_two_kernels\n"
+ " use field_mod\n"
+ " use kind_params_mod\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine invoke_0(cu_fld, p_fld, u_fld)\n"
+ " use compute_cu_mod, only : compute_cu_code\n"
+ " type(r2d_field), intent(inout) :: cu_fld\n"
+ " type(r2d_field), intent(inout) :: p_fld\n"
+ " type(r2d_field), intent(inout) :: u_fld\n"
+ " integer :: j\n"
+ " integer :: i\n\n")
first_kernel = (
- " DO j = cu_fld%internal%ystart, cu_fld%internal%ystop, 1\n"
- " DO i = cu_fld%internal%xstart, cu_fld%internal%xstop, 1\n"
- " CALL compute_cu_code(i, j, cu_fld%data, p_fld%data, "
+ " do j = cu_fld%internal%ystart, cu_fld%internal%ystop, 1\n"
+ " do i = cu_fld%internal%xstart, cu_fld%internal%xstop, 1\n"
+ " call compute_cu_code(i, j, cu_fld%data, p_fld%data, "
"u_fld%data)\n"
- " END DO\n"
- " END DO\n")
+ " enddo\n"
+ " enddo\n")
second_kernel = (
- " DO j = p_fld%internal%ystart, p_fld%internal%ystop, 1\n"
- " DO i = p_fld%internal%xstart, p_fld%internal%xstop, 1\n"
- " CALL compute_cu_code(i, j, p_fld%data, cu_fld%data,"
+ " do j = p_fld%internal%ystart, p_fld%internal%ystop, 1\n"
+ " do i = p_fld%internal%xstart, p_fld%internal%xstop, 1\n"
+ " call compute_cu_code(i, j, p_fld%data, cu_fld%data,"
" u_fld%data)\n"
- " END DO\n"
- " END DO\n\n"
- " END SUBROUTINE invoke_0\n"
- " END MODULE psy_single_invoke_two_kernels")
+ " enddo\n"
+ " enddo\n\n"
+ " end subroutine invoke_0\n\n"
+ "end module psy_single_invoke_two_kernels\n")
if dist_mem:
# In this case the second kernel just has a RaW dependency on the
# cu_fld of the first kernel, so a halo exchange should be inserted
# between the kernels in addition to the initial p_fld halo exchange.
- halos_first_kernel = " CALL p_fld%halo_exchange(1)\n"
- halos_second_kernel = " CALL cu_fld%halo_exchange(1)\n"
+ halos_first_kernel = " call p_fld%halo_exchange(1)\n"
+ halos_second_kernel = " call cu_fld%halo_exchange(1)\n"
expected_output = before_kernels + halos_first_kernel + first_kernel \
+ halos_second_kernel + second_kernel
else:
@@ -239,40 +242,41 @@ def test_grid_property(tmpdir, dist_mem):
generated_code = str(psy.gen)
before_kernels = (
- " MODULE psy_single_invoke_with_grid_props_test\n"
- " USE field_mod\n"
- " USE kind_params_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0(cu_fld, u_fld, du_fld, d_fld)\n"
- " USE kernel_requires_grid_props, ONLY: next_sshu_code\n"
- " TYPE(r2d_field), intent(inout) :: cu_fld\n"
- " TYPE(r2d_field), intent(inout) :: u_fld\n"
- " TYPE(r2d_field), intent(inout) :: du_fld\n"
- " TYPE(r2d_field), intent(inout) :: d_fld\n"
- " INTEGER j\n"
- " INTEGER i\n\n")
+ "module psy_single_invoke_with_grid_props_test\n"
+ " use field_mod\n"
+ " use kind_params_mod\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine invoke_0(cu_fld, u_fld, du_fld, d_fld)\n"
+ " use kernel_requires_grid_props, only : next_sshu_code\n"
+ " type(r2d_field), intent(inout) :: cu_fld\n"
+ " type(r2d_field), intent(inout) :: u_fld\n"
+ " type(r2d_field), intent(inout) :: du_fld\n"
+ " type(r2d_field), intent(inout) :: d_fld\n"
+ " integer :: j\n"
+ " integer :: i\n\n")
first_kernel = (
- " DO j = cu_fld%internal%ystart, cu_fld%internal%ystop, 1\n"
- " DO i = cu_fld%internal%xstart, cu_fld%internal%xstop, 1\n"
- " CALL next_sshu_code(i, j, cu_fld%data, u_fld%data, "
+ " do j = cu_fld%internal%ystart, cu_fld%internal%ystop, 1\n"
+ " do i = cu_fld%internal%xstart, cu_fld%internal%xstop, 1\n"
+ " call next_sshu_code(i, j, cu_fld%data, u_fld%data, "
"u_fld%grid%tmask, u_fld%grid%area_t, u_fld%grid%area_u)\n"
- " END DO\n"
- " END DO\n")
+ " enddo\n"
+ " enddo\n")
second_kernel = (
- " DO j = du_fld%internal%ystart, du_fld%internal%ystop, 1\n"
- " DO i = du_fld%internal%xstart, du_fld%internal%xstop, 1\n"
- " CALL next_sshu_code(i, j, du_fld%data, d_fld%data, "
+ " do j = du_fld%internal%ystart, du_fld%internal%ystop, 1\n"
+ " do i = du_fld%internal%xstart, du_fld%internal%xstop, 1\n"
+ " call next_sshu_code(i, j, du_fld%data, d_fld%data, "
"d_fld%grid%tmask, d_fld%grid%area_t, d_fld%grid%area_u)\n"
- " END DO\n"
- " END DO\n\n"
- " END SUBROUTINE invoke_0\n"
- " END MODULE psy_single_invoke_with_grid_props_test")
+ " enddo\n"
+ " enddo\n\n"
+ " end subroutine invoke_0\n\n"
+ "end module psy_single_invoke_with_grid_props_test\n")
if dist_mem:
# Grid properties do not insert halo exchanges, in this case
# only the u_fld and d_fld have read stencil accesses.
- halos_first_kernel = " CALL u_fld%halo_exchange(1)\n"
- halos_second_kernel = " CALL d_fld%halo_exchange(1)\n"
+ halos_first_kernel = " call u_fld%halo_exchange(1)\n"
+ halos_second_kernel = " call d_fld%halo_exchange(1)\n"
expected_output = before_kernels + halos_first_kernel + first_kernel \
+ halos_second_kernel + second_kernel
else:
@@ -295,26 +299,27 @@ def test_scalar_int_arg(tmpdir, dist_mem):
generated_code = str(psy.gen)
before_kernels = (
- " MODULE psy_single_invoke_scalar_int_test\n"
- " USE field_mod\n"
- " USE kind_params_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_bc_ssh(ncycle, ssh_fld)\n"
- " USE kernel_scalar_int, ONLY: bc_ssh_code\n"
- " INTEGER, intent(inout) :: ncycle\n"
- " TYPE(r2d_field), intent(inout) :: ssh_fld\n"
- " INTEGER j\n"
- " INTEGER i\n\n")
+ "module psy_single_invoke_scalar_int_test\n"
+ " use field_mod\n"
+ " use kind_params_mod\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine invoke_0_bc_ssh(ncycle, ssh_fld)\n"
+ " use kernel_scalar_int, only : bc_ssh_code\n"
+ " integer, intent(inout) :: ncycle\n"
+ " type(r2d_field), intent(inout) :: ssh_fld\n"
+ " integer :: j\n"
+ " integer :: i\n\n")
first_kernel = (
- " DO j = ssh_fld%whole%ystart, ssh_fld%whole%ystop, 1\n"
- " DO i = ssh_fld%whole%xstart, ssh_fld%whole%xstop, 1\n"
- " CALL bc_ssh_code(i, j, ncycle, ssh_fld%data, "
+ " do j = ssh_fld%whole%ystart, ssh_fld%whole%ystop, 1\n"
+ " do i = ssh_fld%whole%xstart, ssh_fld%whole%xstop, 1\n"
+ " call bc_ssh_code(i, j, ncycle, ssh_fld%data, "
"ssh_fld%grid%tmask)\n"
- " END DO\n"
- " END DO\n\n"
- " END SUBROUTINE invoke_0_bc_ssh\n"
- " END MODULE psy_single_invoke_scalar_int_test")
+ " enddo\n"
+ " enddo\n\n"
+ " end subroutine invoke_0_bc_ssh\n\n"
+ "end module psy_single_invoke_scalar_int_test\n")
# Scalar arguments do not insert halo exchanges in the distributed memory,
# in this case, since the only field has pointwise access, there are no
@@ -338,26 +343,27 @@ def test_scalar_float_arg(tmpdir, dist_mem):
generated_code = str(psy.gen)
before_kernel = (
- " MODULE psy_single_invoke_scalar_float_test\n"
- " USE field_mod\n"
- " USE kind_params_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_bc_ssh(a_scalar, ssh_fld)\n"
- " USE kernel_scalar_float, ONLY: bc_ssh_code\n"
- " REAL(KIND=go_wp), intent(inout) :: a_scalar\n"
- " TYPE(r2d_field), intent(inout) :: ssh_fld\n"
- " INTEGER j\n"
- " INTEGER i\n\n")
+ "module psy_single_invoke_scalar_float_test\n"
+ " use field_mod\n"
+ " use kind_params_mod\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine invoke_0_bc_ssh(a_scalar, ssh_fld)\n"
+ " use kernel_scalar_float, only : bc_ssh_code\n"
+ " real(kind=go_wp), intent(inout) :: a_scalar\n"
+ " type(r2d_field), intent(inout) :: ssh_fld\n"
+ " integer :: j\n"
+ " integer :: i\n\n")
first_kernel = (
- " DO j = ssh_fld%whole%ystart, ssh_fld%whole%ystop, 1\n"
- " DO i = ssh_fld%whole%xstart, ssh_fld%whole%xstop, 1\n"
- " CALL bc_ssh_code(i, j, a_scalar, ssh_fld%data, "
+ " do j = ssh_fld%whole%ystart, ssh_fld%whole%ystop, 1\n"
+ " do i = ssh_fld%whole%xstart, ssh_fld%whole%xstop, 1\n"
+ " call bc_ssh_code(i, j, a_scalar, ssh_fld%data, "
"ssh_fld%grid%subdomain%internal%xstop, ssh_fld%grid%tmask)\n"
- " END DO\n"
- " END DO\n\n"
- " END SUBROUTINE invoke_0_bc_ssh\n"
- " END MODULE psy_single_invoke_scalar_float_test")
+ " enddo\n"
+ " enddo\n\n"
+ " end subroutine invoke_0_bc_ssh\n\n"
+ "end module psy_single_invoke_scalar_float_test\n")
# Scalar arguments do not insert halo exchanges in the distributed memory,
# in this case, since the only field has pointwise access, there are no
@@ -395,30 +401,31 @@ def test_scalar_float_arg_from_module():
# declaration.
generated_code = str(psy.gen)
expected_output = (
- " MODULE psy_single_invoke_scalar_float_test\n"
- " USE field_mod\n"
- " USE kind_params_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_bc_ssh(ssh_fld)\n"
- " USE my_mod, ONLY: a_scalar\n"
- " USE kernel_scalar_float, ONLY: bc_ssh_code\n"
- " TYPE(r2d_field), intent(inout) :: ssh_fld\n"
- " INTEGER j\n"
- " INTEGER i\n"
- " INTEGER istop\n"
- " INTEGER jstop\n\n"
- " ! Look-up loop bounds\n"
- " istop = ssh_fld%grid%subdomain%internal%xstop\n"
- " jstop = ssh_fld%grid%subdomain%internal%ystop\n"
- " DO j = 1, jstop + 1, 1\n"
- " DO i = 1, istop + 1, 1\n"
- " CALL bc_ssh_code(i, j, a_scalar, ssh_fld%data, "
+ "module psy_single_invoke_scalar_float_test\n"
+ " use field_mod\n"
+ " use kind_params_mod\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine invoke_0_bc_ssh(ssh_fld)\n"
+ " use my_mod, only : a_scalar\n"
+ " use kernel_scalar_float, only : bc_ssh_code\n"
+ " type(r2d_field), intent(inout) :: ssh_fld\n"
+ " integer :: j\n"
+ " integer :: i\n"
+ " integer :: istop\n"
+ " integer :: jstop\n\n"
+ " ! Look-up loop bounds\n"
+ " istop = ssh_fld%grid%subdomain%internal%xstop\n"
+ " jstop = ssh_fld%grid%subdomain%internal%ystop\n"
+ " do j = 1, jstop + 1, 1\n"
+ " do i = 1, istop + 1, 1\n"
+ " call bc_ssh_code(i, j, a_scalar, ssh_fld%data, "
"ssh_fld%grid%subdomain%internal%xstop, ssh_fld%grid%tmask)\n"
- " END DO\n"
- " END DO\n\n"
- " END SUBROUTINE invoke_0_bc_ssh\n"
- " END MODULE psy_single_invoke_scalar_float_test")
+ " enddo\n"
+ " enddo\n\n"
+ " end subroutine invoke_0_bc_ssh\n\n"
+ "end module psy_single_invoke_scalar_float_test\n")
assert generated_code == expected_output
# We don't compile this generated code as the module is made up and
@@ -444,32 +451,33 @@ def test_ne_offset_cf_points(tmpdir):
generated_code = str(psy.gen)
expected_output = (
- " MODULE psy_single_invoke_test\n"
- " USE field_mod\n"
- " USE kind_params_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_compute_vort(vort_fld, p_fld, u_fld, v_fld)\n"
- " USE kernel_ne_offset_cf_mod, ONLY: compute_vort_code\n"
- " TYPE(r2d_field), intent(inout) :: vort_fld\n"
- " TYPE(r2d_field), intent(inout) :: p_fld\n"
- " TYPE(r2d_field), intent(inout) :: u_fld\n"
- " TYPE(r2d_field), intent(inout) :: v_fld\n"
- " INTEGER j\n"
- " INTEGER i\n"
- " INTEGER istop\n"
- " INTEGER jstop\n\n"
- " ! Look-up loop bounds\n"
- " istop = vort_fld%grid%subdomain%internal%xstop\n"
- " jstop = vort_fld%grid%subdomain%internal%ystop\n"
- " DO j = 1, jstop - 1, 1\n"
- " DO i = 1, istop - 1, 1\n"
- " CALL compute_vort_code(i, j, vort_fld%data, p_fld%data, "
+ "module psy_single_invoke_test\n"
+ " use field_mod\n"
+ " use kind_params_mod\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine invoke_0_compute_vort(vort_fld, p_fld, u_fld, v_fld)\n"
+ " use kernel_ne_offset_cf_mod, only : compute_vort_code\n"
+ " type(r2d_field), intent(inout) :: vort_fld\n"
+ " type(r2d_field), intent(inout) :: p_fld\n"
+ " type(r2d_field), intent(inout) :: u_fld\n"
+ " type(r2d_field), intent(inout) :: v_fld\n"
+ " integer :: j\n"
+ " integer :: i\n"
+ " integer :: istop\n"
+ " integer :: jstop\n\n"
+ " ! Look-up loop bounds\n"
+ " istop = vort_fld%grid%subdomain%internal%xstop\n"
+ " jstop = vort_fld%grid%subdomain%internal%ystop\n"
+ " do j = 1, jstop - 1, 1\n"
+ " do i = 1, istop - 1, 1\n"
+ " call compute_vort_code(i, j, vort_fld%data, p_fld%data, "
"u_fld%data, v_fld%data)\n"
- " END DO\n"
- " END DO\n\n"
- " END SUBROUTINE invoke_0_compute_vort\n"
- " END MODULE psy_single_invoke_test")
+ " enddo\n"
+ " enddo\n\n"
+ " end subroutine invoke_0_compute_vort\n\n"
+ "end module psy_single_invoke_test\n")
assert generated_code == expected_output
assert GOceanBuild(tmpdir).code_compiles(psy)
@@ -494,31 +502,32 @@ def test_ne_offset_ct_points(tmpdir):
generated_code = str(psy.gen)
expected_output = (
- " MODULE psy_single_invoke_test\n"
- " USE field_mod\n"
- " USE kind_params_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_compute_vort(p_fld, u_fld, v_fld)\n"
- " USE kernel_ne_offset_ct_mod, ONLY: compute_vort_code\n"
- " TYPE(r2d_field), intent(inout) :: p_fld\n"
- " TYPE(r2d_field), intent(inout) :: u_fld\n"
- " TYPE(r2d_field), intent(inout) :: v_fld\n"
- " INTEGER j\n"
- " INTEGER i\n"
- " INTEGER istop\n"
- " INTEGER jstop\n\n"
- " ! Look-up loop bounds\n"
- " istop = p_fld%grid%subdomain%internal%xstop\n"
- " jstop = p_fld%grid%subdomain%internal%ystop\n"
- " DO j = 2, jstop, 1\n"
- " DO i = 2, istop, 1\n"
- " CALL compute_vort_code(i, j, p_fld%data, u_fld%data, "
+ "module psy_single_invoke_test\n"
+ " use field_mod\n"
+ " use kind_params_mod\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine invoke_0_compute_vort(p_fld, u_fld, v_fld)\n"
+ " use kernel_ne_offset_ct_mod, only : compute_vort_code\n"
+ " type(r2d_field), intent(inout) :: p_fld\n"
+ " type(r2d_field), intent(inout) :: u_fld\n"
+ " type(r2d_field), intent(inout) :: v_fld\n"
+ " integer :: j\n"
+ " integer :: i\n"
+ " integer :: istop\n"
+ " integer :: jstop\n\n"
+ " ! Look-up loop bounds\n"
+ " istop = p_fld%grid%subdomain%internal%xstop\n"
+ " jstop = p_fld%grid%subdomain%internal%ystop\n"
+ " do j = 2, jstop, 1\n"
+ " do i = 2, istop, 1\n"
+ " call compute_vort_code(i, j, p_fld%data, u_fld%data, "
"v_fld%data)\n"
- " END DO\n"
- " END DO\n\n"
- " END SUBROUTINE invoke_0_compute_vort\n"
- " END MODULE psy_single_invoke_test")
+ " enddo\n"
+ " enddo\n\n"
+ " end subroutine invoke_0_compute_vort\n\n"
+ "end module psy_single_invoke_test\n")
assert generated_code == expected_output
assert GOceanBuild(tmpdir).code_compiles(psy)
@@ -543,28 +552,29 @@ def test_ne_offset_all_cu_points(tmpdir):
generated_code = str(psy.gen)
expected_output = (
- " MODULE psy_single_invoke_test\n"
- " USE field_mod\n"
- " USE kind_params_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_bc_solid_u(u_fld)\n"
- " USE boundary_conditions_ne_offset_mod, ONLY: bc_solid_u_code\n"
- " TYPE(r2d_field), intent(inout) :: u_fld\n"
- " INTEGER j\n"
- " INTEGER i\n"
- " INTEGER istop\n"
- " INTEGER jstop\n\n"
- " ! Look-up loop bounds\n"
- " istop = u_fld%grid%subdomain%internal%xstop\n"
- " jstop = u_fld%grid%subdomain%internal%ystop\n"
- " DO j = 1, jstop + 1, 1\n"
- " DO i = 1, istop, 1\n"
- " CALL bc_solid_u_code(i, j, u_fld%data, u_fld%grid%tmask)\n"
- " END DO\n"
- " END DO\n\n"
- " END SUBROUTINE invoke_0_bc_solid_u\n"
- " END MODULE psy_single_invoke_test")
+ "module psy_single_invoke_test\n"
+ " use field_mod\n"
+ " use kind_params_mod\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine invoke_0_bc_solid_u(u_fld)\n"
+ " use boundary_conditions_ne_offset_mod, only : bc_solid_u_code\n"
+ " type(r2d_field), intent(inout) :: u_fld\n"
+ " integer :: j\n"
+ " integer :: i\n"
+ " integer :: istop\n"
+ " integer :: jstop\n\n"
+ " ! Look-up loop bounds\n"
+ " istop = u_fld%grid%subdomain%internal%xstop\n"
+ " jstop = u_fld%grid%subdomain%internal%ystop\n"
+ " do j = 1, jstop + 1, 1\n"
+ " do i = 1, istop, 1\n"
+ " call bc_solid_u_code(i, j, u_fld%data, u_fld%grid%tmask)\n"
+ " enddo\n"
+ " enddo\n\n"
+ " end subroutine invoke_0_bc_solid_u\n\n"
+ "end module psy_single_invoke_test\n")
assert generated_code == expected_output
assert GOceanBuild(tmpdir).code_compiles(psy)
@@ -589,28 +599,29 @@ def test_ne_offset_all_cv_points(tmpdir):
generated_code = str(psy.gen)
expected_output = (
- " MODULE psy_single_invoke_test\n"
- " USE field_mod\n"
- " USE kind_params_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_bc_solid_v(v_fld)\n"
- " USE boundary_conditions_ne_offset_mod, ONLY: bc_solid_v_code\n"
- " TYPE(r2d_field), intent(inout) :: v_fld\n"
- " INTEGER j\n"
- " INTEGER i\n"
- " INTEGER istop\n"
- " INTEGER jstop\n\n"
- " ! Look-up loop bounds\n"
- " istop = v_fld%grid%subdomain%internal%xstop\n"
- " jstop = v_fld%grid%subdomain%internal%ystop\n"
- " DO j = 1, jstop, 1\n"
- " DO i = 1, istop + 1, 1\n"
- " CALL bc_solid_v_code(i, j, v_fld%data, v_fld%grid%tmask)\n"
- " END DO\n"
- " END DO\n\n"
- " END SUBROUTINE invoke_0_bc_solid_v\n"
- " END MODULE psy_single_invoke_test")
+ "module psy_single_invoke_test\n"
+ " use field_mod\n"
+ " use kind_params_mod\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine invoke_0_bc_solid_v(v_fld)\n"
+ " use boundary_conditions_ne_offset_mod, only : bc_solid_v_code\n"
+ " type(r2d_field), intent(inout) :: v_fld\n"
+ " integer :: j\n"
+ " integer :: i\n"
+ " integer :: istop\n"
+ " integer :: jstop\n\n"
+ " ! Look-up loop bounds\n"
+ " istop = v_fld%grid%subdomain%internal%xstop\n"
+ " jstop = v_fld%grid%subdomain%internal%ystop\n"
+ " do j = 1, jstop, 1\n"
+ " do i = 1, istop + 1, 1\n"
+ " call bc_solid_v_code(i, j, v_fld%data, v_fld%grid%tmask)\n"
+ " enddo\n"
+ " enddo\n\n"
+ " end subroutine invoke_0_bc_solid_v\n\n"
+ "end module psy_single_invoke_test\n")
assert generated_code == expected_output
assert GOceanBuild(tmpdir).code_compiles(psy)
@@ -635,28 +646,29 @@ def test_ne_offset_all_cf_points(tmpdir):
generated_code = str(psy.gen)
expected_output = (
- " MODULE psy_single_invoke_test\n"
- " USE field_mod\n"
- " USE kind_params_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_bc_solid_f(f_fld)\n"
- " USE boundary_conditions_ne_offset_mod, ONLY: bc_solid_f_code\n"
- " TYPE(r2d_field), intent(inout) :: f_fld\n"
- " INTEGER j\n"
- " INTEGER i\n"
- " INTEGER istop\n"
- " INTEGER jstop\n\n"
- " ! Look-up loop bounds\n"
- " istop = f_fld%grid%subdomain%internal%xstop\n"
- " jstop = f_fld%grid%subdomain%internal%ystop\n"
- " DO j = 1, jstop, 1\n"
- " DO i = 1, istop, 1\n"
- " CALL bc_solid_f_code(i, j, f_fld%data, f_fld%grid%tmask)\n"
- " END DO\n"
- " END DO\n\n"
- " END SUBROUTINE invoke_0_bc_solid_f\n"
- " END MODULE psy_single_invoke_test")
+ "module psy_single_invoke_test\n"
+ " use field_mod\n"
+ " use kind_params_mod\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine invoke_0_bc_solid_f(f_fld)\n"
+ " use boundary_conditions_ne_offset_mod, only : bc_solid_f_code\n"
+ " type(r2d_field), intent(inout) :: f_fld\n"
+ " integer :: j\n"
+ " integer :: i\n"
+ " integer :: istop\n"
+ " integer :: jstop\n\n"
+ " ! Look-up loop bounds\n"
+ " istop = f_fld%grid%subdomain%internal%xstop\n"
+ " jstop = f_fld%grid%subdomain%internal%ystop\n"
+ " do j = 1, jstop, 1\n"
+ " do i = 1, istop, 1\n"
+ " call bc_solid_f_code(i, j, f_fld%data, f_fld%grid%tmask)\n"
+ " enddo\n"
+ " enddo\n\n"
+ " end subroutine invoke_0_bc_solid_f\n\n"
+ "end module psy_single_invoke_test\n")
assert generated_code == expected_output
assert GOceanBuild(tmpdir).code_compiles(psy)
@@ -679,32 +691,33 @@ def test_sw_offset_cf_points(tmpdir):
generated_code = str(psy.gen)
expected_output = (
- " MODULE psy_single_invoke_test\n"
- " USE field_mod\n"
- " USE kind_params_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_compute_z(z_fld, p_fld, u_fld, v_fld)\n"
- " USE kernel_sw_offset_cf_mod, ONLY: compute_z_code\n"
- " TYPE(r2d_field), intent(inout) :: z_fld\n"
- " TYPE(r2d_field), intent(inout) :: p_fld\n"
- " TYPE(r2d_field), intent(inout) :: u_fld\n"
- " TYPE(r2d_field), intent(inout) :: v_fld\n"
- " INTEGER j\n"
- " INTEGER i\n"
- " INTEGER istop\n"
- " INTEGER jstop\n\n"
- " ! Look-up loop bounds\n"
- " istop = z_fld%grid%subdomain%internal%xstop\n"
- " jstop = z_fld%grid%subdomain%internal%ystop\n"
- " DO j = 2, jstop + 1, 1\n"
- " DO i = 2, istop + 1, 1\n"
- " CALL compute_z_code(i, j, z_fld%data, p_fld%data, "
+ "module psy_single_invoke_test\n"
+ " use field_mod\n"
+ " use kind_params_mod\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine invoke_0_compute_z(z_fld, p_fld, u_fld, v_fld)\n"
+ " use kernel_sw_offset_cf_mod, only : compute_z_code\n"
+ " type(r2d_field), intent(inout) :: z_fld\n"
+ " type(r2d_field), intent(inout) :: p_fld\n"
+ " type(r2d_field), intent(inout) :: u_fld\n"
+ " type(r2d_field), intent(inout) :: v_fld\n"
+ " integer :: j\n"
+ " integer :: i\n"
+ " integer :: istop\n"
+ " integer :: jstop\n\n"
+ " ! Look-up loop bounds\n"
+ " istop = z_fld%grid%subdomain%internal%xstop\n"
+ " jstop = z_fld%grid%subdomain%internal%ystop\n"
+ " do j = 2, jstop + 1, 1\n"
+ " do i = 2, istop + 1, 1\n"
+ " call compute_z_code(i, j, z_fld%data, p_fld%data, "
"u_fld%data, v_fld%data, p_fld%grid%dx, p_fld%grid%dy)\n"
- " END DO\n"
- " END DO\n\n"
- " END SUBROUTINE invoke_0_compute_z\n"
- " END MODULE psy_single_invoke_test")
+ " enddo\n"
+ " enddo\n\n"
+ " end subroutine invoke_0_compute_z\n\n"
+ "end module psy_single_invoke_test\n")
assert generated_code == expected_output
assert GOceanBuild(tmpdir).code_compiles(psy)
@@ -729,32 +742,33 @@ def test_sw_offset_all_cf_points(tmpdir):
generated_code = str(psy.gen)
expected_output = (
- " MODULE psy_single_invoke_test\n"
- " USE field_mod\n"
- " USE kind_params_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_apply_bcs_f(z_fld, p_fld, u_fld, v_fld)\n"
- " USE kernel_sw_offset_cf_mod, ONLY: apply_bcs_f_code\n"
- " TYPE(r2d_field), intent(inout) :: z_fld\n"
- " TYPE(r2d_field), intent(inout) :: p_fld\n"
- " TYPE(r2d_field), intent(inout) :: u_fld\n"
- " TYPE(r2d_field), intent(inout) :: v_fld\n"
- " INTEGER j\n"
- " INTEGER i\n"
- " INTEGER istop\n"
- " INTEGER jstop\n\n"
- " ! Look-up loop bounds\n"
- " istop = z_fld%grid%subdomain%internal%xstop\n"
- " jstop = z_fld%grid%subdomain%internal%ystop\n"
- " DO j = 1, jstop + 1, 1\n"
- " DO i = 1, istop + 1, 1\n"
- " CALL apply_bcs_f_code(i, j, z_fld%data, p_fld%data, "
+ "module psy_single_invoke_test\n"
+ " use field_mod\n"
+ " use kind_params_mod\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine invoke_0_apply_bcs_f(z_fld, p_fld, u_fld, v_fld)\n"
+ " use kernel_sw_offset_cf_mod, only : apply_bcs_f_code\n"
+ " type(r2d_field), intent(inout) :: z_fld\n"
+ " type(r2d_field), intent(inout) :: p_fld\n"
+ " type(r2d_field), intent(inout) :: u_fld\n"
+ " type(r2d_field), intent(inout) :: v_fld\n"
+ " integer :: j\n"
+ " integer :: i\n"
+ " integer :: istop\n"
+ " integer :: jstop\n\n"
+ " ! Look-up loop bounds\n"
+ " istop = z_fld%grid%subdomain%internal%xstop\n"
+ " jstop = z_fld%grid%subdomain%internal%ystop\n"
+ " do j = 1, jstop + 1, 1\n"
+ " do i = 1, istop + 1, 1\n"
+ " call apply_bcs_f_code(i, j, z_fld%data, p_fld%data, "
"u_fld%data, v_fld%data)\n"
- " END DO\n"
- " END DO\n\n"
- " END SUBROUTINE invoke_0_apply_bcs_f\n"
- " END MODULE psy_single_invoke_test")
+ " enddo\n"
+ " enddo\n\n"
+ " end subroutine invoke_0_apply_bcs_f\n\n"
+ "end module psy_single_invoke_test\n")
assert generated_code == expected_output
assert GOceanBuild(tmpdir).code_compiles(psy)
@@ -779,32 +793,33 @@ def test_sw_offset_ct_points(tmpdir):
generated_code = str(psy.gen)
expected_output = (
- " MODULE psy_single_invoke_test\n"
- " USE field_mod\n"
- " USE kind_params_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_compute_h(h_fld, p_fld, u_fld, v_fld)\n"
- " USE kernel_sw_offset_ct_mod, ONLY: compute_h_code\n"
- " TYPE(r2d_field), intent(inout) :: h_fld\n"
- " TYPE(r2d_field), intent(inout) :: p_fld\n"
- " TYPE(r2d_field), intent(inout) :: u_fld\n"
- " TYPE(r2d_field), intent(inout) :: v_fld\n"
- " INTEGER j\n"
- " INTEGER i\n"
- " INTEGER istop\n"
- " INTEGER jstop\n\n"
- " ! Look-up loop bounds\n"
- " istop = h_fld%grid%subdomain%internal%xstop\n"
- " jstop = h_fld%grid%subdomain%internal%ystop\n"
- " DO j = 2, jstop, 1\n"
- " DO i = 2, istop, 1\n"
- " CALL compute_h_code(i, j, h_fld%data, p_fld%data, "
+ "module psy_single_invoke_test\n"
+ " use field_mod\n"
+ " use kind_params_mod\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine invoke_0_compute_h(h_fld, p_fld, u_fld, v_fld)\n"
+ " use kernel_sw_offset_ct_mod, only : compute_h_code\n"
+ " type(r2d_field), intent(inout) :: h_fld\n"
+ " type(r2d_field), intent(inout) :: p_fld\n"
+ " type(r2d_field), intent(inout) :: u_fld\n"
+ " type(r2d_field), intent(inout) :: v_fld\n"
+ " integer :: j\n"
+ " integer :: i\n"
+ " integer :: istop\n"
+ " integer :: jstop\n\n"
+ " ! Look-up loop bounds\n"
+ " istop = h_fld%grid%subdomain%internal%xstop\n"
+ " jstop = h_fld%grid%subdomain%internal%ystop\n"
+ " do j = 2, jstop, 1\n"
+ " do i = 2, istop, 1\n"
+ " call compute_h_code(i, j, h_fld%data, p_fld%data, "
"u_fld%data, v_fld%data)\n"
- " END DO\n"
- " END DO\n\n"
- " END SUBROUTINE invoke_0_compute_h\n"
- " END MODULE psy_single_invoke_test")
+ " enddo\n"
+ " enddo\n\n"
+ " end subroutine invoke_0_compute_h\n\n"
+ "end module psy_single_invoke_test\n")
assert generated_code == expected_output
assert GOceanBuild(tmpdir).code_compiles(psy)
@@ -830,32 +845,33 @@ def test_sw_offset_all_ct_points(tmpdir):
generated_code = str(psy.gen)
expected_output = (
- " MODULE psy_single_invoke_test\n"
- " USE field_mod\n"
- " USE kind_params_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_apply_bcs_h(hfld, pfld, ufld, vfld)\n"
- " USE kernel_sw_offset_ct_mod, ONLY: apply_bcs_h_code\n"
- " TYPE(r2d_field), intent(inout) :: hfld\n"
- " TYPE(r2d_field), intent(inout) :: pfld\n"
- " TYPE(r2d_field), intent(inout) :: ufld\n"
- " TYPE(r2d_field), intent(inout) :: vfld\n"
- " INTEGER j\n"
- " INTEGER i\n"
- " INTEGER istop\n"
- " INTEGER jstop\n\n"
- " ! Look-up loop bounds\n"
- " istop = hfld%grid%subdomain%internal%xstop\n"
- " jstop = hfld%grid%subdomain%internal%ystop\n"
- " DO j = 1, jstop + 1, 1\n"
- " DO i = 1, istop + 1, 1\n"
- " CALL apply_bcs_h_code(i, j, hfld%data, pfld%data, "
+ "module psy_single_invoke_test\n"
+ " use field_mod\n"
+ " use kind_params_mod\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine invoke_0_apply_bcs_h(hfld, pfld, ufld, vfld)\n"
+ " use kernel_sw_offset_ct_mod, only : apply_bcs_h_code\n"
+ " type(r2d_field), intent(inout) :: hfld\n"
+ " type(r2d_field), intent(inout) :: pfld\n"
+ " type(r2d_field), intent(inout) :: ufld\n"
+ " type(r2d_field), intent(inout) :: vfld\n"
+ " integer :: j\n"
+ " integer :: i\n"
+ " integer :: istop\n"
+ " integer :: jstop\n\n"
+ " ! Look-up loop bounds\n"
+ " istop = hfld%grid%subdomain%internal%xstop\n"
+ " jstop = hfld%grid%subdomain%internal%ystop\n"
+ " do j = 1, jstop + 1, 1\n"
+ " do i = 1, istop + 1, 1\n"
+ " call apply_bcs_h_code(i, j, hfld%data, pfld%data, "
"ufld%data, vfld%data)\n"
- " END DO\n"
- " END DO\n\n"
- " END SUBROUTINE invoke_0_apply_bcs_h\n"
- " END MODULE psy_single_invoke_test")
+ " enddo\n"
+ " enddo\n\n"
+ " end subroutine invoke_0_apply_bcs_h\n\n"
+ "end module psy_single_invoke_test\n")
assert generated_code == expected_output
assert GOceanBuild(tmpdir).code_compiles(psy)
@@ -881,29 +897,30 @@ def test_sw_offset_all_cu_points(tmpdir):
generated_code = str(psy.gen)
expected_output = (
- " MODULE psy_single_invoke_test\n"
- " USE field_mod\n"
- " USE kind_params_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_apply_bcs_u(ufld, vfld)\n"
- " USE kernel_sw_offset_cu_mod, ONLY: apply_bcs_u_code\n"
- " TYPE(r2d_field), intent(inout) :: ufld\n"
- " TYPE(r2d_field), intent(inout) :: vfld\n"
- " INTEGER j\n"
- " INTEGER i\n"
- " INTEGER istop\n"
- " INTEGER jstop\n\n"
- " ! Look-up loop bounds\n"
- " istop = ufld%grid%subdomain%internal%xstop\n"
- " jstop = ufld%grid%subdomain%internal%ystop\n"
- " DO j = 1, jstop + 1, 1\n"
- " DO i = 1, istop + 1, 1\n"
- " CALL apply_bcs_u_code(i, j, ufld%data, vfld%data)\n"
- " END DO\n"
- " END DO\n\n"
- " END SUBROUTINE invoke_0_apply_bcs_u\n"
- " END MODULE psy_single_invoke_test")
+ "module psy_single_invoke_test\n"
+ " use field_mod\n"
+ " use kind_params_mod\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine invoke_0_apply_bcs_u(ufld, vfld)\n"
+ " use kernel_sw_offset_cu_mod, only : apply_bcs_u_code\n"
+ " type(r2d_field), intent(inout) :: ufld\n"
+ " type(r2d_field), intent(inout) :: vfld\n"
+ " integer :: j\n"
+ " integer :: i\n"
+ " integer :: istop\n"
+ " integer :: jstop\n\n"
+ " ! Look-up loop bounds\n"
+ " istop = ufld%grid%subdomain%internal%xstop\n"
+ " jstop = ufld%grid%subdomain%internal%ystop\n"
+ " do j = 1, jstop + 1, 1\n"
+ " do i = 1, istop + 1, 1\n"
+ " call apply_bcs_u_code(i, j, ufld%data, vfld%data)\n"
+ " enddo\n"
+ " enddo\n\n"
+ " end subroutine invoke_0_apply_bcs_u\n\n"
+ "end module psy_single_invoke_test\n")
assert generated_code == expected_output
assert GOceanBuild(tmpdir).code_compiles(psy)
@@ -929,29 +946,30 @@ def test_sw_offset_all_cv_points(tmpdir):
generated_code = str(psy.gen)
expected_output = (
- " MODULE psy_single_invoke_test\n"
- " USE field_mod\n"
- " USE kind_params_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_apply_bcs_v(vfld, ufld)\n"
- " USE kernel_sw_offset_cv_mod, ONLY: apply_bcs_v_code\n"
- " TYPE(r2d_field), intent(inout) :: vfld\n"
- " TYPE(r2d_field), intent(inout) :: ufld\n"
- " INTEGER j\n"
- " INTEGER i\n"
- " INTEGER istop\n"
- " INTEGER jstop\n\n"
- " ! Look-up loop bounds\n"
- " istop = vfld%grid%subdomain%internal%xstop\n"
- " jstop = vfld%grid%subdomain%internal%ystop\n"
- " DO j = 1, jstop + 1, 1\n"
- " DO i = 1, istop + 1, 1\n"
- " CALL apply_bcs_v_code(i, j, vfld%data, ufld%data)\n"
- " END DO\n"
- " END DO\n\n"
- " END SUBROUTINE invoke_0_apply_bcs_v\n"
- " END MODULE psy_single_invoke_test")
+ "module psy_single_invoke_test\n"
+ " use field_mod\n"
+ " use kind_params_mod\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine invoke_0_apply_bcs_v(vfld, ufld)\n"
+ " use kernel_sw_offset_cv_mod, only : apply_bcs_v_code\n"
+ " type(r2d_field), intent(inout) :: vfld\n"
+ " type(r2d_field), intent(inout) :: ufld\n"
+ " integer :: j\n"
+ " integer :: i\n"
+ " integer :: istop\n"
+ " integer :: jstop\n\n"
+ " ! Look-up loop bounds\n"
+ " istop = vfld%grid%subdomain%internal%xstop\n"
+ " jstop = vfld%grid%subdomain%internal%ystop\n"
+ " do j = 1, jstop + 1, 1\n"
+ " do i = 1, istop + 1, 1\n"
+ " call apply_bcs_v_code(i, j, vfld%data, ufld%data)\n"
+ " enddo\n"
+ " enddo\n\n"
+ " end subroutine invoke_0_apply_bcs_v\n\n"
+ "end module psy_single_invoke_test\n")
assert generated_code == expected_output
assert GOceanBuild(tmpdir).code_compiles(psy)
@@ -977,31 +995,32 @@ def test_offset_any_all_cu_points(tmpdir):
generated_code = str(psy.gen)
expected_output = (
- " MODULE psy_single_invoke_test\n"
- " USE field_mod\n"
- " USE kind_params_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_compute_u(ufld, vfld, hfld)\n"
- " USE kernel_any_offset_cu_mod, ONLY: compute_u_code\n"
- " TYPE(r2d_field), intent(inout) :: ufld\n"
- " TYPE(r2d_field), intent(inout) :: vfld\n"
- " TYPE(r2d_field), intent(inout) :: hfld\n"
- " INTEGER j\n"
- " INTEGER i\n"
- " INTEGER istop\n"
- " INTEGER jstop\n\n"
- " ! Look-up loop bounds\n"
- " istop = ufld%grid%subdomain%internal%xstop\n"
- " jstop = ufld%grid%subdomain%internal%ystop\n"
- " DO j = 1, jstop, 1\n"
- " DO i = 1, istop, 1\n"
- " CALL compute_u_code(i, j, ufld%data, vfld%data, "
+ "module psy_single_invoke_test\n"
+ " use field_mod\n"
+ " use kind_params_mod\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine invoke_0_compute_u(ufld, vfld, hfld)\n"
+ " use kernel_any_offset_cu_mod, only : compute_u_code\n"
+ " type(r2d_field), intent(inout) :: ufld\n"
+ " type(r2d_field), intent(inout) :: vfld\n"
+ " type(r2d_field), intent(inout) :: hfld\n"
+ " integer :: j\n"
+ " integer :: i\n"
+ " integer :: istop\n"
+ " integer :: jstop\n\n"
+ " ! Look-up loop bounds\n"
+ " istop = ufld%grid%subdomain%internal%xstop\n"
+ " jstop = ufld%grid%subdomain%internal%ystop\n"
+ " do j = 1, jstop, 1\n"
+ " do i = 1, istop, 1\n"
+ " call compute_u_code(i, j, ufld%data, vfld%data, "
"hfld%data)\n"
- " END DO\n"
- " END DO\n\n"
- " END SUBROUTINE invoke_0_compute_u\n"
- " END MODULE psy_single_invoke_test")
+ " enddo\n"
+ " enddo\n\n"
+ " end subroutine invoke_0_compute_u\n\n"
+ "end module psy_single_invoke_test\n")
assert generated_code == expected_output
assert GOceanBuild(tmpdir).code_compiles(psy)
@@ -1027,29 +1046,30 @@ def test_offset_any_all_points(tmpdir):
generated_code = str(psy.gen)
expected_output = (
- " MODULE psy_single_invoke_test\n"
- " USE field_mod\n"
- " USE kind_params_mod\n"
- " IMPLICIT NONE\n"
- " CONTAINS\n"
- " SUBROUTINE invoke_0_copy(voldfld, vfld)\n"
- " USE kernel_field_copy_mod, ONLY: field_copy_code\n"
- " TYPE(r2d_field), intent(inout) :: voldfld\n"
- " TYPE(r2d_field), intent(inout) :: vfld\n"
- " INTEGER j\n"
- " INTEGER i\n"
- " INTEGER istop\n"
- " INTEGER jstop\n\n"
- " ! Look-up loop bounds\n"
- " istop = voldfld%grid%subdomain%internal%xstop\n"
- " jstop = voldfld%grid%subdomain%internal%ystop\n"
- " DO j = 1, jstop + 1, 1\n"
- " DO i = 1, istop + 1, 1\n"
- " CALL field_copy_code(i, j, voldfld%data, vfld%data)\n"
- " END DO\n"
- " END DO\n\n"
- " END SUBROUTINE invoke_0_copy\n"
- " END MODULE psy_single_invoke_test")
+ "module psy_single_invoke_test\n"
+ " use field_mod\n"
+ " use kind_params_mod\n"
+ " implicit none\n"
+ " public\n\n"
+ " contains\n"
+ " subroutine invoke_0_copy(voldfld, vfld)\n"
+ " use kernel_field_copy_mod, only : field_copy_code\n"
+ " type(r2d_field), intent(inout) :: voldfld\n"
+ " type(r2d_field), intent(inout) :: vfld\n"
+ " integer :: j\n"
+ " integer :: i\n"
+ " integer :: istop\n"
+ " integer :: jstop\n\n"
+ " ! Look-up loop bounds\n"
+ " istop = voldfld%grid%subdomain%internal%xstop\n"
+ " jstop = voldfld%grid%subdomain%internal%ystop\n"
+ " do j = 1, jstop + 1, 1\n"
+ " do i = 1, istop + 1, 1\n"
+ " call field_copy_code(i, j, voldfld%data, vfld%data)\n"
+ " enddo\n"
+ " enddo\n\n"
+ " end subroutine invoke_0_copy\n\n"
+ "end module psy_single_invoke_test\n")
assert generated_code == expected_output
assert GOceanBuild(tmpdir).code_compiles(psy)
@@ -1127,8 +1147,8 @@ def test00p1_invoke_kernel_using_const_scalar():
_, invoke_info = parse(filename, api="gocean")
out = str(PSyFactory(API).create(invoke_info).gen)
# Old versions of PSyclone tried to declare '0' as a variable:
- # REAL(KIND=wp), intent(inout) :: 0
- # INTEGER, intent(inout) :: 0
+ # real(kind=wp), intent(inout) :: 0
+ # integer, intent(inout) :: 0
# Make sure this is not happening anymore
assert re.search(r"\s*real.*:: *0", out, re.I) is None
assert re.search(r"\s*integer.*:: *0", out, re.I) is None
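The revised expected outputs above can be regenerated with the same parse/PSyFactory calls these tests use. A sketch, assuming the algorithm file from the first test in this file and distributed memory switched off; the path is illustrative (the tests build it from their own test_files tree):

import os
from psyclone.parse.algorithm import parse
from psyclone.psyGen import PSyFactory

alg_file = os.path.join("test_files", "gocean1p0", "single_invoke.f90")
_, invoke_info = parse(alg_file, api="gocean")
psy = PSyFactory("gocean", distributed_memory=False).create(invoke_info)

# With the PSyIR backend the PSy layer is emitted in lower case, e.g. it
# starts with "module psy_single_invoke_test" and loops close with "enddo".
print(psy.gen)
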
diff --git a/src/psyclone/tests/gocean_build.py b/src/psyclone/tests/gocean_build.py
index c693e804ac..cb5b50b664 100644
--- a/src/psyclone/tests/gocean_build.py
+++ b/src/psyclone/tests/gocean_build.py
@@ -37,8 +37,6 @@
''' Module containing configuration required to build code generated
for the GOcean1.0 API '''
-from __future__ import absolute_import, print_function
-
import os
import subprocess
import sys
@@ -155,19 +153,20 @@ class GOceanOpenCLBuild(GOceanBuild):
'''
def code_compiles(self, psy_ast, dependencies=None):
- '''Attempts to build the OpenCL Fortran code supplied as an AST of
- f2pygen objects. Returns True for success, False otherwise.
+ '''
+    Use the given GOcean PSy object to generate the necessary PSyKAl
+    components to compile the OpenCL version of the PSy-layer. Returns True
+ for success, False otherwise.
If no Fortran compiler is available then returns True. All files
produced are deleted.
:param psy_ast: the AST of the generated PSy layer.
:type psy_ast: instance of :py:class:`psyclone.psyGen.PSy`
- :param dependencies: optional module- or file-names on which \
- one or more of the kernels/PSy-layer depend (and \
- that are not part of the GOcean infrastructure, \
- dl_esm_inf). These dependencies will be built in \
- the order they occur in this list.
+ :param dependencies: optional module- or file-names on which one or
+ more of the kernels/PSy-layer depend (and that are not part of the
+ GOcean infrastructure, dl_esm_inf). These dependencies will be
+ built in the order they occur in this list.
:type dependencies: list of str or NoneType
:return: True if generated code compiles, False otherwise.
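As the reworded docstring describes, code_compiles() accepts the generated PSy object plus an optional, ordered list of extra dependencies, and returns True when no Fortran compiler is available. A hedged usage sketch; the import path is assumed from the file's location in the tree, and the 'psy' fixture and dependency name are hypothetical:

from psyclone.tests.gocean_build import GOceanOpenCLBuild

def test_opencl_code_compiles(tmpdir, psy):
    # 'psy' is assumed to be a GOcean PSy object (e.g. built via PSyFactory);
    # 'extra_kernel_mod.f90' is a made-up dependency that is compiled before
    # the PSy layer itself.
    assert GOceanOpenCLBuild(tmpdir).code_compiles(
        psy, dependencies=["extra_kernel_mod.f90"])
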
diff --git a/src/psyclone/tests/kernel_tools_test.py b/src/psyclone/tests/kernel_tools_test.py
index 3be645292c..f0f6c4ff57 100644
--- a/src/psyclone/tests/kernel_tools_test.py
+++ b/src/psyclone/tests/kernel_tools_test.py
@@ -65,7 +65,7 @@ def test_run_default_mode(capsys):
"test_files", "dynamo0p3", "testkern_w0_mod.f90")
kernel_tools.run([str(kern_file), "-api", "lfric"])
out, err = capsys.readouterr()
- assert "Kernel-stub code:\n MODULE testkern_w0_mod\n" in out
+ assert "Kernel-stub code:\n module testkern_w0_mod\n" in out
assert not err
@@ -79,7 +79,7 @@ def test_run(capsys, tmpdir):
"-gen", "stub"])
result, _ = capsys.readouterr()
assert "Kernel-stub code:" in result
- assert "MODULE testkern_w0_mod" in result
+ assert "module testkern_w0_mod" in result
# Test without --limit, but with -o:
psy_file = tmpdir.join("psy.f90")
@@ -90,7 +90,7 @@ def test_run(capsys, tmpdir):
# Now read output file into a string and check:
with psy_file.open("r") as psy:
output = psy.read()
- assert "MODULE testkern_w0_mod" in str(output)
+ assert "module testkern_w0_mod" in str(output)
def test_run_version(capsys):
diff --git a/src/psyclone/tests/lfric_ref_elem_test.py b/src/psyclone/tests/lfric_ref_elem_test.py
index a6f17f4372..a5d91f8d73 100644
--- a/src/psyclone/tests/lfric_ref_elem_test.py
+++ b/src/psyclone/tests/lfric_ref_elem_test.py
@@ -200,12 +200,14 @@ def test_refelem_gen(tmpdir):
psy, _ = get_invoke("23.1_ref_elem_invoke.f90", TEST_API,
dist_mem=False, idx=0)
- assert LFRicBuild(tmpdir).code_compiles(psy)
gen = str(psy.gen).lower()
- assert "use reference_element_mod, only: reference_element_type" in gen
- assert "integer(kind=i_def) nfaces_re_h, nfaces_re_v" in gen
- assert ("real(kind=r_def), allocatable :: normals_to_horiz_faces(:,:), "
- "normals_to_vert_faces(:,:)" in gen)
+ assert "use reference_element_mod, only : reference_element_type" in gen
+ assert "integer(kind=i_def) :: nfaces_re_h" in gen
+ assert "integer(kind=i_def) :: nfaces_re_v" in gen
+ assert ("real(kind=r_def), allocatable, dimension(:,:) :: "
+ "normals_to_horiz_faces" in gen)
+ assert ("real(kind=r_def), allocatable, dimension(:,:) :: "
+ "normals_to_vert_faces" in gen)
assert ("class(reference_element_type), pointer :: reference_element "
"=> null()" in gen)
# We need a mesh object in order to get a reference_element object
@@ -224,6 +226,7 @@ def test_refelem_gen(tmpdir):
"map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, "
"undf_w3, map_w3(:,cell), nfaces_re_h, nfaces_re_v, "
"normals_to_horiz_faces, normals_to_vert_faces)" in gen)
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_duplicate_refelem_gen(tmpdir):
@@ -232,11 +235,13 @@ def test_duplicate_refelem_gen(tmpdir):
psy, _ = get_invoke("23.2_multi_ref_elem_invoke.f90", TEST_API,
dist_mem=False, idx=0)
- assert LFRicBuild(tmpdir).code_compiles(psy)
gen = str(psy.gen).lower()
assert gen.count(
- "real(kind=r_def), allocatable :: normals_to_horiz_faces(:,:)"
- ", normals_to_vert_faces(:,:)") == 1
+ "real(kind=r_def), allocatable, dimension(:,:) :: "
+ "normals_to_horiz_faces") == 1
+ assert gen.count(
+ "real(kind=r_def), allocatable, dimension(:,:) :: "
+ "normals_to_vert_faces") == 1
assert gen.count(
"reference_element => mesh%get_reference_element") == 1
assert gen.count(
@@ -257,6 +262,7 @@ def test_duplicate_refelem_gen(tmpdir):
"map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, "
"undf_w3, map_w3(:,cell), nfaces_re_h, nfaces_re_v, "
"normals_to_horiz_faces, normals_to_vert_faces)" in gen)
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_union_refelem_gen(tmpdir):
@@ -265,20 +271,19 @@ def test_union_refelem_gen(tmpdir):
psy, _ = get_invoke("23.3_shared_ref_elem_invoke.f90", TEST_API,
dist_mem=False, idx=0)
- assert LFRicBuild(tmpdir).code_compiles(psy)
gen = str(psy.gen).lower()
assert (
- " reference_element => mesh%get_reference_element()\n"
- " nfaces_re_h = reference_element%get_number_horizontal_faces()\n"
- " nfaces_re_v = reference_element%get_number_vertical_faces()\n"
- " call reference_element%get_normals_to_horizontal_faces("
+ " reference_element => mesh%get_reference_element()\n"
+ " nfaces_re_h = reference_element%get_number_horizontal_faces()\n"
+ " nfaces_re_v = reference_element%get_number_vertical_faces()\n"
+ " call reference_element%get_normals_to_horizontal_faces("
"normals_to_horiz_faces)\n"
- " call reference_element%get_outward_normals_to_horizontal_faces("
+ " call reference_element%get_outward_normals_to_horizontal_faces("
"out_normals_to_horiz_faces)\n"
- " call reference_element%get_normals_to_vertical_faces("
+ " call reference_element%get_normals_to_vertical_faces("
"normals_to_vert_faces)\n"
- " call reference_element%get_outward_normals_to_vertical_faces("
+ " call reference_element%get_outward_normals_to_vertical_faces("
"out_normals_to_vert_faces)\n" in gen)
assert ("call testkern_ref_elem_code(nlayers_f1, a, f1_data, "
"f2_data, m1_data, m2_data, ndf_w1, undf_w1, "
@@ -291,6 +296,7 @@ def test_union_refelem_gen(tmpdir):
" map_w3(:,cell), nfaces_re_v, nfaces_re_h, "
"out_normals_to_vert_faces, normals_to_vert_faces, "
"out_normals_to_horiz_faces)" in gen)
+ assert LFRicBuild(tmpdir).code_compiles(psy)
def test_all_faces_refelem_gen(tmpdir):
@@ -304,10 +310,10 @@ def test_all_faces_refelem_gen(tmpdir):
gen = str(psy.gen).lower()
assert (
- " reference_element => mesh%get_reference_element()\n"
- " nfaces_re = reference_element%get_number_faces()\n"
- " call reference_element%get_normals_to_faces(normals_to_faces)\n"
- " call reference_element%get_outward_normals_to_faces("
+ " reference_element => mesh%get_reference_element()\n"
+ " nfaces_re = reference_element%get_number_faces()\n"
+ " call reference_element%get_normals_to_faces(normals_to_faces)\n"
+ " call reference_element%get_outward_normals_to_faces("
"out_normals_to_faces)\n" in gen)
assert ("call testkern_ref_elem_all_faces_code(nlayers_f1, a, f1_data, "
"f2_data, m1_data, m2_data, ndf_w1, undf_w1, "
@@ -329,7 +335,7 @@ def test_refelem_no_rdef(tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
gen = str(psy.gen).lower()
- assert "use constants_mod, only: r_solver, r_def, i_def" in gen
+ assert "use constants_mod" in gen
def test_ref_element_symbols():
diff --git a/src/psyclone/tests/nemo/transformations/openacc/data_directive_test.py b/src/psyclone/tests/nemo/transformations/openacc/data_directive_test.py
index 5c477fed32..409b10de82 100644
--- a/src/psyclone/tests/nemo/transformations/openacc/data_directive_test.py
+++ b/src/psyclone/tests/nemo/transformations/openacc/data_directive_test.py
@@ -42,7 +42,6 @@
import os
import pytest
-from psyclone.errors import InternalError
from psyclone.psyGen import TransInfo
from psyclone.psyir.nodes import ACCDataDirective, Schedule, Routine
from psyclone.psyir.transformations import TransformationError, ACCKernelsTrans
@@ -80,17 +79,17 @@ def test_explicit(fortran_reader, fortran_writer):
schedule = psyir.walk(Routine)[0]
acc_trans = TransInfo().get_trans_name('ACCDataTrans')
acc_trans.apply(schedule.children)
- gen_code = fortran_writer(psyir)
+ code = fortran_writer(psyir)
assert (" real, dimension(jpi,jpj,jpk) :: umask\n"
"\n"
" !$acc data copyout(umask)\n"
- " do jk = 1, jpk") in gen_code
+ " do jk = 1, jpk") in code
assert (" enddo\n"
" !$acc end data\n"
"\n"
- "end program explicit_do") in gen_code
+ "end program explicit_do") in code
def test_data_single_node(fortran_reader):
@@ -103,19 +102,6 @@ def test_data_single_node(fortran_reader):
assert isinstance(schedule[0], ACCDataDirective)
-def test_data_no_gen_code(fortran_reader):
- ''' Check that the ACCDataDirective.gen_code() method raises the
- expected InternalError as it should not be called. '''
- psyir = fortran_reader.psyir_from_source(EXPLICIT_DO)
- schedule = psyir.walk(Routine)[0]
- acc_trans = TransInfo().get_trans_name('ACCDataTrans')
- acc_trans.apply(schedule.children[0:2])
- with pytest.raises(InternalError) as err:
- schedule.children[0].gen_code(schedule)
- assert ("ACCDataDirective.gen_code should not have "
- "been called" in str(err.value))
-
-
def test_explicit_directive(fortran_reader, fortran_writer):
'''Check code generation for a single explicit loop containing a
kernel with a pre-existing (openacc kernels) directive.
@@ -127,19 +113,19 @@ def test_explicit_directive(fortran_reader, fortran_writer):
acc_trans.apply(schedule.children, {"default_present": True})
acc_trans = TransInfo().get_trans_name('ACCDataTrans')
acc_trans.apply(schedule.children)
- gen_code = fortran_writer(psyir)
+ code = fortran_writer(psyir)
assert (" real, dimension(jpi,jpj,jpk) :: umask\n"
"\n"
" !$acc data copyout(umask)\n"
" !$acc kernels default(present)\n"
- " do jk = 1, jpk, 1") in gen_code
+ " do jk = 1, jpk, 1") in code
assert (" enddo\n"
" !$acc end kernels\n"
" !$acc end data\n"
"\n"
- "end program explicit_do") in gen_code
+ "end program explicit_do") in code
def test_array_syntax(fortran_reader, fortran_writer):
@@ -167,18 +153,18 @@ def test_array_syntax(fortran_reader, fortran_writer):
# regions so just put two of the loops into regions.
acc_trans.apply([schedule.children[0]])
acc_trans.apply([schedule.children[-1]])
- gen_code = fortran_writer(psyir)
+ code = fortran_writer(psyir)
assert (" real(kind=wp), dimension(jpi,jpj,jpk) :: ztfw\n"
"\n"
" !$acc data copyout(zftv)\n"
- " zftv(:,:,:) = 0.0d0" in gen_code)
+ " zftv(:,:,:) = 0.0d0" in code)
assert (" !$acc data copyout(tmask)\n"
" tmask(:,:) = jpi\n"
" !$acc end data\n"
"\n"
- "end subroutine tra_ldf_iso" in gen_code)
+ "end subroutine tra_ldf_iso" in code)
def test_multi_data(fortran_reader, fortran_writer):
@@ -189,22 +175,22 @@ def test_multi_data(fortran_reader, fortran_writer):
acc_trans = TransInfo().get_trans_name('ACCDataTrans')
acc_trans.apply(schedule.children[0].loop_body[0:2])
acc_trans.apply(schedule.children[0].loop_body[1:3])
- gen_code = fortran_writer(psyir)
+ code = fortran_writer(psyir)
assert (" do jk = 1, jpkm1, 1\n"
" !$acc data copyin(ptb,wmask), copyout(zdk1t,zdkt)\n"
- " do jj = 1, jpj, 1") in gen_code
+ " do jj = 1, jpj, 1") in code
assert (" end if\n"
" !$acc end data\n"
" !$acc data copyin(e2_e1u,e2u,e3t_n,e3u_n,pahu,r1_e1e2t,"
"umask,uslp,wmask,zdit,zdk1t,zdkt,zftv), copyout(zftu), "
"copy(pta)\n"
- " do jj = 1, jpjm1, 1") in gen_code
+ " do jj = 1, jpjm1, 1") in code
assert (" enddo\n"
" !$acc end data\n"
- " enddo") in gen_code
+ " enddo") in code
def test_replicated_loop(fortran_reader, fortran_writer, tmpdir):
@@ -223,15 +209,15 @@ def test_replicated_loop(fortran_reader, fortran_writer, tmpdir):
acc_trans = TransInfo().get_trans_name('ACCDataTrans')
acc_trans.apply(schedule.children[0:1])
acc_trans.apply(schedule.children[1:2])
- gen_code = fortran_writer(psyir)
+ code = fortran_writer(psyir)
assert (" !$acc data copyout(zwx)\n"
" zwx(:,:) = 0.e0\n"
" !$acc end data\n"
" !$acc data copyout(zwx)\n"
" zwx(:,:) = 0.e0\n"
- " !$acc end data" in gen_code)
- assert Compile(tmpdir).string_compiles(gen_code)
+ " !$acc end data" in code)
+ assert Compile(tmpdir).string_compiles(code)
def test_data_ref(fortran_reader, fortran_writer):
@@ -252,8 +238,8 @@ def test_data_ref(fortran_reader, fortran_writer):
schedule = psyir.walk(Routine)[0]
acc_trans = TransInfo().get_trans_name('ACCDataTrans')
acc_trans.apply(schedule.children)
- gen_code = fortran_writer(psyir)
- assert "!$acc data copyin(a), copyout(prof,prof%npind)" in gen_code
+ code = fortran_writer(psyir)
+ assert "!$acc data copyin(a), copyout(prof,prof%npind)" in code
def test_data_ref_read(fortran_reader, fortran_writer):
@@ -272,8 +258,8 @@ def test_data_ref_read(fortran_reader, fortran_writer):
schedule = psyir.walk(Routine)[0]
acc_trans = TransInfo().get_trans_name('ACCDataTrans')
acc_trans.apply(schedule.children)
- gen_code = fortran_writer(psyir)
- assert "copyin(fld,fld%data)" in gen_code
+ code = fortran_writer(psyir)
+ assert "copyin(fld,fld%data)" in code
def test_multi_array_derived_type(fortran_reader, fortran_writer):
@@ -295,9 +281,9 @@ def test_multi_array_derived_type(fortran_reader, fortran_writer):
schedule = psyir.walk(Schedule)[0]
acc_trans = TransInfo().get_trans_name('ACCDataTrans')
acc_trans.apply(schedule.children)
- gen_code = fortran_writer(psyir)
+ code = fortran_writer(psyir)
assert ("!$acc data copyin(small_holding,small_holding(2)%data), "
- "copyout(sto_tmp)" in gen_code)
+ "copyout(sto_tmp)" in code)
def test_multi_array_derived_type_error(fortran_reader):
@@ -350,8 +336,8 @@ def test_array_section(fortran_reader, fortran_writer):
schedule = psyir.walk(Schedule)[0]
acc_trans = TransInfo().get_trans_name('ACCDataTrans')
acc_trans.apply(schedule.children)
- gen_code = fortran_writer(psyir)
- assert "!$acc data copyin(b,c), copyout(a)" in gen_code
+ code = fortran_writer(psyir)
+ assert "!$acc data copyin(b,c), copyout(a)" in code
def test_kind_parameter(fortran_reader, fortran_writer):
@@ -369,9 +355,9 @@ def test_kind_parameter(fortran_reader, fortran_writer):
schedule = psyir.walk(Schedule)[0]
acc_trans = TransInfo().get_trans_name('ACCDataTrans')
acc_trans.apply(schedule.children[0:1])
- gen_code = fortran_writer(psyir)
+ code = fortran_writer(psyir)
- assert "copyin(wp)" not in gen_code.lower()
+ assert "copyin(wp)" not in code.lower()
def test_no_copyin_intrinsics(fortran_reader, fortran_writer):
@@ -391,9 +377,9 @@ def test_no_copyin_intrinsics(fortran_reader, fortran_writer):
psy = fortran_reader.psyir_from_source(code)
schedule = psy.walk(Routine)[0]
acc_trans.apply(schedule.children[0:1])
- gen_code = fortran_writer(psy)
+ code = fortran_writer(psy)
idx = intrinsic.index("(")
- assert f"copyin({intrinsic[0:idx]})" not in gen_code.lower()
+ assert f"copyin({intrinsic[0:idx]})" not in code.lower()
def test_no_code_blocks(fortran_reader):
@@ -486,8 +472,8 @@ def test_array_access_in_ifblock(fortran_reader, fortran_writer):
acc_trans = TransInfo().get_trans_name('ACCDataTrans')
# Put the second loop nest inside a data region
acc_trans.apply(schedule.children[1:])
- gen_code = fortran_writer(psyir)
- assert " copyin(zmask)" in gen_code
+ code = fortran_writer(psyir)
+ assert " copyin(zmask)" in code
def test_array_access_loop_bounds(fortran_reader, fortran_writer):
@@ -509,5 +495,5 @@ def test_array_access_loop_bounds(fortran_reader, fortran_writer):
acc_trans = TransInfo().get_trans_name('ACCDataTrans')
# Put the second loop nest inside a data region
acc_trans.apply(schedule.children)
- gen_code = fortran_writer(psyir)
- assert "copyin(trim_width)" in gen_code
+ code = fortran_writer(psyir)
+ assert "copyin(trim_width)" in code
diff --git a/src/psyclone/tests/nemo/transformations/openmp/openmp_test.py b/src/psyclone/tests/nemo/transformations/openmp/openmp_test.py
index 68188ee041..d7578b1eff 100644
--- a/src/psyclone/tests/nemo/transformations/openmp/openmp_test.py
+++ b/src/psyclone/tests/nemo/transformations/openmp/openmp_test.py
@@ -155,13 +155,13 @@ def test_omp_parallel_multi(fortran_reader, fortran_writer):
# loop nests (Python's slice notation is such that the expression below
# gives elements 2-3).
otrans.apply(schedule[0].loop_body[2:4])
- gen_code = fortran_writer(psyir).lower()
+ code = fortran_writer(psyir).lower()
assert (" !$omp parallel default(shared), private(ji,jj,zabe1,zcof1,"
"zmsku)\n"
" do jj = 1, jpjm1, 1\n"
" do ji = 1, jpim1, 1\n"
" zabe1 = pahu(ji,jj,jk) * e2_e1u(ji,jj) * "
- "e3u_n(ji,jj,jk)\n" in gen_code)
+ "e3u_n(ji,jj,jk)\n" in code)
assert (" do jj = 2, jpjm1, 1\n"
" do ji = 2, jpim1, 1\n"
" pta(ji,jj,jk,jn) = pta(ji,jj,jk,jn) + "
@@ -170,7 +170,7 @@ def test_omp_parallel_multi(fortran_reader, fortran_writer):
"e3t_n(ji,jj,jk)\n"
" enddo\n"
" enddo\n"
- " !$omp end parallel\n" in gen_code)
+ " !$omp end parallel\n" in code)
directive = schedule[0].loop_body[2]
assert isinstance(directive, OMPParallelDirective)
@@ -208,7 +208,7 @@ def test_omp_do_code_gen(fortran_reader, fortran_writer):
.else_body[0].else_body[0])
loop_trans.apply(schedule[0].loop_body[1]
.else_body[0].else_body[0].dir_body[0])
- gen_code = fortran_writer(psyir).lower()
+ code = fortran_writer(psyir).lower()
correct = ''' !$omp parallel default(shared), private(ji,jj)
!$omp do schedule(auto)
do jj = 1, jpj, 1
@@ -219,7 +219,7 @@ def test_omp_do_code_gen(fortran_reader, fortran_writer):
enddo
!$omp end do
!$omp end parallel'''
- assert correct in gen_code
+ assert correct in code
directive = schedule[0].loop_body[1].else_body[0].else_body[0].dir_body[0]
assert isinstance(directive, OMPDoDirective)
diff --git a/src/psyclone/tests/nemo/transformations/profiling/nemo_profile_test.py b/src/psyclone/tests/nemo/transformations/profiling/nemo_profile_test.py
index 28a98bb1bd..e3508dbf21 100644
--- a/src/psyclone/tests/nemo/transformations/profiling/nemo_profile_test.py
+++ b/src/psyclone/tests/nemo/transformations/profiling/nemo_profile_test.py
@@ -192,11 +192,11 @@ def test_profile_inside_if1(fortran_reader, fortran_writer):
"end subroutine inside_if_test\n")
schedule = psyir.children[0]
PTRANS.apply(schedule.children[0].if_body[0])
- gen_code = fortran_writer(psyir).lower()
+ code = fortran_writer(psyir).lower()
assert (" if (do_this) then\n"
- " call profile_psy_data % prestart(" in gen_code)
+ " call profile_psy_data % prestart(" in code)
assert (" call profile_psy_data % postend\n"
- " end if\n" in gen_code)
+ " end if\n" in code)
def test_profile_inside_if2(fortran_reader, fortran_writer):
@@ -217,11 +217,11 @@ def test_profile_inside_if2(fortran_reader, fortran_writer):
"end subroutine inside_if_test\n")
schedule = psyir.children[0]
PTRANS.apply(schedule.children[0].if_body)
- gen_code = fortran_writer(psyir).lower()
+ code = fortran_writer(psyir).lower()
assert (" if (do_this) then\n"
- " call profile_psy_data % prestart(" in gen_code)
+ " call profile_psy_data % prestart(" in code)
assert (" call profile_psy_data % postend\n"
- " end if\n" in gen_code)
+ " end if\n" in code)
def test_profile_single_line_if(fortran_reader, fortran_writer):
@@ -238,16 +238,16 @@ def test_profile_single_line_if(fortran_reader, fortran_writer):
"end subroutine one_line_if_test\n")
schedule = psyir.children[0]
PTRANS.apply(schedule[0].if_body)
- gen_code = fortran_writer(psyir).lower()
+ code = fortran_writer(psyir).lower()
assert (
" if (do_this) then\n"
" call profile_psy_data % prestart(\"one_line_if_test\", \"r0\", 0,"
- " 0)\n"
+ " 0)\n\n"
" ! psyclone codeblock (unsupported code) reason:\n"
" ! - unsupported statement: write_stmt\n"
" write(*, *) sto_tmp2(ji)\n"
" call profile_psy_data % postend\n"
- " end if\n" in gen_code)
+ " end if\n" in code)
def test_profiling_case(fortran_reader, fortran_writer):
diff --git a/src/psyclone/tests/parse/utils_test.py b/src/psyclone/tests/parse/utils_test.py
index e8ec4a1d3d..2df29752c8 100644
--- a/src/psyclone/tests/parse/utils_test.py
+++ b/src/psyclone/tests/parse/utils_test.py
@@ -37,7 +37,6 @@
file.
'''
-from __future__ import absolute_import
import tempfile
import pytest
diff --git a/src/psyclone/tests/psyGen_test.py b/src/psyclone/tests/psyGen_test.py
index 1cf1d6caa5..1ea3723928 100644
--- a/src/psyclone/tests/psyGen_test.py
+++ b/src/psyclone/tests/psyGen_test.py
@@ -63,7 +63,7 @@
from psyclone.psyGen import (TransInfo, Transformation, PSyFactory,
InlinedKern, object_index, HaloExchange, Invoke,
DataAccess, Kern, Arguments, CodedKern, Argument,
- GlobalSum, InvokeSchedule, BuiltIn)
+ GlobalSum, InvokeSchedule)
from psyclone.psyir.nodes import (Assignment, BinaryOperation, Container,
Literal, Loop, Node, KernelSchedule, Call,
colored, Schedule)
@@ -76,8 +76,9 @@
from psyclone.tests.utilities import get_invoke
from psyclone.transformations import (Dynamo0p3RedundantComputationTrans,
Dynamo0p3KernelConstTrans,
+ Dynamo0p3ColourTrans,
Dynamo0p3OMPLoopTrans,
- Dynamo0p3ColourTrans, OMPParallelTrans)
+ OMPParallelTrans)
from psyclone.psyir.backend.visitor import VisitorError
@@ -373,13 +374,15 @@ def test_derived_type_deref_naming(tmpdir):
assert LFRicBuild(tmpdir).code_compiles(psy)
output = (
- " SUBROUTINE invoke_0_testkern_type"
+ " subroutine invoke_0_testkern_type"
"(a, f1_my_field, f1_my_field_1, m1, m2)\n"
- " USE testkern_mod, ONLY: testkern_code\n"
- " USE mesh_mod, ONLY: mesh_type\n"
- " REAL(KIND=r_def), intent(in) :: a\n"
- " TYPE(field_type), intent(in) :: f1_my_field, f1_my_field_1, "
- "m1, m2\n")
+ " use mesh_mod, only : mesh_type\n"
+ " use testkern_mod, only : testkern_code\n"
+ " real(kind=r_def), intent(in) :: a\n"
+ " type(field_type), intent(in) :: f1_my_field\n"
+ " type(field_type), intent(in) :: f1_my_field_1\n"
+ " type(field_type), intent(in) :: m1\n"
+ " type(field_type), intent(in) :: m2\n ")
assert output in generated_code
@@ -435,16 +438,16 @@ def test_invokeschedule_can_be_printed():
assert "InvokeSchedule:\n" in output
-def test_invokeschedule_gen_code_with_preexisting_globals():
- ''' Check the InvokeSchedule gen_code adds pre-existing SymbolTable global
- variables into the generated f2pygen code. Multiple globals imported from
- the same module will be part of a single USE statement.'''
+def test_invokeschedule_lowering_with_preexisting_globals():
+    ''' Check that the InvokeSchedule lowering adds pre-existing SymbolTable
+    global variables. Multiple globals imported from the same module will be
+    part of a single USE statement.'''
_, invoke_info = parse(os.path.join(BASE_PATH,
"15.9.1_X_innerproduct_Y_builtin.f90"),
api="lfric")
psy = PSyFactory("lfric", distributed_memory=True).create(invoke_info)
- # Add some globals into the SymbolTable before calling gen_code()
+ # Add some globals into the SymbolTable before calling the backend
schedule = psy.invokes.invoke_list[0].schedule
my_mod = ContainerSymbol("my_mod")
schedule.symbol_table.add(my_mod)
@@ -453,7 +456,7 @@ def test_invokeschedule_gen_code_with_preexisting_globals():
schedule.symbol_table.add(global1)
schedule.symbol_table.add(global2)
- assert "USE my_mod, ONLY: gvar1, gvar2" in str(psy.gen)
+ assert "use my_mod, only : gvar1, gvar2" in str(psy.gen)
# Kern class test
@@ -529,7 +532,7 @@ def test_codedkern_module_inline_getter_and_setter():
in str(err.value))
-def test_codedkern_module_inline_gen_code(tmpdir):
+def test_codedkern_module_inline_lowering(tmpdir):
''' Check that a CodedKern with module-inline gets copied into the
local module appropriately when the PSy-layer is generated'''
# Use LFRic example with a repeated CodedKern
@@ -543,8 +546,8 @@ def test_codedkern_module_inline_gen_code(tmpdir):
gen = str(psy.gen)
# Without module-inline the subroutine is used by a module import
- assert "USE ru_kernel_mod, ONLY: ru_code" in gen
- assert "SUBROUTINE ru_code(" not in gen
+ assert "use ru_kernel_mod, only : ru_code" in gen
+ assert "subroutine ru_code(" not in gen
# With module-inline the subroutine does not need to be imported
coded_kern.module_inline = True
@@ -557,11 +560,11 @@ def test_codedkern_module_inline_gen_code(tmpdir):
"this module." in str(err.value))
# Create the symbol and try again, it now must succeed
- schedule.ancestor(Container).symbol_table.new_symbol(
+ psy.container.symbol_table.new_symbol(
"ru_code", symbol_type=RoutineSymbol)
gen = str(psy.gen)
- assert "USE ru_kernel_mod, ONLY: ru_code" not in gen
+ assert "use ru_kernel_mod, only : ru_code" not in gen
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -577,7 +580,7 @@ def test_codedkern_module_inline_kernel_in_multiple_invokes(tmpdir):
# By default the kernel is imported once per invoke
gen = str(psy.gen)
- assert gen.count("USE testkern_qr_mod, ONLY: testkern_qr_code") == 2
+ assert gen.count("use testkern_qr_mod, only : testkern_qr_code") == 2
# Module inline kernel in invoke 1
schedule = psy.invokes.invoke_list[0].schedule
@@ -591,7 +594,7 @@ def test_codedkern_module_inline_kernel_in_multiple_invokes(tmpdir):
# After this, one invoke uses the inlined top-level subroutine
# and the other imports it (shadowing the top-level symbol)
- assert gen.count("USE testkern_qr_mod, ONLY: testkern_qr_code") == 1
+ assert gen.count("use testkern_qr_mod, only : testkern_qr_code") == 1
assert LFRicBuild(tmpdir).code_compiles(psy)
# Module inline kernel in invoke 2
@@ -602,7 +605,7 @@ def test_codedkern_module_inline_kernel_in_multiple_invokes(tmpdir):
gen = str(psy.gen)
# After this, no imports are remaining and both use the same
# top-level implementation
- assert gen.count("USE testkern_qr_mod, ONLY: testkern_qr_code") == 0
+ assert gen.count("use testkern_qr_mod, only : testkern_qr_code") == 0
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -722,37 +725,6 @@ def test_inlinedkern_node_str():
assert text == "InlinedKern[]"
-def test_call_abstract_methods():
- ''' Check that calling the abstract methods of Kern raises
- the expected exceptions '''
-
- class KernType:
- ''' temporary dummy class '''
- def __init__(self):
- self.iterates_over = "stuff"
- my_ktype = KernType()
-
- class DummyClass:
- ''' temporary dummy class '''
- def __init__(self, ktype):
- self.module_name = "dummy_module"
- self.ktype = ktype
-
- class DummyArguments(Arguments):
- ''' temporary dummy class '''
- # This is a mock class, we can disable expected pylint warnings
- # pylint: disable=abstract-method, unused-argument
- def __init__(self, call, parent_call, check):
- Arguments.__init__(self, parent_call)
-
- dummy_call = DummyClass(my_ktype)
- my_call = Kern(None, dummy_call, "dummy", DummyArguments)
-
- with pytest.raises(NotImplementedError) as excinfo:
- my_call.gen_code(None)
- assert "Kern.gen_code should be implemented" in str(excinfo.value)
-
-
def test_arguments_abstract():
''' Check that we raise NotImplementedError if any of the virtual methods
of the Arguments class are called. '''
@@ -964,7 +936,7 @@ def test_reduction_var_error(dist_mem):
# args[1] is of type gh_field
call._reduction_arg = call.arguments.args[1]
with pytest.raises(GenerationError) as err:
- call.zero_reduction_variable(None)
+ call.zero_reduction_variable()
assert ("Kern.zero_reduction_variable() should be a scalar but "
"found 'gh_field'." in str(err.value))
@@ -983,13 +955,22 @@ def test_reduction_var_invalid_scalar_error(dist_mem):
schedule = psy.invokes.invoke_list[0].schedule
call = schedule.kernels()[0]
# args[5] is a scalar of data type gh_logical
+ assert call.arguments.args[5].intrinsic_type == 'logical'
call._reduction_arg = call.arguments.args[5]
with pytest.raises(GenerationError) as err:
- call.zero_reduction_variable(None)
+ call.zero_reduction_variable()
assert ("Kern.zero_reduction_variable() should be either a 'real' "
"or an 'integer' scalar but found scalar of type 'logical'."
in str(err.value))
+ # REALs and INTEGERs are fine
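+    # (zero_reduction_variable() completes without raising an exception)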
+ assert call.arguments.args[0].intrinsic_type == 'real'
+ call._reduction_arg = call.arguments.args[0]
+ call.zero_reduction_variable()
+ assert call.arguments.args[6].intrinsic_type == 'integer'
+ call._reduction_arg = call.arguments.args[6]
+ call.zero_reduction_variable()
+
def test_reduction_sum_error(dist_mem):
''' Check that we raise an exception if the reduction_sum_loop()
@@ -1003,7 +984,7 @@ def test_reduction_sum_error(dist_mem):
# args[1] is of type gh_field
call._reduction_arg = call.arguments.args[1]
with pytest.raises(GenerationError) as err:
- call.reduction_sum_loop(None)
+ call.reduction_sum_loop()
assert ("Unsupported reduction access 'gh_inc' found in LFRicBuiltIn:"
"reduction_sum_loop(). Expected one of ['gh_sum']."
in str(err.value))
@@ -1029,67 +1010,6 @@ def test_call_multi_reduction_error(monkeypatch, dist_mem):
"or builtin" in str(err.value))
-def test_reduction_no_set_precision(dist_mem):
- '''Test that the zero_reduction_variable() method generates correct
- code when a reduction argument does not have a defined
- precision. Only a zero value (without precision i.e. 0.0 not
- 0.0_r_def) is generated in this case.
-
- '''
- _, invoke_info = parse(
- os.path.join(BASE_PATH, "15.8.1_sum_X_builtin.f90"),
- api="lfric")
- psy = PSyFactory("lfric",
- distributed_memory=dist_mem).create(invoke_info)
-
- # A reduction argument will always have a precision value so we
- # need to monkeypatch it.
- schedule = psy.invokes.invoke_list[0].schedule
- builtin = schedule.walk(BuiltIn)[0]
- arg = builtin.arguments.args[0]
- arg._precision = ""
-
- generated_code = str(psy.gen)
-
- if dist_mem:
- zero_sum_decls = (
- " USE scalar_mod, ONLY: scalar_type\n"
- " USE mesh_mod, ONLY: mesh_type\n"
- " REAL, intent(out) :: asum\n"
- " TYPE(field_type), intent(in) :: f1\n"
- " TYPE(scalar_type) global_sum\n"
- " INTEGER(KIND=i_def) df\n")
- else:
- zero_sum_decls = (
- " REAL, intent(out) :: asum\n"
- " TYPE(field_type), intent(in) :: f1\n"
- " INTEGER(KIND=i_def) df\n")
- assert zero_sum_decls in generated_code
-
- zero_sum_output = (
- " ! Zero summation variables\n"
- " !\n"
- " asum = 0.0\n")
- assert zero_sum_output in generated_code
-
-
-def test_invokes_wrong_schedule_gen_code():
- ''' Check that the invoke.schedule reference points to an InvokeSchedule
- when using the gen_code. Otherwise rise an error. '''
- # Use LFRic example with a repeated CodedKern
- _, invoke_info = parse(
- os.path.join(BASE_PATH, "4.6_multikernel_invokes.f90"),
- api="lfric")
- psy = PSyFactory("lfric", distributed_memory=False).create(invoke_info)
-
- # Set the invoke.schedule to something else other than a InvokeSchedule
- psy.invokes.invoke_list[0].schedule = Node()
- with pytest.raises(GenerationError) as err:
- _ = psy.gen
- assert ("An invoke.schedule element of the invoke_list is a 'Node', "
- "but it should be an 'InvokeSchedule'." in str(err.value))
-
-
def test_invoke_name():
''' Check that specifying the name of an invoke in the Algorithm
layer results in a correctly-named routine in the PSy layer '''
@@ -1099,7 +1019,7 @@ def test_invoke_name():
psy = PSyFactory("lfric", distributed_memory=True).create(invoke_info)
gen = str(psy.gen)
- assert "SUBROUTINE invoke_important_invoke" in gen
+ assert "subroutine invoke_important_invoke" in gen
def test_multi_kern_named_invoke(tmpdir):
@@ -1111,7 +1031,7 @@ def test_multi_kern_named_invoke(tmpdir):
psy = PSyFactory("lfric", distributed_memory=True).create(invoke_info)
gen = str(psy.gen)
- assert "SUBROUTINE invoke_some_name" in gen
+ assert "subroutine invoke_some_name" in gen
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -1125,8 +1045,8 @@ def test_named_multi_invokes(tmpdir):
psy = PSyFactory("lfric", distributed_memory=True).create(invoke_info)
gen = str(psy.gen)
- assert "SUBROUTINE invoke_my_first(" in gen
- assert "SUBROUTINE invoke_my_second(" in gen
+ assert "subroutine invoke_my_first(" in gen
+ assert "subroutine invoke_my_second(" in gen
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -1139,9 +1059,10 @@ def test_named_invoke_name_clash(tmpdir):
api="lfric")
psy = PSyFactory("lfric", distributed_memory=True).create(invoke_info)
gen = str(psy.gen)
- assert ("SUBROUTINE invoke_a(invoke_a_1, b, istp, rdt, d, e, ascalar, "
+
+ assert ("subroutine invoke_a(invoke_a_1, b, istp, rdt, d, e, ascalar, "
"f, c, g, qr)") in gen
- assert "TYPE(field_type), intent(in) :: invoke_a_1" in gen
+ assert "type(field_type), intent(in) :: invoke_a_1" in gen
assert LFRicBuild(tmpdir).code_compiles(psy)
@@ -1165,7 +1086,7 @@ def test_invalid_reprod_pad_size(monkeypatch, dist_mem):
otrans.apply(schedule.children[0], {"reprod": True})
# Apply an OpenMP Parallel directive around the OpenMP do directive
rtrans.apply(schedule.children[0])
- with pytest.raises(GenerationError) as excinfo:
+ with pytest.raises(VisitorError) as excinfo:
_ = str(psy.gen)
assert (
f"REPROD_PAD_SIZE in {Config.get().filename} should be a positive "
diff --git a/src/psyclone/tests/psyad/adjoint_visitor_test.py b/src/psyclone/tests/psyad/adjoint_visitor_test.py
index 0e0ba98c96..42d37d9550 100644
--- a/src/psyclone/tests/psyad/adjoint_visitor_test.py
+++ b/src/psyclone/tests/psyad/adjoint_visitor_test.py
@@ -37,7 +37,6 @@
adjoint_visitor.py file within the psyad directory.
'''
-from __future__ import absolute_import
import logging
import pytest
diff --git a/src/psyclone/tests/psyad/main_test.py b/src/psyclone/tests/psyad/main_test.py
index 0821e16e8f..68fc4686de 100644
--- a/src/psyclone/tests/psyad/main_test.py
+++ b/src/psyclone/tests/psyad/main_test.py
@@ -132,17 +132,22 @@
! initialise the kernel arguments and keep copies of them
call random_number(field)
field_input = field
+
! call the tangent-linear kernel
call kern(field)
+
! compute the inner product of the results of the tangent-linear kernel
inner1 = 0.0
inner1 = inner1 + field * field
+
! call the adjoint of the kernel
call adj_kern(field)
+
! compute inner product of results of adjoint kernel with the original \
inputs to the tangent-linear kernel
inner2 = 0.0
inner2 = inner2 + field * field_input
+
! test the inner-product values for equality, allowing for the precision \
of the active variables
machinetol = spacing(max(abs(inner1), abs(inner2)))
diff --git a/src/psyclone/tests/psyad/tl2ad_test.py b/src/psyclone/tests/psyad/tl2ad_test.py
index d84fa09f1f..bf30ebe776 100644
--- a/src/psyclone/tests/psyad/tl2ad_test.py
+++ b/src/psyclone/tests/psyad/tl2ad_test.py
@@ -708,18 +708,23 @@ def test_generate_adjoint_test(fortran_reader, fortran_writer):
" real, dimension(npts) :: field_input" in harness)
assert (" call random_number(field)\n"
" field_input = field\n"
+ "\n"
" ! call the tangent-linear kernel\n"
" call kern(field, npts)\n"
+ "\n"
" ! compute the inner product of the results of the tangent-"
"linear kernel\n"
" inner1 = 0.0\n"
" inner1 = inner1 + dot_product(field, field)\n"
+ "\n"
" ! call the adjoint of the kernel\n"
" call adj_kern(field, npts)\n"
+ "\n"
" ! compute inner product of results of adjoint kernel with "
"the original inputs to the tangent-linear kernel\n"
" inner2 = 0.0\n"
" inner2 = inner2 + dot_product(field, field_input)\n"
+ "\n"
" ! test the inner-product values for equality, allowing for "
"the precision of the active variables\n"
" machinetol = spacing(max(abs(inner1), abs(inner2)))\n"
diff --git a/src/psyclone/tests/psyad/transformations/test_adjoint_trans.py b/src/psyclone/tests/psyad/transformations/test_adjoint_trans.py
index fc63ef2a65..2ffbf56673 100644
--- a/src/psyclone/tests/psyad/transformations/test_adjoint_trans.py
+++ b/src/psyclone/tests/psyad/transformations/test_adjoint_trans.py
@@ -34,7 +34,6 @@
#
'''Module to test the psyad adjoint base class transformation.'''
-from __future__ import absolute_import
import pytest
from psyclone.psyir.symbols import DataSymbol, REAL_TYPE
diff --git a/src/psyclone/tests/psyad/transformations/test_preprocess.py b/src/psyclone/tests/psyad/transformations/test_preprocess.py
index 217ae0b607..7ea5243c47 100644
--- a/src/psyclone/tests/psyad/transformations/test_preprocess.py
+++ b/src/psyclone/tests/psyad/transformations/test_preprocess.py
@@ -210,6 +210,7 @@ def test_preprocess_arrayassign2loop(tmpdir, fortran_reader, fortran_writer):
" enddo\n"
" d(1,1,1) = 0.0\n"
" e(:,:,:) = f(:,:,:)\n"
+ "\n"
" ! PSyclone CodeBlock (unsupported code) reason:\n"
" ! - Unsupported statement: Print_Stmt\n"
" PRINT *, \"hello\"\n\n"
diff --git a/src/psyclone/tests/psyir/backend/fortran_format_stmt_test.py b/src/psyclone/tests/psyir/backend/fortran_format_stmt_test.py
index 90f33be4b7..e731af6f4e 100644
--- a/src/psyclone/tests/psyir/backend/fortran_format_stmt_test.py
+++ b/src/psyclone/tests/psyir/backend/fortran_format_stmt_test.py
@@ -36,7 +36,6 @@
'''Module containing pytest tests for the handling of CodeBlocks containing
Fortran Format statements in the backend.'''
-from __future__ import absolute_import
from psyclone.psyir.nodes import Routine
diff --git a/src/psyclone/tests/psyir/backend/fortran_test.py b/src/psyclone/tests/psyir/backend/fortran_test.py
index ab06d947ae..6abec4f6fa 100644
--- a/src/psyclone/tests/psyir/backend/fortran_test.py
+++ b/src/psyclone/tests/psyir/backend/fortran_test.py
@@ -453,8 +453,8 @@ def test_gen_typedecl(fortran_writer):
"end type my_type\n")
private_tsymbol = DataTypeSymbol("my_type", dtype,
Symbol.Visibility.PRIVATE)
- gen_code = fortran_writer.gen_typedecl(private_tsymbol)
- assert gen_code.startswith("type, private :: my_type\n")
+ code = fortran_writer.gen_typedecl(private_tsymbol)
+ assert code.startswith("type, private :: my_type\n")
def test_reverse_map():
@@ -702,6 +702,13 @@ def test_fw_gen_vardecl(fortran_writer):
result = fortran_writer.gen_vardecl(symbol)
assert result == "integer, save :: dummy3a = 10\n"
+ # Generic symbol
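+    # (i.e. a Symbol without a datatype) must be rejected by gen_vardecl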
+ symbol = Symbol("dummy1")
+ with pytest.raises(VisitorError) as excinfo:
+ _ = fortran_writer.gen_vardecl(symbol)
+ assert ("Symbol 'dummy1' must be a symbol with a datatype in order to "
+ "use 'gen_vardecl'." in str(excinfo.value))
+
# Use statement
symbol = DataSymbol("dummy1", UnresolvedType(),
interface=ImportInterface(
@@ -1578,7 +1585,7 @@ def test_fw_codeblock_1(fortran_reader, fortran_writer, tmpdir):
# Generate Fortran from the PSyIR
result = fortran_writer(psyir)
assert (
- " a = 1\n"
+ " a = 1\n\n"
" ! PSyclone CodeBlock (unsupported code) reason:\n"
" ! - Unsupported statement: Print_Stmt\n"
" ! - Unsupported statement: Print_Stmt\n"
@@ -1908,7 +1915,7 @@ def test_fw_comments(fortran_writer):
" ! My routine preceding comment\n"
" subroutine my_routine()\n\n"
" ! My statement with a preceding comment\n"
- " return\n"
+ " return\n\n"
" ! My statement with a\n"
" ! multi-line comment.\n"
" return ! ... and an inline comment\n"
diff --git a/src/psyclone/tests/psyir/backend/language_writer_test.py b/src/psyclone/tests/psyir/backend/language_writer_test.py
index a04e53490c..84fa7db764 100644
--- a/src/psyclone/tests/psyir/backend/language_writer_test.py
+++ b/src/psyclone/tests/psyir/backend/language_writer_test.py
@@ -37,8 +37,6 @@
'''Performs pytest tests on the psyclone.psyir.backend.language_writer
module.'''
-from __future__ import absolute_import
-
import pytest
from psyclone.psyir.backend.language_writer import LanguageWriter
diff --git a/src/psyclone/tests/psyir/backend/psyir_openmp_test.py b/src/psyclone/tests/psyir/backend/psyir_openmp_test.py
index f861f10202..3dc00cacb0 100644
--- a/src/psyclone/tests/psyir/backend/psyir_openmp_test.py
+++ b/src/psyclone/tests/psyir/backend/psyir_openmp_test.py
@@ -37,7 +37,6 @@
'''Performs pytest tests on the psyclone.psyir.backend.fortran and c module'''
-from __future__ import absolute_import
import pytest
from psyclone.errors import GenerationError
from psyclone.psyir.nodes import Assignment, Reference
diff --git a/src/psyclone/tests/psyir/commentable_mixin_test.py b/src/psyclone/tests/psyir/commentable_mixin_test.py
index fdebd9cff0..87ecc2cf33 100644
--- a/src/psyclone/tests/psyir/commentable_mixin_test.py
+++ b/src/psyclone/tests/psyir/commentable_mixin_test.py
@@ -36,7 +36,6 @@
''' Performs py.test tests on CommentableMixin PSyIR nodes. '''
-from __future__ import absolute_import
import pytest
from psyclone.psyir.nodes import Return, Routine, Container
diff --git a/src/psyclone/tests/psyir/frontend/fparser2_comment_test.py b/src/psyclone/tests/psyir/frontend/fparser2_comment_test.py
index 906cee3d73..e343041b7c 100644
--- a/src/psyclone/tests/psyir/frontend/fparser2_comment_test.py
+++ b/src/psyclone/tests/psyir/frontend/fparser2_comment_test.py
@@ -343,8 +343,10 @@ def test_comments_and_codeblocks(last_comments_as_codeblocks):
! Comment on assignment 'a = 1'
! and second line
a = 1
+
! Comment on call 'call test_sub()'
call test_sub()
+
! Comment on if block 'if (a == 1) then'
if (a == 1) then
! Comment on assignment 'a = 2'
@@ -361,10 +363,12 @@ def test_comments_and_codeblocks(last_comments_as_codeblocks):
! Comment on 'end if' => CodeBlock
end if
end if ! Inline comment on 'end if'
+
! Comment on loop 'do i = 1, 10'
do i = 1, 10, 1
! Comment on assignment 'a = 5'
a = 5
+
! Comment on loop 'do j = 1, 10'
do j = 1, 10, 1
! Comment on assignment 'a = 6'
@@ -373,6 +377,7 @@ def test_comments_and_codeblocks(last_comments_as_codeblocks):
enddo ! Inline comment on 'end do j = 1, 10'
! Comment at end of loop on i => CodeBlock
enddo ! Inline comment on 'end do i = 1, 10'
+
! Comment on 'do while (a < 10)'
do while (a < 10)
! Comment on assignment 'a = 7'
@@ -418,8 +423,10 @@ def test_comments_and_codeblocks(last_comments_as_codeblocks):
! Comment on assignment 'a = 1'
! and second line
a = 1
+
! Comment on call 'call test_sub()'
call test_sub()
+
! Comment on if block 'if (a == 1) then'
if (a == 1) then
! Comment on assignment 'a = 2'
@@ -433,16 +440,19 @@ def test_comments_and_codeblocks(last_comments_as_codeblocks):
a = 4
end if
end if ! Inline comment on 'end if'
+
! Comment on loop 'do i = 1, 10'
do i = 1, 10, 1
! Comment on assignment 'a = 5'
a = 5
+
! Comment on loop 'do j = 1, 10'
do j = 1, 10, 1
! Comment on assignment 'a = 6'
a = 6
enddo ! Inline comment on 'end do j = 1, 10'
enddo ! Inline comment on 'end do i = 1, 10'
+
! Comment on 'do while (a < 10)'
do while (a < 10)
! Comment on assignment 'a = 7'
diff --git a/src/psyclone/tests/psyir/frontend/fparser2_format_stmt_test.py b/src/psyclone/tests/psyir/frontend/fparser2_format_stmt_test.py
index 8c86430a0c..99ebe002bb 100644
--- a/src/psyclone/tests/psyir/frontend/fparser2_format_stmt_test.py
+++ b/src/psyclone/tests/psyir/frontend/fparser2_format_stmt_test.py
@@ -36,7 +36,6 @@
'''Module containing pytest tests for the handling of labelled format
statements.'''
-from __future__ import absolute_import
from fparser.two import Fortran2003
from psyclone.psyir.nodes import Container, Routine, CodeBlock
diff --git a/src/psyclone/tests/psyir/frontend/fparser2_main_program_handler_test.py b/src/psyclone/tests/psyir/frontend/fparser2_main_program_handler_test.py
index b9dbb951b9..5b75a7206c 100644
--- a/src/psyclone/tests/psyir/frontend/fparser2_main_program_handler_test.py
+++ b/src/psyclone/tests/psyir/frontend/fparser2_main_program_handler_test.py
@@ -38,7 +38,6 @@
of the fparser2 Main_Program construct to PSyIR.
'''
-from __future__ import absolute_import
import pytest
from fparser.common.readfortran import FortranStringReader
diff --git a/src/psyclone/tests/psyir/frontend/fparser2_program_handler_test.py b/src/psyclone/tests/psyir/frontend/fparser2_program_handler_test.py
index fffc577def..ea5b84eca7 100644
--- a/src/psyclone/tests/psyir/frontend/fparser2_program_handler_test.py
+++ b/src/psyclone/tests/psyir/frontend/fparser2_program_handler_test.py
@@ -38,8 +38,6 @@
the class Fparser2Reader. This handler deals with the translation of
the fparser2 Program construct to PSyIR.'''
-from __future__ import absolute_import
-
import pytest
from fparser.common.readfortran import FortranStringReader
diff --git a/src/psyclone/tests/psyir/frontend/fparser2_subscript_triplet_handler_test.py b/src/psyclone/tests/psyir/frontend/fparser2_subscript_triplet_handler_test.py
index d691ab253f..86ee69e5cb 100644
--- a/src/psyclone/tests/psyir/frontend/fparser2_subscript_triplet_handler_test.py
+++ b/src/psyclone/tests/psyir/frontend/fparser2_subscript_triplet_handler_test.py
@@ -38,7 +38,6 @@
PSyIR front-end. Also tests the associated utility used to create arguments
for the LBOUND and UBOUND array-query operations. '''
-from __future__ import absolute_import
import pytest
from fparser.two.Fortran2003 import Execution_Part
from fparser.common.readfortran import FortranStringReader
diff --git a/src/psyclone/tests/psyir/nodes/acc_directives_test.py b/src/psyclone/tests/psyir/nodes/acc_directives_test.py
index a9bcca8eb3..db9d9c30e4 100644
--- a/src/psyclone/tests/psyir/nodes/acc_directives_test.py
+++ b/src/psyclone/tests/psyir/nodes/acc_directives_test.py
@@ -44,7 +44,6 @@
from psyclone.core import Signature
from psyclone.errors import GenerationError
-from psyclone.f2pygen import ModuleGen
from psyclone.parse.algorithm import parse
from psyclone.psyGen import PSyFactory
from psyclone.psyir.nodes import (ACCKernelsDirective,
@@ -54,6 +53,7 @@
ACCRoutineDirective,
ACCUpdateDirective,
ACCAtomicDirective,
+ ACCDirective,
Assignment,
Literal,
Reference,
@@ -115,8 +115,8 @@ def test_accregiondir_signatures():
# Class ACCEnterDataDirective start
-# (1/4) Method gen_code
-def test_accenterdatadirective_gencode_1():
+# (1/4) Method lower_to_language_level
+def test_accenterdatadirective_lowering_1():
'''Test that an OpenACC Enter Data directive, when added to a schedule
with a single loop, raises the expected exception as there is no
following OpenACC Parallel or OpenACC Kernels directive as at
@@ -129,24 +129,16 @@ def test_accenterdatadirective_gencode_1():
psy = PSyFactory(api=API, distributed_memory=False).create(info)
sched = psy.invokes.get('invoke_0_testkern_type').schedule
acc_enter_trans.apply(sched)
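+    # Lower the directive and check that begin_string() raises the expected
+    # error since there is no data to copy in.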
+ directive = sched.walk(ACCDirective)[0].lower_to_language_level()
with pytest.raises(GenerationError) as excinfo:
- str(psy.gen)
- assert ("ACCEnterData directive did not find any data to copyin. Perhaps "
- "there are no ACCParallel or ACCKernels directives within the "
- "region?" in str(excinfo.value))
-
- # Test that the same error is produced by the begin_string() which is used
- # by the PSyIR backend
- sched[0].lower_to_language_level()
- with pytest.raises(GenerationError) as excinfo:
- sched[0].begin_string()
+ directive.begin_string()
assert ("ACCEnterData directive did not find any data to copyin. Perhaps "
"there are no ACCParallel or ACCKernels directives within the "
"region?" in str(excinfo.value))
-# (2/4) Method gen_code
-def test_accenterdatadirective_gencode_2():
+# (2/4) Method lower_to_language_level
+def test_accenterdatadirective_lowering_2():
'''Test that an OpenACC Enter Data directive, when added to a schedule
with multiple loops, raises the expected exception, as there is no
following OpenACC Parallel or OpenACCKernels directive and at
@@ -166,9 +158,9 @@ def test_accenterdatadirective_gencode_2():
"region?" in str(excinfo.value))
-# (3/4) Method gen_code
+# (3/4) Method lower_to_language_level
@pytest.mark.parametrize("trans", [ACCParallelTrans, ACCKernelsTrans])
-def test_accenterdatadirective_gencode_3(trans):
+def test_accenterdatadirective_lowering_3(trans):
'''Test that an OpenACC Enter Data directive, when added to a schedule
with a single loop, produces the expected code (there should be
"copy in" data as there is a following OpenACC parallel or kernels
@@ -185,18 +177,18 @@ def test_accenterdatadirective_gencode_3(trans):
acc_enter_trans.apply(sched)
code = str(psy.gen)
assert (
- " !$acc enter data copyin(f1_data,f2_data,m1_data,m2_data,"
+ " !$acc enter data copyin(f1_data,f2_data,m1_data,m2_data,"
"map_w1,map_w2,map_w3,ndf_w1,ndf_w2,ndf_w3,nlayers_f1,"
"undf_w1,undf_w2,undf_w3)\n" in code)
-# (4/4) Method gen_code
+# (4/4) Method lower_to_language_level
@pytest.mark.parametrize("trans1,trans2",
[(ACCParallelTrans, ACCParallelTrans),
(ACCParallelTrans, ACCKernelsTrans),
(ACCKernelsTrans, ACCParallelTrans),
(ACCKernelsTrans, ACCKernelsTrans)])
-def test_accenterdatadirective_gencode_4(trans1, trans2):
+def test_accenterdatadirective_lowering_4(trans1, trans2):
'''Test that an OpenACC Enter Data directive, when added to a schedule
with multiple loops and multiple OpenACC parallel and/or Kernel
directives, produces the expected code (when the same argument is
@@ -216,7 +208,7 @@ def test_accenterdatadirective_gencode_4(trans1, trans2):
acc_enter_trans.apply(sched)
code = str(psy.gen)
assert (
- " !$acc enter data copyin(f1_data,f2_data,f3_data,m1_data,"
+ " !$acc enter data copyin(f1_data,f2_data,f3_data,m1_data,"
"m2_data,map_w1,map_w2,map_w3,ndf_w1,ndf_w2,ndf_w3,"
"nlayers_f1,undf_w1,undf_w2,undf_w3)\n" in code)
@@ -400,11 +392,11 @@ def test_acckernelsdirective_init():
assert not directive._default_present
-# (1/1) Method gen_code
+# (1/1) Method lower_to_language_level
@pytest.mark.parametrize("default_present", [False, True])
-def test_acckernelsdirective_gencode(default_present):
- '''Check that the gen_code method in the ACCKernelsDirective class
- generates the expected code. Use the lfric API.
+def test_acckernelsdirective_lowering(default_present):
+ '''Check that the lower_to_language_level method in the ACCKernelsDirective
+ class generates the expected code. Use the lfric API.
'''
API = "lfric"
@@ -420,11 +412,11 @@ def test_acckernelsdirective_gencode(default_present):
if default_present:
string = " default(present)"
assert (
- f" !$acc kernels{string}\n"
- f" DO cell = loop0_start, loop0_stop, 1\n" in code)
+ f" !$acc kernels{string}\n"
+ f" do cell = loop0_start, loop0_stop, 1\n" in code)
assert (
- " END DO\n"
- " !$acc end kernels\n" in code)
+ " enddo\n"
+ " !$acc end kernels\n" in code)
def test_acckerneldirective_equality():
@@ -453,10 +445,6 @@ def test_acc_routine_directive_constructor_and_strings():
assert target.begin_string() == "acc routine seq"
assert str(target) == "ACCRoutineDirective[]"
- temporary_module = ModuleGen("test")
- target.gen_code(temporary_module)
- assert "!$acc routine seq\n" in str(temporary_module.root)
-
target2 = ACCRoutineDirective("VECTOR")
assert target2.parallelism == "vector"
assert target2.begin_string() == "acc routine vector"
diff --git a/src/psyclone/tests/psyir/nodes/array_of_structures_member_test.py b/src/psyclone/tests/psyir/nodes/array_of_structures_member_test.py
index 80aa0abec7..a9ed229744 100644
--- a/src/psyclone/tests/psyir/nodes/array_of_structures_member_test.py
+++ b/src/psyclone/tests/psyir/nodes/array_of_structures_member_test.py
@@ -37,7 +37,6 @@
''' This module contains pytest tests for the ArrayOfStructuresMember
class. '''
-from __future__ import absolute_import
import pytest
from psyclone.psyir import symbols, nodes
from psyclone.errors import GenerationError, InternalError
diff --git a/src/psyclone/tests/psyir/nodes/assignment_test.py b/src/psyclone/tests/psyir/nodes/assignment_test.py
index b23f633a2e..7588d68f54 100644
--- a/src/psyclone/tests/psyir/nodes/assignment_test.py
+++ b/src/psyclone/tests/psyir/nodes/assignment_test.py
@@ -41,7 +41,6 @@
import pytest
from psyclone.errors import InternalError, GenerationError
-from psyclone.f2pygen import ModuleGen
from psyclone.psyir.backend.fortran import FortranWriter
from psyclone.psyir.nodes import (
Assignment, Reference, Literal, ArrayReference, Range, StructureReference,
@@ -323,26 +322,6 @@ def test_is_not_array_assignment():
assert assignment.is_array_assignment is False
-def test_assignment_gen_code():
- '''Test that the gen_code method in the Assignment class produces the
- expected Fortran code.
-
- TODO #1648: This is just needed for coverage of the gen_code, that in turn
- is needed because another test (profiling_node tests) uses it. But gen_code
- is deprecated and this test should be removed when the gen_code is not used
- in any other test.
-
- '''
- lhs = Reference(DataSymbol("tmp", REAL_SINGLE_TYPE))
- rhs = Literal("0.0", REAL_SINGLE_TYPE)
- assignment = Assignment.create(lhs, rhs)
- check_links(assignment, [lhs, rhs])
- module = ModuleGen("test")
- assignment.gen_code(module)
- code = str(module.root)
- assert "tmp = 0.0\n" in code
-
-
def test_pointer_assignment():
''' Test that pointer assignments work as expected '''
lhs = Reference(Symbol("var1"))
diff --git a/src/psyclone/tests/psyir/nodes/directive_test.py b/src/psyclone/tests/psyir/nodes/directive_test.py
index 30b077f8ed..95c18db703 100644
--- a/src/psyclone/tests/psyir/nodes/directive_test.py
+++ b/src/psyclone/tests/psyir/nodes/directive_test.py
@@ -39,16 +39,15 @@
''' Performs py.test tests on the PSyIR Directive node. '''
import os
-import pytest
from collections import OrderedDict
+import pytest
-from psyclone import f2pygen
from psyclone.core import Signature
from psyclone.errors import GenerationError
from psyclone.parse.algorithm import parse
from psyclone.psyGen import PSyFactory
from psyclone.psyir import nodes
-from psyclone.psyir.symbols import DataSymbol, INTEGER_TYPE
+from psyclone.psyir.symbols import INTEGER_TYPE
from psyclone.transformations import ACCDataTrans, DynamoOMPParallelLoopTrans
BASE_PATH = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(
@@ -192,30 +191,6 @@ def test_regiondirective_children_validation():
"format is: 'Schedule'." in str(excinfo.value))
-@pytest.mark.usefixtures("dist_mem")
-def test_regiondirective_gen_post_region_code():
- '''Test that the RegionDirective.gen_post_region_code() method does
- nothing for language-level PSyIR.
-
- TODO #1648 - this can be removed when the gen_post_region_code() method is
- removed.'''
- temporary_module = f2pygen.ModuleGen("test")
- subroutine = nodes.Routine.create("testsub")
- directive = nodes.RegionDirective()
- sym = subroutine.symbol_table.new_symbol(
- "i", symbol_type=DataSymbol, datatype=INTEGER_TYPE)
- loop = nodes.Loop.create(sym,
- nodes.Literal("1", INTEGER_TYPE),
- nodes.Literal("10", INTEGER_TYPE),
- nodes.Literal("1", INTEGER_TYPE), [])
- directive.dir_body.addchild(loop)
- subroutine.addchild(directive)
- directive.gen_post_region_code(temporary_module)
- # No nodes should have been added to the tree.
- assert len(temporary_module.children) == 1
- assert isinstance(temporary_module.children[0], f2pygen.ImplicitNoneGen)
-
-
def test_standalonedirective_children_validation():
'''Test that children cannot be added to StandaloneDirective.'''
cdir = nodes.StandaloneDirective()
diff --git a/src/psyclone/tests/psyir/nodes/extract_node_test.py b/src/psyclone/tests/psyir/nodes/extract_node_test.py
index d559a71c6e..9e9b9ce655 100644
--- a/src/psyclone/tests/psyir/nodes/extract_node_test.py
+++ b/src/psyclone/tests/psyir/nodes/extract_node_test.py
@@ -68,8 +68,8 @@ def test_extract_node_constructor():
assert en.extract_body is schedule
-def test_extract_node_gen_code():
- '''Test the ExtractNode's gen_code function if there is no ReadWriteInfo
+def test_extract_node_lowering(fortran_writer):
+ '''Test the ExtractNode's lowering function if there is no ReadWriteInfo
object specified in the options. Since the transformations will always
do that, we need to manually insert the ExtractNode into a schedule:
@@ -83,29 +83,29 @@ def test_extract_node_gen_code():
en.addchild(Schedule(children=[loop]))
invoke.schedule.addchild(en)
- code = str(invoke.gen())
+ code = fortran_writer(invoke.schedule)
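+    # Note that the PSyData calls now appear as e.g. "psydata % PreStart".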
expected = [
- 'CALL psydata%PreStart("single_invoke_psy", '
+ 'CALL psydata % PreStart("single_invoke_psy", '
'"invoke_important_invoke-testkern_code-r0", 17, 2)',
- 'CALL psydata%PreDeclareVariable("a", a)',
- 'CALL psydata%PreDeclareVariable("f1_data", f1_data)',
- 'CALL psydata%PreDeclareVariable("f2_data", f2_data)',
- 'CALL psydata%PreDeclareVariable("loop0_start", loop0_start)',
- 'CALL psydata%PreDeclareVariable("loop0_stop", loop0_stop)',
- 'CALL psydata%PreDeclareVariable("m1_data", m1_data)',
- 'CALL psydata%PreDeclareVariable("m2_data", m2_data)',
- 'CALL psydata%PreDeclareVariable("map_w1", map_w1)',
- 'CALL psydata%PreDeclareVariable("map_w2", map_w2)',
- 'CALL psydata%PreDeclareVariable("map_w3", map_w3)',
- 'CALL psydata%PreDeclareVariable("ndf_w1", ndf_w1)',
- 'CALL psydata%PreDeclareVariable("ndf_w2", ndf_w2)',
- 'CALL psydata%PreDeclareVariable("ndf_w3", ndf_w3)',
- 'CALL psydata%PreDeclareVariable("nlayers_f1", nlayers_f1)',
- 'CALL psydata%PreDeclareVariable("undf_w1", undf_w1)',
- 'CALL psydata%PreDeclareVariable("undf_w2", undf_w2)',
- 'CALL psydata%PreDeclareVariable("undf_w3", undf_w3)',
- 'CALL psydata%PreDeclareVariable("cell_post", cell)',
- 'CALL psydata%PreDeclareVariable("f1_data_post", f1_data)']
+ 'CALL psydata % PreDeclareVariable("a", a)',
+ 'CALL psydata % PreDeclareVariable("f1_data", f1_data)',
+ 'CALL psydata % PreDeclareVariable("f2_data", f2_data)',
+ 'CALL psydata % PreDeclareVariable("loop0_start", loop0_start)',
+ 'CALL psydata % PreDeclareVariable("loop0_stop", loop0_stop)',
+ 'CALL psydata % PreDeclareVariable("m1_data", m1_data)',
+ 'CALL psydata % PreDeclareVariable("m2_data", m2_data)',
+ 'CALL psydata % PreDeclareVariable("map_w1", map_w1)',
+ 'CALL psydata % PreDeclareVariable("map_w2", map_w2)',
+ 'CALL psydata % PreDeclareVariable("map_w3", map_w3)',
+ 'CALL psydata % PreDeclareVariable("ndf_w1", ndf_w1)',
+ 'CALL psydata % PreDeclareVariable("ndf_w2", ndf_w2)',
+ 'CALL psydata % PreDeclareVariable("ndf_w3", ndf_w3)',
+ 'CALL psydata % PreDeclareVariable("nlayers_f1", nlayers_f1)',
+ 'CALL psydata % PreDeclareVariable("undf_w1", undf_w1)',
+ 'CALL psydata % PreDeclareVariable("undf_w2", undf_w2)',
+ 'CALL psydata % PreDeclareVariable("undf_w3", undf_w3)',
+ 'CALL psydata % PreDeclareVariable("cell_post", cell)',
+ 'CALL psydata % PreDeclareVariable("f1_data_post", f1_data)']
for line in expected:
assert line in code
@@ -165,51 +165,51 @@ def test_extract_node_lower_to_language_level():
code = str(psy.gen)
output = (
- """CALL extract_psy_data % """
- """PreStart("psy_single_invoke_three_kernels", "invoke_0-compute_cu_"""
- """code-r0", 9, 3)
- CALL extract_psy_data % PreDeclareVariable("cu_fld%internal%xstart", """
- """cu_fld % internal % xstart)
- CALL extract_psy_data % PreDeclareVariable("cu_fld%internal%xstop", """
- """cu_fld % internal % xstop)
- CALL extract_psy_data % PreDeclareVariable("cu_fld%internal%ystart", """
- """cu_fld % internal % ystart)
- CALL extract_psy_data % PreDeclareVariable("cu_fld%internal%ystop", """
- """cu_fld % internal % ystop)
- CALL extract_psy_data % PreDeclareVariable("p_fld", p_fld)
- CALL extract_psy_data % PreDeclareVariable("u_fld", u_fld)
- CALL extract_psy_data % PreDeclareVariable("cu_fld", cu_fld)
- CALL extract_psy_data % PreDeclareVariable("i", i)
- CALL extract_psy_data % PreDeclareVariable("j", j)
- CALL extract_psy_data % PreDeclareVariable("cu_fld_post", cu_fld)
- CALL extract_psy_data % PreDeclareVariable("i_post", i)
- CALL extract_psy_data % PreDeclareVariable("j_post", j)
- CALL extract_psy_data % PreEndDeclaration
- CALL extract_psy_data % ProvideVariable("cu_fld%internal%xstart", """
- """cu_fld % internal % xstart)
- CALL extract_psy_data % ProvideVariable("cu_fld%internal%xstop", """
- """cu_fld % internal % xstop)
- CALL extract_psy_data % ProvideVariable("cu_fld%internal%ystart", """
- """cu_fld % internal % ystart)
- CALL extract_psy_data % ProvideVariable("cu_fld%internal%ystop", """
- """cu_fld % internal % ystop)
- CALL extract_psy_data % ProvideVariable("p_fld", p_fld)
- CALL extract_psy_data % ProvideVariable("u_fld", u_fld)
- CALL extract_psy_data % ProvideVariable("cu_fld", cu_fld)
- CALL extract_psy_data % ProvideVariable("i", i)
- CALL extract_psy_data % ProvideVariable("j", j)
- CALL extract_psy_data % PreEnd
- DO j = cu_fld%internal%ystart, cu_fld%internal%ystop, 1
- DO i = cu_fld%internal%xstart, cu_fld%internal%xstop, 1
- CALL compute_cu_code(i, j, cu_fld%data, p_fld%data, u_fld%data)
- END DO
- END DO
- CALL extract_psy_data % PostStart
- CALL extract_psy_data % ProvideVariable("cu_fld_post", cu_fld)
- CALL extract_psy_data % ProvideVariable("i_post", i)
- CALL extract_psy_data % ProvideVariable("j_post", j)
- CALL extract_psy_data % PostEnd
- """)
+ """CALL extract_psy_data % """
+ """PreStart("psy_single_invoke_three_kernels", "invoke_0-compute_cu_"""
+ """code-r0", 9, 3)
+ CALL extract_psy_data % PreDeclareVariable("cu_fld%internal%xstart", """
+ """cu_fld % internal % xstart)
+ CALL extract_psy_data % PreDeclareVariable("cu_fld%internal%xstop", """
+ """cu_fld % internal % xstop)
+ CALL extract_psy_data % PreDeclareVariable("cu_fld%internal%ystart", """
+ """cu_fld % internal % ystart)
+ CALL extract_psy_data % PreDeclareVariable("cu_fld%internal%ystop", """
+ """cu_fld % internal % ystop)
+ CALL extract_psy_data % PreDeclareVariable("p_fld", p_fld)
+ CALL extract_psy_data % PreDeclareVariable("u_fld", u_fld)
+ CALL extract_psy_data % PreDeclareVariable("cu_fld", cu_fld)
+ CALL extract_psy_data % PreDeclareVariable("i", i)
+ CALL extract_psy_data % PreDeclareVariable("j", j)
+ CALL extract_psy_data % PreDeclareVariable("cu_fld_post", cu_fld)
+ CALL extract_psy_data % PreDeclareVariable("i_post", i)
+ CALL extract_psy_data % PreDeclareVariable("j_post", j)
+ CALL extract_psy_data % PreEndDeclaration
+ CALL extract_psy_data % ProvideVariable("cu_fld%internal%xstart", """
+ """cu_fld % internal % xstart)
+ CALL extract_psy_data % ProvideVariable("cu_fld%internal%xstop", """
+ """cu_fld % internal % xstop)
+ CALL extract_psy_data % ProvideVariable("cu_fld%internal%ystart", """
+ """cu_fld % internal % ystart)
+ CALL extract_psy_data % ProvideVariable("cu_fld%internal%ystop", """
+ """cu_fld % internal % ystop)
+ CALL extract_psy_data % ProvideVariable("p_fld", p_fld)
+ CALL extract_psy_data % ProvideVariable("u_fld", u_fld)
+ CALL extract_psy_data % ProvideVariable("cu_fld", cu_fld)
+ CALL extract_psy_data % ProvideVariable("i", i)
+ CALL extract_psy_data % ProvideVariable("j", j)
+ CALL extract_psy_data % PreEnd
+ do j = cu_fld%internal%ystart, cu_fld%internal%ystop, 1
+ do i = cu_fld%internal%xstart, cu_fld%internal%xstop, 1
+ call compute_cu_code(i, j, cu_fld%data, p_fld%data, u_fld%data)
+ enddo
+ enddo
+ CALL extract_psy_data % PostStart
+ CALL extract_psy_data % ProvideVariable("cu_fld_post", cu_fld)
+ CALL extract_psy_data % ProvideVariable("i_post", i)
+ CALL extract_psy_data % ProvideVariable("j_post", j)
+ CALL extract_psy_data % PostEnd
+ """)
assert output in code
@@ -224,60 +224,55 @@ def test_extract_node_gen():
idx=0, dist_mem=False)
etrans.apply(invoke.schedule.children[0])
code = str(psy.gen)
- output = ''' ! ExtractStart
- !
- CALL extract_psy_data%PreStart("single_invoke_psy", \
+ output = '''CALL extract_psy_data % PreStart("single_invoke_psy", \
"invoke_0_testkern_type-testkern_code-r0", 18, 2)
- CALL extract_psy_data%PreDeclareVariable("a", a)
- CALL extract_psy_data%PreDeclareVariable("f1_data", f1_data)
- CALL extract_psy_data%PreDeclareVariable("f2_data", f2_data)
- CALL extract_psy_data%PreDeclareVariable("loop0_start", loop0_start)
- CALL extract_psy_data%PreDeclareVariable("loop0_stop", loop0_stop)
- CALL extract_psy_data%PreDeclareVariable("m1_data", m1_data)
- CALL extract_psy_data%PreDeclareVariable("m2_data", m2_data)
- CALL extract_psy_data%PreDeclareVariable("map_w1", map_w1)
- CALL extract_psy_data%PreDeclareVariable("map_w2", map_w2)
- CALL extract_psy_data%PreDeclareVariable("map_w3", map_w3)
- CALL extract_psy_data%PreDeclareVariable("ndf_w1", ndf_w1)
- CALL extract_psy_data%PreDeclareVariable("ndf_w2", ndf_w2)
- CALL extract_psy_data%PreDeclareVariable("ndf_w3", ndf_w3)
- CALL extract_psy_data%PreDeclareVariable("nlayers_f1", nlayers_f1)
- CALL extract_psy_data%PreDeclareVariable("undf_w1", undf_w1)
- CALL extract_psy_data%PreDeclareVariable("undf_w2", undf_w2)
- CALL extract_psy_data%PreDeclareVariable("undf_w3", undf_w3)
- CALL extract_psy_data%PreDeclareVariable("cell", cell)
- CALL extract_psy_data%PreDeclareVariable("cell_post", cell)
- CALL extract_psy_data%PreDeclareVariable("f1_data_post", f1_data)
- CALL extract_psy_data%PreEndDeclaration
- CALL extract_psy_data%ProvideVariable("a", a)
- CALL extract_psy_data%ProvideVariable("f1_data", f1_data)
- CALL extract_psy_data%ProvideVariable("f2_data", f2_data)
- CALL extract_psy_data%ProvideVariable("loop0_start", loop0_start)
- CALL extract_psy_data%ProvideVariable("loop0_stop", loop0_stop)
- CALL extract_psy_data%ProvideVariable("m1_data", m1_data)
- CALL extract_psy_data%ProvideVariable("m2_data", m2_data)
- CALL extract_psy_data%ProvideVariable("map_w1", map_w1)
- CALL extract_psy_data%ProvideVariable("map_w2", map_w2)
- CALL extract_psy_data%ProvideVariable("map_w3", map_w3)
- CALL extract_psy_data%ProvideVariable("ndf_w1", ndf_w1)
- CALL extract_psy_data%ProvideVariable("ndf_w2", ndf_w2)
- CALL extract_psy_data%ProvideVariable("ndf_w3", ndf_w3)
- CALL extract_psy_data%ProvideVariable("nlayers_f1", nlayers_f1)
- CALL extract_psy_data%ProvideVariable("undf_w1", undf_w1)
- CALL extract_psy_data%ProvideVariable("undf_w2", undf_w2)
- CALL extract_psy_data%ProvideVariable("undf_w3", undf_w3)
- CALL extract_psy_data%ProvideVariable("cell", cell)
- CALL extract_psy_data%PreEnd
- DO cell = loop0_start, loop0_stop, 1
- CALL testkern_code(nlayers_f1, a, f1_data, f2_data, ''' + \
- "m1_data, m2_data, ndf_w1, undf_w1, " + \
- "map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, " + \
- '''undf_w3, map_w3(:,cell))
- END DO
- CALL extract_psy_data%PostStart
- CALL extract_psy_data%ProvideVariable("cell_post", cell)
- CALL extract_psy_data%ProvideVariable("f1_data_post", f1_data)
- CALL extract_psy_data%PostEnd
- !
- ! ExtractEnd'''
+ CALL extract_psy_data % PreDeclareVariable("a", a)
+ CALL extract_psy_data % PreDeclareVariable("f1_data", f1_data)
+ CALL extract_psy_data % PreDeclareVariable("f2_data", f2_data)
+ CALL extract_psy_data % PreDeclareVariable("loop0_start", loop0_start)
+ CALL extract_psy_data % PreDeclareVariable("loop0_stop", loop0_stop)
+ CALL extract_psy_data % PreDeclareVariable("m1_data", m1_data)
+ CALL extract_psy_data % PreDeclareVariable("m2_data", m2_data)
+ CALL extract_psy_data % PreDeclareVariable("map_w1", map_w1)
+ CALL extract_psy_data % PreDeclareVariable("map_w2", map_w2)
+ CALL extract_psy_data % PreDeclareVariable("map_w3", map_w3)
+ CALL extract_psy_data % PreDeclareVariable("ndf_w1", ndf_w1)
+ CALL extract_psy_data % PreDeclareVariable("ndf_w2", ndf_w2)
+ CALL extract_psy_data % PreDeclareVariable("ndf_w3", ndf_w3)
+ CALL extract_psy_data % PreDeclareVariable("nlayers_f1", nlayers_f1)
+ CALL extract_psy_data % PreDeclareVariable("undf_w1", undf_w1)
+ CALL extract_psy_data % PreDeclareVariable("undf_w2", undf_w2)
+ CALL extract_psy_data % PreDeclareVariable("undf_w3", undf_w3)
+ CALL extract_psy_data % PreDeclareVariable("cell", cell)
+ CALL extract_psy_data % PreDeclareVariable("cell_post", cell)
+ CALL extract_psy_data % PreDeclareVariable("f1_data_post", f1_data)
+ CALL extract_psy_data % PreEndDeclaration
+ CALL extract_psy_data % ProvideVariable("a", a)
+ CALL extract_psy_data % ProvideVariable("f1_data", f1_data)
+ CALL extract_psy_data % ProvideVariable("f2_data", f2_data)
+ CALL extract_psy_data % ProvideVariable("loop0_start", loop0_start)
+ CALL extract_psy_data % ProvideVariable("loop0_stop", loop0_stop)
+ CALL extract_psy_data % ProvideVariable("m1_data", m1_data)
+ CALL extract_psy_data % ProvideVariable("m2_data", m2_data)
+ CALL extract_psy_data % ProvideVariable("map_w1", map_w1)
+ CALL extract_psy_data % ProvideVariable("map_w2", map_w2)
+ CALL extract_psy_data % ProvideVariable("map_w3", map_w3)
+ CALL extract_psy_data % ProvideVariable("ndf_w1", ndf_w1)
+ CALL extract_psy_data % ProvideVariable("ndf_w2", ndf_w2)
+ CALL extract_psy_data % ProvideVariable("ndf_w3", ndf_w3)
+ CALL extract_psy_data % ProvideVariable("nlayers_f1", nlayers_f1)
+ CALL extract_psy_data % ProvideVariable("undf_w1", undf_w1)
+ CALL extract_psy_data % ProvideVariable("undf_w2", undf_w2)
+ CALL extract_psy_data % ProvideVariable("undf_w3", undf_w3)
+ CALL extract_psy_data % ProvideVariable("cell", cell)
+ CALL extract_psy_data % PreEnd
+ do cell = loop0_start, loop0_stop, 1
+ call testkern_code(nlayers_f1, a, f1_data, f2_data, m1_data, m2_data, \
+ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, \
+undf_w3, map_w3(:,cell))
+ enddo
+ CALL extract_psy_data % PostStart
+ CALL extract_psy_data % ProvideVariable("cell_post", cell)
+ CALL extract_psy_data % ProvideVariable("f1_data_post", f1_data)
+ CALL extract_psy_data % PostEnd'''
assert output in code
diff --git a/src/psyclone/tests/psyir/nodes/if_block_test.py b/src/psyclone/tests/psyir/nodes/if_block_test.py
index 4f73958dbb..68f949d849 100644
--- a/src/psyclone/tests/psyir/nodes/if_block_test.py
+++ b/src/psyclone/tests/psyir/nodes/if_block_test.py
@@ -38,7 +38,6 @@
''' Performs py.test tests on the IfBlock PSyIR node. '''
-from __future__ import absolute_import
import pytest
from psyclone.psyir.nodes import IfBlock, Literal, Reference, Schedule, \
Return, Assignment
diff --git a/src/psyclone/tests/psyir/nodes/kernel_schedule_test.py b/src/psyclone/tests/psyir/nodes/kernel_schedule_test.py
index c02091cb04..a43d612152 100644
--- a/src/psyclone/tests/psyir/nodes/kernel_schedule_test.py
+++ b/src/psyclone/tests/psyir/nodes/kernel_schedule_test.py
@@ -37,7 +37,6 @@
''' Performs py.test tests on the KernelSchedule class. '''
-from __future__ import absolute_import
from psyclone.psyir.nodes import Assignment, Reference, Literal, \
KernelSchedule, Container
from psyclone.psyir.symbols import SymbolTable, DataSymbol, REAL_TYPE, \
diff --git a/src/psyclone/tests/psyir/nodes/literal_test.py b/src/psyclone/tests/psyir/nodes/literal_test.py
index 0fc78984e5..02a9156b2a 100644
--- a/src/psyclone/tests/psyir/nodes/literal_test.py
+++ b/src/psyclone/tests/psyir/nodes/literal_test.py
@@ -144,7 +144,7 @@ def test_literal_init_invalid_3(value):
with pytest.raises(ValueError) as err:
Literal(value, INTEGER_SINGLE_TYPE)
assert (f"A scalar integer literal value must conform to the "
- f"supported format ('(([+-]?[0-9]+)|(NOT_INITIALISED))') "
+ f"supported format ('([+-]?[0-9]+)') "
f"but found '{value}'." in str(err.value))
diff --git a/src/psyclone/tests/psyir/nodes/loop_test.py b/src/psyclone/tests/psyir/nodes/loop_test.py
index 725d5e0393..e859ad39dd 100644
--- a/src/psyclone/tests/psyir/nodes/loop_test.py
+++ b/src/psyclone/tests/psyir/nodes/loop_test.py
@@ -39,11 +39,8 @@
''' Performs py.test tests on the Loop PSyIR node. '''
-import os
import pytest
from psyclone.errors import InternalError, GenerationError
-from psyclone.parse.algorithm import parse
-from psyclone.psyGen import PSyFactory
from psyclone.psyir.nodes import (
Assignment, Loop, Literal, Schedule, Return, Reference, Routine)
from psyclone.psyir.symbols import (
@@ -138,15 +135,6 @@ def test_loop_navigation_properties():
with pytest.raises(InternalError) as err:
_ = loop.loop_body
assert error_str in str(err.value)
- with pytest.raises(InternalError) as err:
- loop.start_expr = Literal("NOT_INITIALISED", INTEGER_SINGLE_TYPE)
- assert error_str in str(err.value)
- with pytest.raises(InternalError) as err:
- loop.stop_expr = Literal("NOT_INITIALISED", INTEGER_SINGLE_TYPE)
- assert error_str in str(err.value)
- with pytest.raises(InternalError) as err:
- loop.step_expr = Literal("NOT_INITIALISED", INTEGER_SINGLE_TYPE)
- assert error_str in str(err.value)
# Check that Getters properties work
loop.addchild(Schedule(parent=loop))
@@ -252,30 +240,6 @@ def test_loop_independent_iterations():
assert "variable 'tmp' is only written once" in str(msgs[0])
-def test_loop_gen_code():
- ''' Check that the Loop gen_code method prints the proper loop '''
- base_path = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(
- os.path.abspath(__file__)))), "test_files", "dynamo0p3")
- _, invoke_info = parse(os.path.join(base_path,
- "1.0.1_single_named_invoke.f90"),
- api="lfric")
- psy = PSyFactory("lfric", distributed_memory=True).create(invoke_info)
-
- # By default LFRicLoop has step = 1 and it is not printed in the Fortran DO
- gen = str(psy.gen)
- assert "loop0_start = 1" in gen
- assert "loop0_stop = mesh%get_last_halo_cell(1)" in gen
- assert "DO cell = loop0_start, loop0_stop" in gen
-
- # Change step to 2
- loop = psy.invokes.get('invoke_important_invoke').schedule[4]
- loop.step_expr = Literal("2", INTEGER_SINGLE_TYPE)
-
- # Now it is printed in the Fortran DO with the expression ",2" at the end
- gen = str(psy.gen)
- assert "DO cell = loop0_start, loop0_stop, 2" in gen
-
-
def test_invalid_loop_annotations():
''' Check that the Loop constructor validates any supplied annotations. '''
# Check that we can have 'was_where' on its own
diff --git a/src/psyclone/tests/psyir/nodes/omp_directives_test.py b/src/psyclone/tests/psyir/nodes/omp_directives_test.py
index 6219634d84..5f699bc009 100644
--- a/src/psyclone/tests/psyir/nodes/omp_directives_test.py
+++ b/src/psyclone/tests/psyir/nodes/omp_directives_test.py
@@ -42,7 +42,6 @@
import os
import pytest
from psyclone.errors import UnresolvedDependencyError
-from psyclone.f2pygen import ModuleGen
from psyclone.parse.algorithm import parse
from psyclone.psyGen import PSyFactory
from psyclone.psyir import nodes
@@ -165,118 +164,6 @@ def test_ompparallel_lowering(fortran_reader, monkeypatch):
"found: 'a' which is not in a depend clause." in str(err.value))
-def test_ompparallel_gen_code_clauses(monkeypatch):
- ''' Check that the OMP Parallel region clauses are generated
- appropriately. '''
-
- # Check with an LFRic kernel, the cell variable must be private
- _, invoke_info = parse(os.path.join(BASE_PATH, "1_single_invoke_w3.f90"),
- api="lfric")
- psy = PSyFactory("lfric", distributed_memory=False).create(invoke_info)
- tree = psy.invokes.invoke_list[0].schedule
- ptrans = OMPParallelTrans()
- tdir = OMPDoDirective()
- loops = tree.walk(Loop)
- loop = loops[0]
- parent = loop.parent
- loop.detach()
- tdir.children[0].addchild(loop)
- parent.addchild(tdir, index=0)
- ptrans.apply(loops[0].parent.parent)
-
- assert isinstance(tree.children[0], OMPParallelDirective)
- pdir = tree.children[0]
- code = str(psy.gen).lower()
- assert len(pdir.children) == 4
- assert "private(cell)" in code
-
- # Check that making a change (add private k variable) after the first
- # time psy.gen is called recomputes the clauses attributes
- new_loop = pdir.children[0].children[0].children[0].children[0].copy()
- routine = pdir.ancestor(Routine)
- routine.symbol_table.add(DataSymbol("k", INTEGER_SINGLE_TYPE))
- # Change the loop variable to j
- jvar = DataSymbol("k", INTEGER_SINGLE_TYPE)
- new_loop.variable = jvar
- tdir2 = OMPDoDirective()
- tdir2.children[0].addchild(new_loop)
- # Add loop
- pdir.children[0].addchild(tdir2)
-
- code = str(psy.gen).lower()
- assert "private(cell,k)" in code
-
- # Monkeypatch a case with private and firstprivate clauses
- monkeypatch.setattr(pdir, "infer_sharing_attributes",
- lambda: ({Symbol("a")}, {Symbol("b")}, None))
-
- code = str(psy.gen).lower()
- assert "private(a)" in code
- assert "firstprivate(b)" in code
-
- # Monkeypatch a case with shared variables that need synchronisation
- monkeypatch.setattr(pdir, "infer_sharing_attributes",
- lambda: ({}, {}, {Symbol("a")}))
- with pytest.raises(GenerationError) as err:
- code = str(psy.gen).lower()
- assert ("OMPParallelDirective.gen_code() does not support symbols that "
- "need synchronisation, but found: ['a']" in str(err.value))
-
-
-def test_omp_paralleldo_clauses_gen_code(monkeypatch):
- ''' Check that the OMP ParallelDo clauses are generated
- appropriately. '''
-
- # Check with an LFRic kernel, the cell variable must be private
- _, invoke_info = parse(os.path.join(BASE_PATH, "1_single_invoke_w3.f90"),
- api="lfric")
- psy = PSyFactory("lfric", distributed_memory=False).create(invoke_info)
- tree = psy.invokes.invoke_list[0].schedule
- ptrans = OMPParallelLoopTrans()
- loops = tree.walk(Loop)
- ptrans.apply(loops[0])
-
- assert isinstance(tree.children[0], OMPParallelDoDirective)
- pdir = tree.children[0]
- code = str(psy.gen).lower()
- assert len(pdir.children) == 2
- assert "private(cell)" in code
- assert "schedule(auto)" in code
- assert "firstprivate" not in code
-
- # Check that making a change (add private k variable) after the first
- # time psy.gen is called recomputes the clauses attributes
- routine = pdir.ancestor(Routine)
- routine.symbol_table.add(DataSymbol("k", INTEGER_SINGLE_TYPE))
- # Change the loop variable to k
- kvar = DataSymbol("k", INTEGER_SINGLE_TYPE)
- pdir.children[0].children[0].variable = kvar
- # Change the schedule to 'none'
- pdir.omp_schedule = "none"
-
- # No 'schedule' clause should now be present on the OMP directive.
- code = str(psy.gen).lower()
- assert "schedule(" not in code
- assert "private(k)" in code
- assert "firstprivate" not in code
-
- # Monkeypatch a case with firstprivate clauses
- monkeypatch.setattr(pdir, "infer_sharing_attributes",
- lambda: ({Symbol("a")}, {Symbol("b")}, None))
-
- code = str(psy.gen).lower()
- assert "private(a)" in code
- assert "firstprivate(b)" in code
-
- # Monkeypatch a case with shared variables that need synchronisation
- monkeypatch.setattr(pdir, "infer_sharing_attributes",
- lambda: ({}, {}, {Symbol("a")}))
- with pytest.raises(GenerationError) as err:
- code = str(psy.gen).lower()
- assert ("OMPParallelDoDirective.gen_code() does not support symbols that "
- "need synchronisation, but found: ['a']" in str(err.value))
-
-
def test_omp_parallel_do_lowering(fortran_reader, monkeypatch):
''' Check that lowering an OMP Parallel Do leaves it with the
appropriate begin_string and clauses for the backend to generate
@@ -1264,27 +1151,6 @@ def test_omp_single_nested_validate_global_constraints(monkeypatch):
"region") in str(excinfo.value)
-@pytest.mark.parametrize("nowait", [False, True])
-def test_omp_single_gencode(nowait):
- '''Check that the gen_code method in the OMPSingleDirective class
- generates the expected code.
- '''
- subroutine = Routine.create("testsub")
- temporary_module = ModuleGen("test")
- parallel = OMPParallelDirective.create()
- single = OMPSingleDirective(nowait=nowait)
- parallel.dir_body.addchild(single)
- subroutine.addchild(parallel)
- parallel.gen_code(temporary_module)
-
- clauses = ""
- if nowait:
- clauses += " nowait"
-
- assert "!$omp single" + clauses + "\n" in str(temporary_module.root)
- assert "!$omp end single\n" in str(temporary_module.root)
-
-
def test_omp_master_strings():
''' Test the begin_string and end_string methods of the OMPMaster
directive '''
@@ -1294,22 +1160,6 @@ def test_omp_master_strings():
assert omp_master.end_string() == "omp end master"
-def test_omp_master_gencode():
- '''Check that the gen_code method in the OMPMasterDirective class
- generates the expected code.
- '''
- subroutine = Routine.create("testsub")
- temporary_module = ModuleGen("test")
- parallel = OMPParallelDirective.create()
- master = OMPMasterDirective()
- parallel.dir_body.addchild(master)
- subroutine.addchild(parallel)
- parallel.gen_code(temporary_module)
-
- assert "!$omp master\n" in str(temporary_module.root)
- assert "!$omp end master\n" in str(temporary_module.root)
-
-
def test_omp_master_validate_global_constraints():
''' Test the validate_global_constraints method of the OMPMaster
directive '''
@@ -1360,21 +1210,6 @@ def test_omptaskwait_strings():
assert taskwait.begin_string() == "omp taskwait"
-def test_omptaskwait_gencode():
- '''Check that the gen_code method in the OMPTaskwaitDirective
- class generates the expected code.
- '''
- subroutine = Routine.create("testsub")
- temporary_module = ModuleGen("test")
- parallel = OMPParallelDirective.create()
- directive = OMPTaskwaitDirective()
- parallel.dir_body.addchild(directive)
- subroutine.addchild(parallel)
- parallel.gen_code(temporary_module)
-
- assert "!$omp taskwait\n" in str(temporary_module.root)
-
-
def test_omp_taskwait_validate_global_constraints():
''' Test the validate_global_constraints method of the OMPTaskwait
directive '''
@@ -1390,6 +1225,9 @@ def test_omp_taskwait_validate_global_constraints():
assert ("OMPTaskwaitDirective must be inside an OMP parallel region but "
"could not find an ancestor OMPParallelDirective node"
in str(excinfo.value))
+ parallel = OMPParallelDirective.create(children=[taskwait.detach()])
+ schedule.addchild(parallel, 0)
+ taskwait.validate_global_constraints()
def test_omp_taskwait_clauses():
@@ -1419,37 +1257,12 @@ def test_omp_taskloop_init():
OMPTaskloopDirective(grainsize=32, num_tasks=32)
assert ("OMPTaskloopDirective must not have both grainsize and "
"numtasks clauses specified.") in str(excinfo.value)
-
-
-@pytest.mark.parametrize("grainsize,num_tasks,nogroup,clauses",
- [(None, None, False, ""),
- (32, None, False, " grainsize(32)"),
- (None, 32, True, " num_tasks(32), nogroup")])
-def test_omp_taskloop_gencode(grainsize, num_tasks, nogroup, clauses):
- '''Check that the gen_code method in the OMPTaskloopDirective
- class generates the expected code.
- '''
- temporary_module = ModuleGen("test")
- subroutine = Routine.create("testsub")
- parallel = OMPParallelDirective.create()
- single = OMPSingleDirective()
- directive = OMPTaskloopDirective(grainsize=grainsize, num_tasks=num_tasks,
- nogroup=nogroup)
- parallel.dir_body.addchild(single)
- single.dir_body.addchild(directive)
- sym = subroutine.symbol_table.new_symbol(
- "i", symbol_type=DataSymbol, datatype=INTEGER_TYPE)
- loop = Loop.create(sym,
- Literal("1", INTEGER_TYPE),
- Literal("10", INTEGER_TYPE),
- Literal("1", INTEGER_TYPE),
- [])
- directive.dir_body.addchild(loop)
- subroutine.addchild(parallel)
- parallel.gen_code(temporary_module)
-
- assert "!$omp taskloop" + clauses + "\n" in str(temporary_module.root)
- assert "!$omp end taskloop\n" in str(temporary_module.root)
+ tl1 = OMPTaskloopDirective(grainsize=32)
+ assert tl1.walk(OMPGrainsizeClause)
+ assert not tl1.walk(OMPNumTasksClause)
+ tl2 = OMPTaskloopDirective(num_tasks=32)
+ assert not tl2.walk(OMPGrainsizeClause)
+ assert tl2.walk(OMPNumTasksClause)
@pytest.mark.parametrize("nogroup", [False, True])
@@ -1530,11 +1343,6 @@ def test_omp_declare_target_directive_constructor_and_strings(monkeypatch):
assert target.begin_string() == "omp declare target"
assert str(target) == "OMPDeclareTargetDirective[]"
- monkeypatch.setattr(target, "validate_global_constraints", lambda: None)
- temporary_module = ModuleGen("test")
- target.gen_code(temporary_module)
- assert "!$omp declare target\n" in str(temporary_module.root)
-
def test_omp_declare_target_directive_validate_global_constraints():
''' Test the OMPDeclareTargetDirective is only valid as the first child
@@ -4759,34 +4567,3 @@ def test_omp_serial_check_dependency_valid_pairing():
assert test_dir._check_dependency_pairing_valid(
array_reference1, array_reference2, None, None
)
-
-
-def test_omptarget_gen_code():
- ''' Check that the OMPTarget gen_code produces the right code '''
- _, invoke_info = parse(os.path.join(BASE_PATH, "1_single_invoke.f90"),
- api="lfric")
- psy = PSyFactory("lfric", distributed_memory=True).create(invoke_info)
- schedule = psy.invokes.invoke_list[0].schedule
- kern = schedule.children[-1]
-
- # Add an OMPTarget and move the kernel inside it
- target = OMPTargetDirective()
- schedule.addchild(target)
- target.dir_body.addchild(kern.detach())
-
- # Check that the "omp target" is produced, and that the set_dirty is
- # generated after it
- code = str(psy.gen)
- assert """
- !$omp target
- DO cell = loop0_start, loop0_stop, 1
- CALL testkern_code(nlayers_f1, a, f1_data, f2_data, m1_data, \
-m2_data, ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), \
-ndf_w3, undf_w3, map_w3(:,cell))
- END DO
- !$omp end target
- !
- ! Set halos dirty/clean for fields modified in the above loop(s)
- !
- CALL f1_proxy%set_dirty()
- """ in code
diff --git a/src/psyclone/tests/psyir/nodes/profile_node_test.py b/src/psyclone/tests/psyir/nodes/profile_node_test.py
index 05522f8251..dc9c7591f4 100644
--- a/src/psyclone/tests/psyir/nodes/profile_node_test.py
+++ b/src/psyclone/tests/psyir/nodes/profile_node_test.py
@@ -193,12 +193,12 @@ def test_lower_to_lang_level_multi_node():
ptree = cblocks[0].get_ast_nodes
code = str(ptree[0]).lower()
assert ("call profile_psy_data % prestart(\"psy_single_invoke_two_"
- "kernels\", \"invoke_0-r0\"" in code)
+ "kernels\", \"invoke_0-compute_cu_code-r0\"" in code)
assert cblocks[0].annotations == ["psy-data-start"]
assert cblocks[1].annotations == []
ptree = cblocks[2].get_ast_nodes
code = str(ptree[0]).lower()
assert ("call profile_psy_data_1 % prestart(\"psy_single_invoke_two_"
- "kernels\", \"invoke_0-r1\"" in code)
+ "kernels\", \"invoke_0-time_smooth_code-r1\"" in code)
assert cblocks[2].annotations == ["psy-data-start"]
assert cblocks[3].annotations == []
diff --git a/src/psyclone/tests/psyir/nodes/psy_data_node_test.py b/src/psyclone/tests/psyir/nodes/psy_data_node_test.py
index cf7564a6d4..da2b8b19f7 100644
--- a/src/psyclone/tests/psyir/nodes/psy_data_node_test.py
+++ b/src/psyclone/tests/psyir/nodes/psy_data_node_test.py
@@ -42,8 +42,6 @@
from psyclone.domain.lfric.transformations import LFRicExtractTrans
from psyclone.errors import InternalError, GenerationError
-from psyclone.f2pygen import ModuleGen
-from psyclone.psyGen import Kern
from psyclone.psyir.nodes import (
CodeBlock, PSyDataNode, Schedule, Return, Routine)
from psyclone.parse import ModuleManager
@@ -52,6 +50,7 @@
from psyclone.psyir.symbols import (
ContainerSymbol, ImportInterface, SymbolTable, DataTypeSymbol,
UnresolvedType, DataSymbol, UnsupportedFortranType)
+from psyclone.psyGen import Kern
from psyclone.tests.utilities import get_base_path, get_invoke
@@ -311,7 +310,7 @@ def test_psy_data_node_incorrect_container():
# -----------------------------------------------------------------------------
-def test_psy_data_node_invokes_gocean1p0():
+def test_psy_data_node_invokes_gocean1p0(fortran_writer):
'''Check that an invoke is instrumented correctly
'''
_, invoke = get_invoke("test11_different_iterates_over_one_invoke.f90",
@@ -323,7 +322,7 @@ def test_psy_data_node_invokes_gocean1p0():
# Convert the invoke to code, and remove all new lines, to make
# regex matching easier
- code = str(invoke.gen()).replace("\n", "")
+ code = fortran_writer(invoke.schedule).replace("\n", "")
# First a simple test that the nesting is correct - the
# PSyData regions include both loops. Note that indeed
# the function 'compute_cv_code' is in the module file
@@ -331,9 +330,9 @@ def test_psy_data_node_invokes_gocean1p0():
# Since this is only PSyData, which by default does not supply
# variable information, the parameters to PreStart are both 0.
correct_re = ("subroutine invoke.*"
- "use psy_data_mod, only: PSyDataType.*"
- r"TYPE\(PSyDataType\), target, save :: psy_data.*"
- r"call psy_data%PreStart\(\"psy_single_invoke_different"
+ "use psy_data_mod, only : PSyDataType.*"
+ r"type\(PSyDataType\), save, target :: psy_data.*"
+ r"CALL psy_data % PreStart\(\"psy_single_invoke_different"
r"_iterates_over\", \"invoke_0-compute_cv_code-r0\","
r" 0, 0\).*"
"do j.*"
@@ -341,76 +340,16 @@ def test_psy_data_node_invokes_gocean1p0():
"call.*"
"end.*"
"end.*"
- r"call psy_data%PostEnd")
+ r"call psy_data % PostEnd")
assert re.search(correct_re, code, re.I) is not None
# Check that if gen() is called more than once the same PSyDataNode
# variables and region names are created:
- code_again = str(invoke.gen()).replace("\n", "")
+ code_again = fortran_writer(invoke.schedule).replace("\n", "")
assert code == code_again
-# -----------------------------------------------------------------------------
-def test_psy_data_node_options():
- '''Check that the options for PSyData work as expected.
- '''
- _, invoke = get_invoke("test11_different_iterates_over_one_invoke.f90",
- "gocean", idx=0, dist_mem=False)
- schedule = invoke.schedule
- data_trans = PSyDataTrans()
-
- data_trans.apply(schedule[0].loop_body)
- data_node = schedule[0].loop_body[0]
- assert isinstance(data_node, PSyDataNode)
-
- # 1) Test that the listed variables will appear in the list
- # ---------------------------------------------------------
- mod = ModuleGen(None, "test")
- data_node.gen_code(mod, options={"pre_var_list": [("", "a")],
- "post_var_list": [("", "b")]})
-
- out = "\n".join([str(i.root) for i in mod.children])
- expected = ['CALL psy_data%PreDeclareVariable("a", a)',
- 'CALL psy_data%PreDeclareVariable("b", b)',
- 'CALL psy_data%ProvideVariable("a", a)',
- 'CALL psy_data%PostStart',
- 'CALL psy_data%ProvideVariable("b", b)']
- for line in expected:
- assert line in out
-
- # 2) Test that variables suffixes are added as expected
- # -----------------------------------------------------
- mod = ModuleGen(None, "test")
- data_node.gen_code(mod, options={"pre_var_list": [("", "a")],
- "post_var_list": [("", "b")],
- "pre_var_postfix": "_pre",
- "post_var_postfix": "_post"})
-
- out = "\n".join([str(i.root) for i in mod.children])
- expected = ['CALL psy_data%PreDeclareVariable("a_pre", a)',
- 'CALL psy_data%PreDeclareVariable("b_post", b)',
- 'CALL psy_data%ProvideVariable("a_pre", a)',
- 'CALL psy_data%PostStart',
- 'CALL psy_data%ProvideVariable("b_post", b)']
- for line in expected:
- assert line in out
-
- # 3) Check that we don't get any declaration if there are no variables:
- # ---------------------------------------------------------------------
- mod = ModuleGen(None, "test")
- data_node.gen_code(mod, options={})
-
- out = "\n".join([str(i.root) for i in mod.children])
- # Only PreStart and PostEnd should appear
- assert "PreStart" in out
- assert "PreDeclareVariable" not in out
- assert "ProvideVariable" not in out
- assert "PreEnd" not in out
- assert "PostStart" not in out
- assert "PostEnd" in out
-
-
def test_psy_data_node_children_validation():
'''Test that children added to PSyDataNode are validated. PSyDataNode
accepts just one Schedule as its child.
@@ -497,7 +436,7 @@ def test_psy_data_node_lower_to_language_level_with_options():
codeblocks = schedule.walk(CodeBlock)
expected = ['CALL psy_data % PreStart("psy_single_invoke_different_'
- 'iterates_over", "invoke_0-r0", 1, 1)',
+ 'iterates_over", "invoke_0-compute_cv_code-r0", 1, 1)',
'CALL psy_data % PreDeclareVariable("a", a)',
'CALL psy_data % PreDeclareVariable("b", b)',
'CALL psy_data % PreEndDeclaration',
@@ -526,7 +465,7 @@ def test_psy_data_node_lower_to_language_level_with_options():
codeblocks = schedule.walk(CodeBlock)
expected = ['CALL psy_data % PreStart("psy_single_invoke_different_'
- 'iterates_over", "invoke_0-r0", 1, 1)',
+ 'iterates_over", "invoke_0-compute_cv_code-r0", 1, 1)',
'CALL psy_data % PreDeclareVariable("a_pre", a)',
'CALL psy_data % PreDeclareVariable("b_post", b)',
'CALL psy_data % PreEndDeclaration',
@@ -568,33 +507,7 @@ def test_psy_data_node_name_clash(fortran_writer):
options={"create_driver": True,
"region_name": ("import", "test")})
- # First test, use the old-style gen_code way:
- # -------------------------------------------
- code = str(invoke.gen())
-
- # Make sure the imported, clashing symbols 'f1' and 'f2' are renamed:
- assert "USE module_with_name_clash_mod, ONLY: f1_data_1=>f1_data" in code
- assert "USE module_with_name_clash_mod, ONLY: f2_data_1=>f2_data" in code
- assert ('CALL extract_psy_data%PreDeclareVariable("f1_data@'
- 'module_with_name_clash_mod", f1_data_1)' in code)
- assert ('CALL extract_psy_data%ProvideVariable("f1_data@'
- 'module_with_name_clash_mod", f1_data_1)' in code)
- assert ('CALL extract_psy_data%PreDeclareVariable("f2_data@'
- 'module_with_name_clash_mod", f2_data_1)' in code)
- assert ('CALL extract_psy_data%PreDeclareVariable("f2_data_post@'
- 'module_with_name_clash_mod", f2_data_1)' in code)
- assert ('CALL extract_psy_data%ProvideVariable("f2_data@'
- 'module_with_name_clash_mod", f2_data_1)' in code)
- assert ('CALL extract_psy_data%ProvideVariable("f2_data_post@'
- 'module_with_name_clash_mod", f2_data_1)' in code)
-
- # Second test, use lower_to_language_level:
- # -----------------------------------------
- invoke.schedule.children[0].lower_to_language_level()
-
- # Note that atm we cannot fortran_writer() the schedule, LFRic does not
- # yet fully support this. So we just lower each line individually:
- code = "".join([fortran_writer(i) for i in invoke.schedule.children])
+ code = fortran_writer(invoke.schedule)
assert ('CALL extract_psy_data % PreDeclareVariable("f1_data_post", '
'f1_data)' in code)
@@ -602,25 +515,21 @@ def test_psy_data_node_name_clash(fortran_writer):
'module_with_name_clash_mod", f1_data_1)' in code)
assert ('CALL extract_psy_data % PreDeclareVariable("f2_data@'
'module_with_name_clash_mod", f2_data_1)' in code)
- assert ('CALL extract_psy_data % PreDeclareVariable("f2_data@'
- 'module_with_name_clash_mod_post", f2_data_1)' in code)
+ assert ('CALL extract_psy_data % PreDeclareVariable("f2_data_post@'
+ 'module_with_name_clash_mod", f2_data_1)' in code)
assert ('CALL extract_psy_data % ProvideVariable("f1_data@'
'module_with_name_clash_mod", f1_data_1)' in code)
assert ('CALL extract_psy_data % ProvideVariable("f2_data@'
'module_with_name_clash_mod", f2_data_1)' in code)
- assert ('CALL extract_psy_data % ProvideVariable("f2_data@'
- 'module_with_name_clash_mod_post", f2_data_1)' in code)
+ assert ('CALL extract_psy_data % ProvideVariable("f2_data_post@'
+ 'module_with_name_clash_mod", f2_data_1)' in code)
# ----------------------------------------------------------------------------
def test_psy_data_node_lfric_inside_of_loop():
'''Test that if a PSyData node is inside a loop (which means the code will
already be generated by PSyIR), the required psydata variable is declared.
- ATM (TODO #1010) LFRicLoop.gen_code calls a fix_gen_code function in the
- PSyData node to add the declaration (which the PSyData node added to the
- symbol table, but since the symbol table is not used by gen_code, it
- would otherwise be missing).
'''
psy, invoke = get_invoke("1.0.1_single_named_invoke.f90",
@@ -634,13 +543,13 @@ def test_psy_data_node_lfric_inside_of_loop():
# This regex checks that the type is imported, the variable is declared,
# and that the psydata area is indeed inside of the loop
- correct_re = (r"USE psy_data_mod, ONLY: PSyDataType.*"
- r"TYPE\(PSyDataType\), target, save :: psy_data.*"
- r"DO cell = .*"
- r"CALL psy_data % PreStart.*"
- r"CALL testkern_code.*"
- r"CALL psy_data % PostEnd.*"
- r"END DO")
+ correct_re = (r"use psy_data_mod, only : PSyDataType.*"
+ r"type\(PSyDataType\), save, target :: psy_data.*"
+ r"do cell = .*"
+ r"call psy_data % PreStart.*"
+ r"call testkern_code.*"
+ r"call psy_data % PostEnd.*"
+ r"enddo")
assert re.search(correct_re, code, re.I) is not None
diff --git a/src/psyclone/tests/psyir/nodes/range_test.py b/src/psyclone/tests/psyir/nodes/range_test.py
index 45a420b4fc..4b81aedf28 100644
--- a/src/psyclone/tests/psyir/nodes/range_test.py
+++ b/src/psyclone/tests/psyir/nodes/range_test.py
@@ -36,7 +36,6 @@
''' pytest tests for the Range class. '''
-from __future__ import absolute_import
import pytest
from psyclone.psyir.symbols import ScalarType, DataSymbol, \
INTEGER_SINGLE_TYPE, REAL_SINGLE_TYPE
diff --git a/src/psyclone/tests/psyir/nodes/read_only_verify_test.py b/src/psyclone/tests/psyir/nodes/read_only_verify_test.py
index 8041190d20..0cee83cbe5 100644
--- a/src/psyclone/tests/psyir/nodes/read_only_verify_test.py
+++ b/src/psyclone/tests/psyir/nodes/read_only_verify_test.py
@@ -36,7 +36,6 @@
''' Module containing pytest tests for the ReadOnlyVerifyNode. '''
-from __future__ import absolute_import
from psyclone.psyir.nodes import ReadOnlyVerifyNode, CodeBlock, Routine, \
Reference, Return, IfBlock, Schedule
from psyclone.psyir.symbols import DataSymbol, INTEGER_TYPE
diff --git a/src/psyclone/tests/psyir/nodes/return_stmt_test.py b/src/psyclone/tests/psyir/nodes/return_stmt_test.py
index 24dceb8201..7914fd2367 100644
--- a/src/psyclone/tests/psyir/nodes/return_stmt_test.py
+++ b/src/psyclone/tests/psyir/nodes/return_stmt_test.py
@@ -38,7 +38,6 @@
''' Performs py.test tests on the Return PSyIR node. '''
-from __future__ import absolute_import
import pytest
from psyclone.psyir.nodes import Return
from psyclone.errors import GenerationError
diff --git a/src/psyclone/tests/psyir/nodes/schedule_test.py b/src/psyclone/tests/psyir/nodes/schedule_test.py
index 7e7ca9e865..cd54621bb7 100644
--- a/src/psyclone/tests/psyir/nodes/schedule_test.py
+++ b/src/psyclone/tests/psyir/nodes/schedule_test.py
@@ -38,7 +38,6 @@
''' Performs py.test tests on the Schedule PSyIR node. '''
-from __future__ import absolute_import
import os
import pytest
from psyclone.psyir.nodes import Schedule, Assignment, Range, Statement
diff --git a/src/psyclone/tests/psyir/nodes/structure_member_test.py b/src/psyclone/tests/psyir/nodes/structure_member_test.py
index adbcfce440..916727e5f2 100644
--- a/src/psyclone/tests/psyir/nodes/structure_member_test.py
+++ b/src/psyclone/tests/psyir/nodes/structure_member_test.py
@@ -38,7 +38,6 @@
''' Performs py.test tests on the StructureMember PSyIR node. '''
-from __future__ import absolute_import
import pytest
from psyclone.psyir import nodes
from psyclone.psyir import symbols
diff --git a/src/psyclone/tests/psyir/nodes/structure_reference_test.py b/src/psyclone/tests/psyir/nodes/structure_reference_test.py
index c40a78b6b4..fb4ab403e2 100644
--- a/src/psyclone/tests/psyir/nodes/structure_reference_test.py
+++ b/src/psyclone/tests/psyir/nodes/structure_reference_test.py
@@ -118,8 +118,8 @@ def test_struc_ref_create_errors():
''' Tests for the validation checks in the create method. '''
with pytest.raises(TypeError) as err:
_ = nodes.StructureReference.create(None, [])
- assert ("'symbol' argument to StructureReference.create() should be a "
- "DataSymbol but found 'NoneType'" in str(err.value))
+ assert ("The 'symbol' argument to StructureReference.create() should be a"
+ " DataSymbol but found 'NoneType'" in str(err.value))
with pytest.raises(TypeError) as err:
_ = nodes.StructureReference.create(
symbols.DataSymbol("grid", symbols.UnresolvedType()), [],
@@ -129,8 +129,9 @@ def test_struc_ref_create_errors():
with pytest.raises(TypeError) as err:
_ = nodes.StructureReference.create(
symbols.DataSymbol("fake", symbols.INTEGER_TYPE), [])
- assert ("symbol that is (or could be) a structure, however symbol "
- "'fake' has type 'Scalar" in str(err.value))
+ assert ("A StructureReference must refer to a symbol that is (or could be)"
+ " a structure, however symbol 'fake' has type 'Scalar"
+ in str(err.value))
with pytest.raises(TypeError) as err:
_ = nodes.StructureReference.create(
symbols.DataSymbol("grid", symbols.UnresolvedType()), 1)
diff --git a/src/psyclone/tests/psyir/symbols/data_type_symbol_test.py b/src/psyclone/tests/psyir/symbols/data_type_symbol_test.py
index dcce2aa35c..67495f686b 100644
--- a/src/psyclone/tests/psyir/symbols/data_type_symbol_test.py
+++ b/src/psyclone/tests/psyir/symbols/data_type_symbol_test.py
@@ -37,7 +37,6 @@
''' This module contains pytest tests for the DataTypeSymbol class. '''
-from __future__ import absolute_import
import pytest
from psyclone.psyir.symbols import DataTypeSymbol, UnresolvedType, Symbol, \
UnresolvedInterface, ArrayType, REAL_SINGLE_TYPE
diff --git a/src/psyclone/tests/psyir/transformations/acc_kernels_trans_test.py b/src/psyclone/tests/psyir/transformations/acc_kernels_trans_test.py
index e569d01b39..7e732966a0 100644
--- a/src/psyclone/tests/psyir/transformations/acc_kernels_trans_test.py
+++ b/src/psyclone/tests/psyir/transformations/acc_kernels_trans_test.py
@@ -134,10 +134,10 @@ def test_implicit_loop(fortran_reader, fortran_writer):
schedule = psyir.walk(Routine)[0]
acc_trans = ACCKernelsTrans()
acc_trans.apply(schedule.children[0:1], {"default_present": True})
- gen_code = fortran_writer(psyir)
+ code = fortran_writer(psyir)
assert (" !$acc kernels default(present)\n"
" sto_tmp(:,:) = 0.0_wp\n"
- " !$acc end kernels\n" in gen_code)
+ " !$acc end kernels\n" in code)
def test_multikern_if(fortran_reader, fortran_writer):
@@ -162,15 +162,15 @@ def test_multikern_if(fortran_reader, fortran_writer):
schedule = psyir.walk(Routine)[0]
acc_trans = ACCKernelsTrans()
acc_trans.apply(schedule.children[0:1], {"default_present": True})
- gen_code = fortran_writer(psyir)
+ code = fortran_writer(psyir)
assert (" !$acc kernels default(present)\n"
" if (do_this) then\n"
- " do jk = 1, 3, 1\n" in gen_code)
+ " do jk = 1, 3, 1\n" in code)
assert (" enddo\n"
" end if\n"
" !$acc end kernels\n"
"\n"
- "end program implicit_loop" in gen_code)
+ "end program implicit_loop" in code)
def test_kernels_within_if(fortran_reader, fortran_writer):
diff --git a/src/psyclone/tests/psyir/transformations/arrayassignment2loops_trans_test.py b/src/psyclone/tests/psyir/transformations/arrayassignment2loops_trans_test.py
index 1ca9e8859b..8aca66a6d4 100644
--- a/src/psyclone/tests/psyir/transformations/arrayassignment2loops_trans_test.py
+++ b/src/psyclone/tests/psyir/transformations/arrayassignment2loops_trans_test.py
@@ -679,17 +679,17 @@ def test_validate_rhs_plain_references(fortran_reader, fortran_writer):
" enddo\n"
" do idx_1 = LBOUND(x, dim=1), UBOUND(x, dim=1), 1\n"
" x(idx_1) = array(idx_1)\n"
- " enddo\n"
+ " enddo\n\n"
" ! ArrayAssignment2LoopsTrans cannot expand expression because it "
"contains the access 'unresolved' which is not a DataSymbol and "
"therefore cannot be guaranteed to be ScalarType. Resolving the import"
" that brings this variable into scope may help.\n"
- " x(:) = unresolved\n"
+ " x(:) = unresolved\n\n"
" ! ArrayAssignment2LoopsTrans cannot expand expression because it "
"contains the access 'unsupported' which is an UnsupportedFortran"
"Type('INTEGER, DIMENSION(:), OPTIONAL :: unsupported') and therefore "
"cannot be guaranteed to be ScalarType.\n"
- " x(:) = unsupported\n"
+ " x(:) = unsupported\n\n"
" ! ArrayAssignment2LoopsTrans cannot expand expression because it "
"contains the access 'ishtsi(map,scalar)' which is an UnresolvedType "
"and therefore cannot be guaranteed to be ScalarType.\n"
diff --git a/src/psyclone/tests/psyir/transformations/chunk_loop_trans_test.py b/src/psyclone/tests/psyir/transformations/chunk_loop_trans_test.py
index becbea6ad9..3ed6a67aa6 100644
--- a/src/psyclone/tests/psyir/transformations/chunk_loop_trans_test.py
+++ b/src/psyclone/tests/psyir/transformations/chunk_loop_trans_test.py
@@ -37,7 +37,6 @@
'''This module contains the unit tests for the ChunkLoopTrans module'''
-from __future__ import absolute_import, print_function
import os
import pytest
@@ -341,15 +340,15 @@ def test_chunkloop_trans_apply_pos():
chunktrans.apply(schedule.children[0])
code = str(psy.gen)
correct = \
- '''DO j_out_var = cu_fld%internal%ystart, cu_fld%internal%ystop, 32
- j_el_inner = MIN(j_out_var + (32 - 1), cu_fld%internal%ystop)
- DO j = j_out_var, j_el_inner, 1
- DO i = cu_fld%internal%xstart, cu_fld%internal%xstop, 1
+ '''do j_out_var = cu_fld%internal%ystart, cu_fld%internal%ystop, 32
+ j_el_inner = MIN(j_out_var + (32 - 1), cu_fld%internal%ystop)
+ do j = j_out_var, j_el_inner, 1
+ do i = cu_fld%internal%xstart, cu_fld%internal%xstop, 1
'''
assert correct in code
- correct = '''END DO
- END DO
- END DO'''
+ correct = '''enddo
+ enddo
+ enddo'''
assert correct in code
loop = schedule.walk(Loop)[0]
assert 'chunked' in loop.annotations
@@ -367,15 +366,15 @@ def test_chunkloop_trans_apply_neg():
chunktrans.apply(schedule.children[0])
code = str(psy.gen)
correct = \
- '''DO j_out_var = cu_fld%internal%ystart, cu_fld%internal%ystop, -32
- j_el_inner = MAX(j_out_var - (32 + 1), cu_fld%internal%ystop)
- DO j = j_out_var, j_el_inner, -1
- DO i = cu_fld%internal%xstart, cu_fld%internal%xstop, 1
+ '''do j_out_var = cu_fld%internal%ystart, cu_fld%internal%ystop, -32
+ j_el_inner = MAX(j_out_var - (32 + 1), cu_fld%internal%ystop)
+ do j = j_out_var, j_el_inner, -1
+ do i = cu_fld%internal%xstart, cu_fld%internal%xstop, 1
'''
assert correct in code
- correct = '''END DO
- END DO
- END DO'''
+ correct = '''enddo
+ enddo
+ enddo'''
assert correct in code
@@ -390,9 +389,9 @@ def test_chunkloop_trans_apply_with_options():
chunktrans.apply(schedule.children[0], {'chunksize': 4})
code = str(psy.gen)
correct = \
- '''DO j_out_var = cu_fld%internal%ystart, cu_fld%internal%ystop, 4
- j_el_inner = MIN(j_out_var + (4 - 1), cu_fld%internal%ystop)
- DO j = j_out_var, j_el_inner, 1
+ '''do j_out_var = cu_fld%internal%ystart, cu_fld%internal%ystop, 4
+ j_el_inner = MIN(j_out_var + (4 - 1), cu_fld%internal%ystop)
+ do j = j_out_var, j_el_inner, 1
'''
assert correct in code
diff --git a/src/psyclone/tests/psyir/transformations/fold_conditional_return_expressions_trans_test.py b/src/psyclone/tests/psyir/transformations/fold_conditional_return_expressions_trans_test.py
index b37b3cb148..4186514035 100644
--- a/src/psyclone/tests/psyir/transformations/fold_conditional_return_expressions_trans_test.py
+++ b/src/psyclone/tests/psyir/transformations/fold_conditional_return_expressions_trans_test.py
@@ -37,7 +37,6 @@
'''Module containing tests for the FoldConditionalReturnExpressionsTrans
transformation.'''
-from __future__ import absolute_import
import pytest
from psyclone.psyir.transformations import \
diff --git a/src/psyclone/tests/psyir/transformations/kernel_transformation_test.py b/src/psyclone/tests/psyir/transformations/kernel_transformation_test.py
index 43af4c0545..fba35714f4 100644
--- a/src/psyclone/tests/psyir/transformations/kernel_transformation_test.py
+++ b/src/psyclone/tests/psyir/transformations/kernel_transformation_test.py
@@ -88,7 +88,7 @@ def test_new_kernel_file(kernel_outputdir, monkeypatch, fortran_reader):
code = str(psy.gen).lower()
# Work out the value of the tag used to re-name the kernel
tag = re.search('use continuity(.+?)_mod', code).group(1)
- assert f"use continuity{tag}_mod, only: continuity{tag}_code" in code
+ assert f"use continuity{tag}_mod, only : continuity{tag}_code" in code
assert f"call continuity{tag}_code(" in code
# The kernel and module name should have gained the tag just identified
# and be written to the CWD
@@ -464,8 +464,8 @@ def test_1kern_trans(kernel_outputdir):
code = str(psy.gen).lower()
tag = re.search('use testkern(.+?)_mod', code).group(1)
# We should have a USE for the original kernel and a USE for the new one
- assert f"use testkern{tag}_mod, only: testkern{tag}_code" in code
- assert "use testkern_mod, only: testkern_code" in code
+ assert f"use testkern{tag}_mod, only : testkern{tag}_code" in code
+ assert "use testkern_mod, only : testkern_code" in code
# Similarly, we should have calls to both the original and new kernels
assert "call testkern_code(" in code
assert f"call testkern{tag}_code(" in code
@@ -491,7 +491,7 @@ def test_2kern_trans(kernel_outputdir):
# Find the tags added to the kernel/module names
for match in re.finditer('use testkern_any_space_2(.+?)_mod', code):
tag = match.group(1)
- assert (f"use testkern_any_space_2{tag}_mod, only: "
+ assert (f"use testkern_any_space_2{tag}_mod, only : "
f"testkern_any_space_2{tag}_code" in code)
assert f"call testkern_any_space_2{tag}_code(" in code
filepath = os.path.join(str(kernel_outputdir),
diff --git a/src/psyclone/tests/psyir/transformations/loop_fusion_test.py b/src/psyclone/tests/psyir/transformations/loop_fusion_test.py
index 07a3988722..cba929697c 100644
--- a/src/psyclone/tests/psyir/transformations/loop_fusion_test.py
+++ b/src/psyclone/tests/psyir/transformations/loop_fusion_test.py
@@ -40,7 +40,6 @@
'''
from unittest import mock
-
import pytest
from psyclone.psyir.nodes import Literal, Loop, Schedule, Return
diff --git a/src/psyclone/tests/psyir/transformations/loop_trans_test.py b/src/psyclone/tests/psyir/transformations/loop_trans_test.py
index c100767ef3..ac8374f0df 100644
--- a/src/psyclone/tests/psyir/transformations/loop_trans_test.py
+++ b/src/psyclone/tests/psyir/transformations/loop_trans_test.py
@@ -37,7 +37,6 @@
''' Module containing tests for the LoopTrans class. Since it is abstract we
have to test it using various sub-classes. '''
-from __future__ import absolute_import
import inspect
import pytest
from psyclone.psyir.transformations import LoopFuseTrans, LoopTrans, \
diff --git a/src/psyclone/tests/psyir/transformations/omp_task_transformations_test.py b/src/psyclone/tests/psyir/transformations/omp_task_transformations_test.py
index 47e2896edf..ed45188d34 100644
--- a/src/psyclone/tests/psyir/transformations/omp_task_transformations_test.py
+++ b/src/psyclone/tests/psyir/transformations/omp_task_transformations_test.py
@@ -35,7 +35,6 @@
'''
API-agnostic tests for OpenMP task transformation class.
'''
-from __future__ import absolute_import, print_function
import os
import pytest
diff --git a/src/psyclone/tests/psyir/transformations/omp_taskloop_transformations_test.py b/src/psyclone/tests/psyir/transformations/omp_taskloop_transformations_test.py
index 535a04df13..c2f3dd721d 100644
--- a/src/psyclone/tests/psyir/transformations/omp_taskloop_transformations_test.py
+++ b/src/psyclone/tests/psyir/transformations/omp_taskloop_transformations_test.py
@@ -35,7 +35,6 @@
'''
API-agnostic tests for OpenMP task transformation classes.
'''
-from __future__ import absolute_import, print_function
import os
import pytest
@@ -113,7 +112,7 @@ def test_omptaskloop_getters_and_setters():
def test_omptaskloop_apply(monkeypatch):
- '''Check that the gen_code method in the OMPTaskloopDirective
+ '''Check that the lowering method in the OMPTaskloopDirective
class generates the expected code when passing options to
the OMPTaskloopTrans's apply method and correctly overrides the
taskloop's inbuilt value. Use the gocean API.
@@ -139,15 +138,15 @@ class generates the expected code when passing options to
clauses = " nogroup"
assert (
- " !$omp parallel default(shared), private(i,j)\n" +
- " !$omp master\n" +
- f" !$omp taskloop{clauses}\n" +
- " DO" in code)
+ " !$omp parallel default(shared), private(i,j)\n" +
+ " !$omp master\n" +
+ f" !$omp taskloop{clauses}\n" +
+ " do" in code)
assert (
- " END DO\n" +
- " !$omp end taskloop\n" +
- " !$omp end master\n" +
- " !$omp end parallel" in code)
+ " enddo\n" +
+ " !$omp end taskloop\n" +
+ " !$omp end master\n" +
+ " !$omp end parallel" in code)
assert taskloop_node.begin_string() == "omp taskloop"
diff --git a/src/psyclone/tests/psyir/transformations/profile_test.py b/src/psyclone/tests/psyir/transformations/profile_test.py
index 67a2f76efe..4e6013a0ec 100644
--- a/src/psyclone/tests/psyir/transformations/profile_test.py
+++ b/src/psyclone/tests/psyir/transformations/profile_test.py
@@ -126,7 +126,7 @@ def test_profile_errors2():
# -----------------------------------------------------------------------------
-def test_profile_invokes_gocean1p0():
+def test_profile_invokes_gocean1p0(fortran_writer):
'''Check that an invoke is instrumented correctly
'''
Profiler.set_options([Profiler.INVOKES], "gocean")
@@ -136,16 +136,16 @@ def test_profile_invokes_gocean1p0():
# Convert the invoke to code, and remove all new lines, to make
# regex matching easier
- code = str(invoke.gen()).replace("\n", "")
+ code = fortran_writer(invoke.schedule).replace("\n", "")
# First a simple test that the nesting is correct - the
# profile regions include both loops. Note that indeed
# the function 'compute_cv_code' is in the module file
# kernel_ne_offset_mod.
correct_re = ("subroutine invoke.*"
- "use profile_psy_data_mod, ONLY: profile_PSyDataType.*"
- r"TYPE\(profile_PsyDataType\), target, save :: profile_"
- r"psy_data.*call profile_psy_data%PreStart\(\"psy_single_"
+ "use profile_psy_data_mod, only : profile_PSyDataType.*"
+ r"type\(profile_PsyDataType\), save, target :: profile_"
+ r"psy_data.*call profile_psy_data % PreStart\(\"psy_single_"
r"invoke_different_iterates_over\", \"invoke_0-r0\", 0, "
r"0\).*"
"do j.*"
@@ -153,12 +153,12 @@ def test_profile_invokes_gocean1p0():
"call.*"
"end.*"
"end.*"
- r"call profile_psy_data%PostEnd")
+ r"call profile_psy_data % PostEnd")
assert re.search(correct_re, code, re.I) is not None
# Check that if gen() is called more than once the same profile
# variables and region names are created:
- code_again = str(invoke.gen()).replace("\n", "")
+ code_again = fortran_writer(invoke.schedule).replace("\n", "")
assert code == code_again
# Test that two kernels in one invoke get instrumented correctly.
@@ -168,13 +168,13 @@ def test_profile_invokes_gocean1p0():
# Convert the invoke to code, and remove all new lines, to make
# regex matching easier
- code = str(invoke.gen()).replace("\n", "")
+ code = fortran_writer(invoke.schedule).replace("\n", "")
correct_re = ("subroutine invoke.*"
- "use profile_psy_data_mod, only: profile_PSyDataType.*"
- r"TYPE\(profile_PSyDataType\), target, save :: "
+ "use profile_psy_data_mod, only : profile_PSyDataType.*"
+ r"type\(profile_PSyDataType\), save, target :: "
"profile_psy_data.*"
- r"call profile_psy_data%PreStart\(\"psy_single_invoke_two"
+ r"call profile_psy_data % PreStart\(\"psy_single_invoke_two"
r"_kernels\", \"invoke_0-r0\", 0, 0\).*"
"do j.*"
"do i.*"
@@ -186,13 +186,13 @@ def test_profile_invokes_gocean1p0():
"call.*"
"end.*"
"end.*"
- r"call profile_psy_data%PostEnd")
+ r"call profile_psy_data % PostEnd")
assert re.search(correct_re, code, re.I) is not None
Profiler._options = []
# -----------------------------------------------------------------------------
-def test_unique_region_names():
+def test_unique_region_names(fortran_writer):
'''Test that unique region names are created even when the kernel
names are identical.'''
@@ -204,18 +204,18 @@ def test_unique_region_names():
# Convert the invoke to code, and remove all new lines, to make
# regex matching easier
- code = str(invoke.gen()).replace("\n", "")
+ code = fortran_writer(invoke.schedule).replace("\n", "")
# This regular expression puts the region names into groups.
# Make sure that the created regions have different names, even
# though the kernels have the same name.
correct_re = ("subroutine invoke.*"
- "use profile_psy_Data_mod, only: profile_PSyDataType.*"
- r"TYPE\(profile_PSyDataType\), target, save :: "
+ "use profile_psy_Data_mod, only : profile_PSyDataType.*"
+ r"type\(profile_PSyDataType\), save, target :: "
"profile_psy_data.*"
- r"TYPE\(profile_PSyDataType\), target, save :: "
+ r"type\(profile_PSyDataType\), save, target :: "
"profile_psy_data.*"
- r"call profile_psy_data.*%PreStart\(\"psy_single_invoke_two"
+ r"call profile_psy_data.*% PreStart\(\"psy_single_invoke_two"
r"_kernels\", "
r"\"invoke_0-compute_cu_code-r0\", 0, 0\).*"
"do j.*"
@@ -223,21 +223,20 @@ def test_unique_region_names():
"call compute_cu_code.*"
"end.*"
"end.*"
- r"call profile_psy_data.*%PostEnd.*"
- r"call profile_psy_data.*%PreStart\(\"psy_single_invoke_two_"
- r"kernels\", \"invoke_0-compute_cu_code-r1\", 0, 0\).*"
+ r"call profile_psy_data.*% PostEnd.*"
+ r"call profile_psy_data.*% PreStart\(\"psy_single_invoke_"
+ r"two_kernels\", \"invoke_0-compute_cu_code-r1\", 0, 0\).*"
"do j.*"
"do i.*"
"call compute_cu_code.*"
"end.*"
"end.*"
- r"call profile_psy_data.*%PostEnd")
-
+ r"call profile_psy_data.*% PostEnd")
assert re.search(correct_re, code, re.I) is not None
# -----------------------------------------------------------------------------
-def test_profile_kernels_gocean1p0():
+def test_profile_kernels_gocean1p0(fortran_writer):
'''Check that all kernels are instrumented correctly
'''
Profiler.set_options([Profiler.KERNELS], "gocean")
@@ -247,7 +246,7 @@ def test_profile_kernels_gocean1p0():
# Convert the invoke to code, and remove all new lines, to make
# regex matching easier
- code = str(invoke.gen()).replace("\n", "")
+ code = fortran_writer(invoke.schedule).replace("\n", "")
# Test that kernel profiling works in case of two kernel calls
# in a single invoke subroutine - i.e. we need to have one profile
@@ -257,27 +256,27 @@ def test_profile_kernels_gocean1p0():
# the name could be changed to avoid duplicates (depending on order
# in which the tests are executed).
correct_re = ("subroutine invoke.*"
- "use profile_psy_data_mod, only: profile_PSyDataType.*"
- r"TYPE\(profile_PSyDataType\), target, save :: "
+ "use profile_psy_data_mod, only : profile_PSyDataType.*"
+ r"type\(profile_PSyDataType\), save, target :: "
"profile_psy_data.*"
- r"TYPE\(profile_PSyDataType\), target, save :: "
+ r"type\(profile_PSyDataType\), save, target :: "
"profile_psy_data.*"
-                  r"call (?P<profile1>\w*)%PreStart\(\"psy_single_invoke_two"
+                  r"call (?P<profile1>\w*) % PreStart\(\"psy_single_invoke_two"
r"_kernels\", \"invoke_0-compute_cu_code-r0\", 0, 0\).*"
"do j.*"
"do i.*"
"call.*"
"end.*"
"end.*"
- r"call (?P=profile1)%PostEnd.*"
-                  r"call (?P<profile2>\w*)%PreStart\(\"psy_single_invoke_two"
+ r"call (?P=profile1) % PostEnd.*"
+                  r"call (?P<profile2>\w*) % PreStart\(\"psy_single_invoke_two"
r"_kernels\", \"invoke_0-time_smooth_code-r1\", 0, 0\).*"
"do j.*"
"do i.*"
"call.*"
"end.*"
"end.*"
- r"call (?P=profile2)%PostEnd")
+ r"call (?P=profile2) % PostEnd")
groups = re.search(correct_re, code, re.I)
assert groups is not None
assert groups.group(1) != groups.group(2)
@@ -286,7 +285,7 @@ def test_profile_kernels_gocean1p0():
# -----------------------------------------------------------------------------
-def test_profile_named_gocean1p0():
+def test_profile_named_gocean1p0(fortran_writer):
'''Check that the gocean 1.0 API is instrumented correctly when the
profile name is supplied by the user.
@@ -297,14 +296,14 @@ def test_profile_named_gocean1p0():
profile_trans = ProfileTrans()
options = {"region_name": (psy.name, invoke.name)}
profile_trans.apply(schedule.children, options=options)
- result = str(invoke.gen())
- assert ("CALL profile_psy_data%PreStart("
+ result = fortran_writer(invoke.schedule)
+ assert ("CALL profile_psy_data % PreStart("
"\"psy_single_invoke_different_iterates_over\", "
"\"invoke_0\", 0, 0)") in result
# -----------------------------------------------------------------------------
-def test_profile_invokes_dynamo0p3():
+def test_profile_invokes_dynamo0p3(fortran_writer):
'''Check that a Dynamo 0.3 invoke is instrumented correctly
'''
Profiler.set_options([Profiler.INVOKES], "lfric")
@@ -315,18 +314,18 @@ def test_profile_invokes_dynamo0p3():
# Convert the invoke to code, and remove all new lines, to make
# regex matching easier
- code = str(invoke.gen()).replace("\n", "")
+ code = fortran_writer(invoke.schedule).replace("\n", "")
correct_re = ("subroutine invoke.*"
- "use profile_psy_data_mod, only: profile_PSyDataType.*"
- r"TYPE\(profile_PSyDataType\), target, save :: "
+ "use profile_psy_data_mod, only : profile_PSyDataType.*"
+ r"type\(profile_PSyDataType\), save, target :: "
"profile_psy_data.*"
- r"call profile_psy_data%PreStart\(\"single_invoke_psy\", "
+ r"call profile_psy_data % PreStart\(\"single_invoke_psy\", "
r"\"invoke_0_testkern_type-testkern_code-r0\", 0, 0\).*"
"do cell.*"
"call.*"
"end.*"
- r"call profile_psy_data%PostEnd")
+ r"call profile_psy_data % PostEnd")
assert re.search(correct_re, code, re.I) is not None
# Next test two kernels in one invoke:
@@ -334,15 +333,15 @@ def test_profile_invokes_dynamo0p3():
Profiler.add_profile_nodes(invoke.schedule, Loop)
# Convert the invoke to code, and remove all new lines, to make
# regex matching easier
- code = str(invoke.gen()).replace("\n", "")
+ code = fortran_writer(invoke.schedule).replace("\n", "")
# The .* after testkern_code is necessary since the name can be changed
# by PSyclone to avoid name duplications.
correct_re = ("subroutine invoke.*"
- "use profile_psy_data_mod, only: profile_PSyDataType.*"
- r"TYPE\(profile_PSyDataType\), target, save :: "
+ "use profile_psy_data_mod, only : profile_PSyDataType.*"
+ r"type\(profile_PSyDataType\), save, target :: "
"profile_psy_data.*"
- r"call profile_psy_data%PreStart\(\"multi_invoke_psy\", "
+ r"call profile_psy_data % PreStart\(\"multi_invoke_psy\", "
r"\"invoke_0-r0.*\", 0, 0\).*"
"do cell.*"
"call.*"
@@ -350,25 +349,25 @@ def test_profile_invokes_dynamo0p3():
"do cell.*"
"call.*"
"end.*"
- r"call profile_psy_data%PostEnd")
+ r"call profile_psy_data % PostEnd")
assert re.search(correct_re, code, re.I) is not None
# Lastly, test an invoke whose first kernel is a builtin
_, invoke = get_invoke("15.1.1_X_plus_Y_builtin.f90", "lfric", idx=0)
Profiler.add_profile_nodes(invoke.schedule, Loop)
- code = str(invoke.gen())
- assert "USE profile_psy_data_mod, ONLY: profile_PSyDataType" in code
- assert "TYPE(profile_PSyDataType), target, save :: profile_psy_data" \
+ code = fortran_writer(invoke.schedule)
+ assert "use profile_psy_data_mod, only : profile_PSyDataType" in code
+ assert "type(profile_PSyDataType), save, target :: profile_psy_data" \
in code
- assert "CALL profile_psy_data%PreStart(\"single_invoke_psy\", "\
+ assert "CALL profile_psy_data % PreStart(\"single_invoke_psy\", "\
"\"invoke_0-x_plus_y-r0\", 0, 0)" in code
- assert "CALL profile_psy_data%PostEnd" in code
+ assert "CALL profile_psy_data % PostEnd" in code
Profiler._options = []
# -----------------------------------------------------------------------------
-def test_profile_kernels_dynamo0p3():
+def test_profile_kernels_dynamo0p3(fortran_writer):
'''Check that all kernels are instrumented correctly in a
Dynamo 0.3 invoke.
'''
@@ -378,50 +377,35 @@ def test_profile_kernels_dynamo0p3():
# Convert the invoke to code, and remove all new lines, to make
# regex matching easier
- code = str(invoke.gen()).replace("\n", "")
+ code = fortran_writer(invoke.schedule).replace("\n", "")
correct_re = ("subroutine invoke.*"
- "use profile_psy_data_mod, only: profile_PSyDataType.*"
- r"TYPE\(profile_PSyDataType\), target, save :: "
+ "use profile_psy_data_mod, only : profile_PSyDataType.*"
+ r"type\(profile_PSyDataType\), save, target :: "
"profile_psy_data.*"
- r"call profile_psy_data%PreStart\(\"single_invoke_psy\", "
+ r"CALL profile_psy_data % PreStart\(\"single_invoke_psy\", "
r"\"invoke_0_testkern_type-testkern_code-r0.*\", 0, 0\).*"
"do cell.*"
"call.*"
"end.*"
- r"call profile_psy_data%PostEnd")
+ r"CALL profile_psy_data % PostEnd")
assert re.search(correct_re, code, re.I) is not None
_, invoke = get_invoke("1.2_multi_invoke.f90", "lfric", idx=0)
Profiler.add_profile_nodes(invoke.schedule, Loop)
- # Convert the invoke to code, and remove all new lines, to make
- # regex matching easier
- code = str(invoke.gen()).replace("\n", "")
+ # Convert the invoke to code
+ code = fortran_writer(invoke.schedule)
- correct_re = ("subroutine invoke.*"
- "use profile_psy_data_mod, only: profile_PSyDataType.*"
- r"TYPE\(profile_PSyDataType\), target, save :: "
-                  r"(?P<profile1>\w*) .*"
- r"TYPE\(profile_PSyDataType\), target, save :: "
-                  r"(?P<profile2>\w*) .*"
- r"call (?P=profile1)%PreStart\(\"multi_invoke_psy\", "
- r"\"invoke_0-testkern_code-r0\", 0, 0\).*"
- "do cell.*"
- "call.*"
- "end.*"
- r"call (?P=profile1)%PostEnd.*"
- r"call (?P=profile2)%PreStart\(\"multi_invoke_psy\", "
- r"\"invoke_0-testkern_code-r1\", 0, 0\).*"
- "do cell.*"
- "call.*"
- "end.*"
- r"call (?P=profile2)%PostEnd")
-
- groups = re.search(correct_re, code, re.I)
- assert groups is not None
# Check that the variables are different
- assert groups.group(1) != groups.group(2)
+ assert ("type(profile_PSyDataType), save, target :: profile_psy_data\n"
+ in code)
+ assert ("type(profile_PSyDataType), save, target :: profile_psy_data_1\n"
+ in code)
+ assert ("CALL profile_psy_data % PreStart(\"multi_invoke_psy\", "
+ "\"invoke_0-testkern_code-r0\", 0, 0)" in code)
+ assert ("CALL profile_psy_data_1 % PreStart(\"multi_invoke_psy\", "
+ "\"invoke_0-testkern_code-r1\", 0, 0)" in code)
Profiler._options = []
@@ -433,25 +417,25 @@ def test_profile_fused_kernels_dynamo0p3():
one Kernel inside a loop).
'''
Profiler.set_options([Profiler.KERNELS], "lfric")
- _, invoke = get_invoke("1.2_multi_invoke.f90", "lfric", idx=0,
- dist_mem=False)
+ psy, invoke = get_invoke("1.2_multi_invoke.f90", "lfric", idx=0,
+ dist_mem=False)
fuse_trans = LFRicLoopFuseTrans()
loops = invoke.schedule.walk(Loop)
fuse_trans.apply(loops[0], loops[1])
Profiler.add_profile_nodes(invoke.schedule, Loop)
- code = str(invoke.gen())
+ code = psy.gen
expected = '''\
- CALL profile_psy_data%PreStart("multi_invoke_psy", "invoke_0-r0", 0, 0)
- DO cell = loop0_start, loop0_stop, 1
- CALL testkern_code(nlayers_f1, a, f1_data, f2_data, m1_data, m2_data, \
+ CALL profile_psy_data % PreStart("multi_invoke_psy", "invoke_0-r0", 0, 0)
+ do cell = loop0_start, loop0_stop, 1
+ call testkern_code(nlayers_f1, a, f1_data, f2_data, m1_data, m2_data, \
ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, \
undf_w3, map_w3(:,cell))
- CALL testkern_code(nlayers_f1, a, f1_data, f3_data, m2_data, m1_data, \
+ call testkern_code(nlayers_f1, a, f1_data, f3_data, m2_data, m1_data, \
ndf_w1, undf_w1, map_w1(:,cell), ndf_w2, undf_w2, map_w2(:,cell), ndf_w3, \
undf_w3, map_w3(:,cell))
- END DO
- CALL profile_psy_data%PostEnd
+ enddo
+ CALL profile_psy_data % PostEnd
'''
assert expected in code
@@ -484,24 +468,24 @@ def test_profile_kernels_in_directive_dynamo0p3():
Check that a kernel is instrumented correctly if it is within a directive.
'''
Profiler.set_options([Profiler.KERNELS], "lfric")
- _, invoke = get_invoke("1_single_invoke_w3.f90", "lfric", idx=0,
- dist_mem=False)
+ psy, invoke = get_invoke("1_single_invoke_w3.f90", "lfric", idx=0,
+ dist_mem=False)
ktrans = ACCKernelsTrans()
loop = invoke.schedule.walk(Loop)[0]
ktrans.apply(loop)
Profiler.add_profile_nodes(invoke.schedule, Loop)
- code = str(invoke.gen())
+ code = psy.gen
expected = '''\
- CALL profile_psy_data%PreStart("single_invoke_w3_psy", \
+ CALL profile_psy_data % PreStart("single_invoke_w3_psy", \
"invoke_0_testkern_w3_type-testkern_w3_code-r0", 0, 0)
- !$acc kernels
- DO cell = loop0_start, loop0_stop, 1
+ !$acc kernels
+ do cell = loop0_start, loop0_stop, 1
'''
assert expected in code
# -----------------------------------------------------------------------------
-def test_profile_named_dynamo0p3():
+def test_profile_named_dynamo0p3(fortran_writer):
'''Check that the Dynamo 0.3 API is instrumented correctly when the
profile name is supplied by the user.
@@ -511,8 +495,8 @@ def test_profile_named_dynamo0p3():
profile_trans = ProfileTrans()
options = {"region_name": (psy.name, invoke.name)}
profile_trans.apply(schedule.children, options=options)
- result = str(invoke.gen())
- assert ("CALL profile_psy_data%PreStart(\"single_invoke_psy\", "
+ result = fortran_writer(invoke.schedule)
+ assert ("CALL profile_psy_data % PreStart(\"single_invoke_psy\", "
"\"invoke_0_testkern_type\", 0, 0)") in result
@@ -616,7 +600,7 @@ def test_transform_errors():
# -----------------------------------------------------------------------------
-def test_region():
+def test_region(fortran_writer):
''' Tests that the profiling transform works correctly when a region of
code is specified that does not cover the full invoke and also
contains multiple kernels.
@@ -630,22 +614,22 @@ def test_region():
prt.apply(schedule[0:4])
# Two loops.
prt.apply(schedule[1:3])
- result = str(invoke.gen())
- assert ("CALL profile_psy_data%PreStart(\"multi_functions_multi_invokes_"
+ result = fortran_writer(invoke.schedule)
+ assert ("CALL profile_psy_data % PreStart(\"multi_functions_multi_invokes_"
"psy\", \"invoke_0-r0\", 0, 0)" in result)
- assert ("CALL profile_psy_data_1%PreStart(\"multi_functions_multi_"
+ assert ("CALL profile_psy_data_1 % PreStart(\"multi_functions_multi_"
"invokes_psy\", \"invoke_0-r1\", 0, 0)" in result)
# Make nested profiles.
prt.apply(schedule[1].psy_data_body[1])
prt.apply(schedule)
- result = str(invoke.gen())
- assert ("CALL profile_psy_data_3%PreStart(\"multi_functions_multi_"
+ result = fortran_writer(invoke.schedule)
+ assert ("CALL profile_psy_data_3 % PreStart(\"multi_functions_multi_"
"invokes_psy\", \"invoke_0-r0\", 0, 0)" in result)
- assert ("CALL profile_psy_data%PreStart(\"multi_functions_multi_"
+ assert ("CALL profile_psy_data % PreStart(\"multi_functions_multi_"
"invokes_psy\", \"invoke_0-r1\", 0, 0)" in result)
- assert ("CALL profile_psy_data_1%PreStart(\"multi_functions_multi_"
+ assert ("CALL profile_psy_data_1 % PreStart(\"multi_functions_multi_"
"invokes_psy\", \"invoke_0-r2\", 0, 0)" in result)
- assert ("CALL profile_psy_data_2%PreStart(\"multi_functions_multi_"
+ assert ("CALL profile_psy_data_2 % PreStart(\"multi_functions_multi_"
"invokes_psy\", \"invoke_0-testkern_code-r3\", 0, 0)" in result)
@@ -655,8 +639,8 @@ def test_multi_prefix_profile(monkeypatch):
different profiling tools in the same invoke.
'''
- _, invoke = get_invoke("3.1_multi_functions_multi_invokes.f90",
- "lfric", name="invoke_0", dist_mem=True)
+ psy, invoke = get_invoke("3.1_multi_functions_multi_invokes.f90",
+ "lfric", name="invoke_0", dist_mem=True)
schedule = invoke.schedule
prt = ProfileTrans()
config = Config.get()
@@ -667,34 +651,32 @@ def test_multi_prefix_profile(monkeypatch):
prt.apply(schedule[0:4], options={"prefix": "tool1"})
# Use the default prefix for the two loops.
prt.apply(schedule[1:3])
- result = str(invoke.gen())
+ result = psy.gen
- assert (" USE profile_psy_data_mod, ONLY: profile_PSyDataType\n" in
+ assert (" use profile_psy_data_mod, only : profile_PSyDataType\n" in
result)
- assert " USE tool1_psy_data_mod, ONLY: tool1_PSyDataType" in result
- assert (" TYPE(profile_PSyDataType), target, save :: "
- "profile_psy_data\n"
- " TYPE(tool1_PSyDataType), target, save :: tool1_psy_data"
+ assert " use tool1_psy_data_mod, only : tool1_PSyDataType" in result
+ assert (" type(profile_PSyDataType), save, target :: "
+ "profile_psy_data\n" in result)
+ assert (" type(tool1_PSyDataType), save, target :: tool1_psy_data"
in result)
- assert (" ! Call kernels and communication routines\n"
- " !\n"
- " CALL tool1_psy_data%PreStart(\"multi_functions_multi_"
+ assert (
+ " CALL tool1_psy_data % PreStart(\"multi_functions_multi_"
"invokes_psy\", \"invoke_0-r0\", 0, 0)\n"
- " IF (f1_proxy%is_dirty(depth=1)) THEN\n" in result)
+ " if (f1_proxy%is_dirty(depth=1)) then\n" in result)
assert "loop0_stop = mesh%get_last_halo_cell(1)\n" in result
assert "loop2_stop = mesh%get_last_halo_cell(1)\n" in result
- assert (" CALL tool1_psy_data%PostEnd\n"
- " CALL profile_psy_data%PreStart(\"multi_functions_multi_"
+ assert (" CALL tool1_psy_data % PostEnd\n"
+ " CALL profile_psy_data % PreStart(\"multi_functions_multi_"
"invokes_psy\", \"invoke_0-r1\", 0, 0)\n"
- " DO cell = loop0_start, loop0_stop, 1\n" in result)
- assert (" CALL f1_proxy%set_dirty()\n"
- " !\n"
- " CALL profile_psy_data%PostEnd\n"
- " DO cell = loop2_start, loop2_stop, 1\n" in result)
+ " do cell = loop0_start, loop0_stop, 1\n" in result)
+ assert (" call f1_proxy%set_dirty()\n"
+ " CALL profile_psy_data % PostEnd\n"
+ " do cell = loop2_start, loop2_stop, 1\n" in result)
# -----------------------------------------------------------------------------
-def test_omp_transform():
+def test_omp_transform(fortran_writer):
'''Tests that the profiling transform works correctly with OMP
parallelisation.'''
@@ -712,43 +694,43 @@ def test_omp_transform():
prt.apply(schedule[0])
correct = (
- " CALL profile_psy_data%PreStart(\"psy_test27_loop_swap\", "
+ " CALL profile_psy_data % PreStart(\"psy_test27_loop_swap\", "
"\"invoke_loop1-bc_ssh_code-r0\", 0, 0)\n"
- " !$omp parallel default(shared), private(i,j)\n"
- " !$omp do schedule(static)\n"
- " DO j = t%internal%ystart, t%internal%ystop, 1\n"
- " DO i = t%internal%xstart, t%internal%xstop, 1\n"
- " CALL bc_ssh_code(i, j, 1, t%data, t%grid%tmask)\n"
- " END DO\n"
- " END DO\n"
- " !$omp end do\n"
- " !$omp end parallel\n"
- " CALL profile_psy_data%PostEnd")
- code = str(invoke.gen())
+ " !$omp parallel default(shared), private(i,j)\n"
+ " !$omp do schedule(static)\n"
+ " do j = t%internal%ystart, t%internal%ystop, 1\n"
+ " do i = t%internal%xstart, t%internal%xstop, 1\n"
+ " call bc_ssh_code(i, j, 1, t%data, t%grid%tmask)\n"
+ " enddo\n"
+ " enddo\n"
+ " !$omp end do\n"
+ " !$omp end parallel\n"
+ " CALL profile_psy_data % PostEnd")
+ code = fortran_writer(invoke.schedule)
assert correct in code
# Now add another profile node between the omp parallel and omp do
# directives:
prt.apply(schedule[0].psy_data_body[0].dir_body[0])
- code = str(invoke.gen())
-
- correct = \
- "CALL profile_psy_data%PreStart(\"psy_test27_loop_swap\", " + \
- '''"invoke_loop1-bc_ssh_code-r0", 0, 0)
- !$omp parallel default(shared), private(i,j)
- CALL profile_psy_data_1%PreStart("psy_test27_loop_swap", ''' + \
- '''"invoke_loop1-bc_ssh_code-r1", 0, 0)
- !$omp do schedule(static)
- DO j = t%internal%ystart, t%internal%ystop, 1
- DO i = t%internal%xstart, t%internal%xstop, 1
- CALL bc_ssh_code(i, j, 1, t%data, t%grid%tmask)
- END DO
- END DO
- !$omp end do
- CALL profile_psy_data_1%PostEnd
- !$omp end parallel
- CALL profile_psy_data%PostEnd'''
+ code = fortran_writer(invoke.schedule)
+
+ correct = '''
+ CALL profile_psy_data % PreStart(\"psy_test27_loop_swap\", \
+"invoke_loop1-bc_ssh_code-r0", 0, 0)
+ !$omp parallel default(shared), private(i,j)
+ CALL profile_psy_data_1 % PreStart("psy_test27_loop_swap", \
+"invoke_loop1-bc_ssh_code-r1", 0, 0)
+ !$omp do schedule(static)
+ do j = t%internal%ystart, t%internal%ystop, 1
+ do i = t%internal%xstart, t%internal%xstop, 1
+ call bc_ssh_code(i, j, 1, t%data, t%grid%tmask)
+ enddo
+ enddo
+ !$omp end do
+ CALL profile_psy_data_1 % PostEnd
+ !$omp end parallel
+ CALL profile_psy_data % PostEnd'''
assert correct in code
diff --git a/src/psyclone/tests/psyir/transformations/read_only_verify_trans_test.py b/src/psyclone/tests/psyir/transformations/read_only_verify_trans_test.py
index 3ea4c2ee57..036268fb26 100644
--- a/src/psyclone/tests/psyir/transformations/read_only_verify_trans_test.py
+++ b/src/psyclone/tests/psyir/transformations/read_only_verify_trans_test.py
@@ -38,8 +38,6 @@
''' Module containing tests for ReadOnlyVerifyTrans and ReadOnlyVerifyNode
'''
-from __future__ import absolute_import
-
import pytest
from psyclone.errors import InternalError
@@ -100,12 +98,12 @@ def test_read_only_options():
'''Check that options are passed to the ReadOnly Node and trigger
the use of the newly defined names.
'''
- _, invoke = get_invoke("test11_different_iterates_over_one_invoke.f90",
- "gocean", idx=0, dist_mem=False)
+ psy, invoke = get_invoke("test11_different_iterates_over_one_invoke.f90",
+ "gocean", idx=0, dist_mem=False)
read_only = ReadOnlyVerifyTrans()
read_only.apply(invoke.schedule[0].loop_body[0],
options={"region_name": ("a", "b")})
- code = str(invoke.gen())
+ code = str(psy.gen)
assert 'CALL read_only_verify_psy_data % PreStart("a", "b", 4, 4)' in code
diff --git a/src/psyclone/tests/psyir/transformations/transformation_error_test.py b/src/psyclone/tests/psyir/transformations/transformation_error_test.py
index ac19b6e8f1..0fa78ce040 100644
--- a/src/psyclone/tests/psyir/transformations/transformation_error_test.py
+++ b/src/psyclone/tests/psyir/transformations/transformation_error_test.py
@@ -36,8 +36,6 @@
'''pytest tests for the transformation_errors module.'''
-from __future__ import absolute_import
-
from psyclone.errors import LazyString
from psyclone.psyir.transformations import TransformationError
diff --git a/src/psyclone/tests/psyir/transformations/transformations_test.py b/src/psyclone/tests/psyir/transformations/transformations_test.py
index ec4d66a42b..bd596570d0 100644
--- a/src/psyclone/tests/psyir/transformations/transformations_test.py
+++ b/src/psyclone/tests/psyir/transformations/transformations_test.py
@@ -219,7 +219,7 @@ def test_omptaskloop_getters_and_setters():
def test_omptaskloop_apply(monkeypatch):
- '''Check that the gen_code method in the OMPTaskloopDirective
+ '''Check that the lowering method in the OMPTaskloopDirective
class generates the expected code when passing options to
the OMPTaskloopTrans's apply method and correctly overrides the
taskloop's inbuilt value. Use the gocean API.
@@ -245,15 +245,15 @@ class generates the expected code when passing options to
clauses = " nogroup"
assert (
- f" !$omp parallel default(shared), private(i,j)\n"
- f" !$omp master\n"
- f" !$omp taskloop{clauses}\n"
- f" DO" in code)
+ f" !$omp parallel default(shared), private(i,j)\n"
+ f" !$omp master\n"
+ f" !$omp taskloop{clauses}\n"
+ f" do" in code)
assert (
- " END DO\n"
- " !$omp end taskloop\n"
- " !$omp end master\n"
- " !$omp end parallel" in code)
+ " enddo\n"
+ " !$omp end taskloop\n"
+ " !$omp end master\n"
+ " !$omp end parallel" in code)
assert taskloop_node.begin_string() == "omp taskloop"
diff --git a/src/psyclone/tests/psyir/transformations/value_range_check_trans_test.py b/src/psyclone/tests/psyir/transformations/value_range_check_trans_test.py
index 011ef8f4cf..d911a3befb 100644
--- a/src/psyclone/tests/psyir/transformations/value_range_check_trans_test.py
+++ b/src/psyclone/tests/psyir/transformations/value_range_check_trans_test.py
@@ -94,7 +94,7 @@ def test_value_range_check_basic():
# -----------------------------------------------------------------------------
-def test_value_range_check_options():
+def test_value_range_check_options(fortran_writer):
'''Check that options are passed to the ValueRangeCheckNode and trigger
the use of the newly defined names.
'''
@@ -103,7 +103,7 @@ def test_value_range_check_options():
value_range_check = ValueRangeCheckTrans()
value_range_check.apply(invoke.schedule[0].loop_body[0],
options={"region_name": ("a", "b")})
- code = str(invoke.gen())
+ code = fortran_writer(invoke.schedule)
assert 'CALL value_range_check_psy_data % PreStart("a", "b", 4, 2)' in code
@@ -175,10 +175,7 @@ def test_value_range_check_psyir_visitor(fortran_writer):
# -----------------------------------------------------------------------------
def test_value_range_check_lfric():
- '''Check that the value range check transformation works in LFRic.
- Use the old-style gen_code based implementation.
-
- '''
+ '''Check that the value range check transformation works in LFRic.'''
psy, invoke = get_invoke("1.2_multi_invoke.f90", api="lfric",
idx=0, dist_mem=False)
@@ -191,11 +188,13 @@ def test_value_range_check_lfric():
# (first line), and some declaration and provide variable before and
# after the kernel:
expected = [
- 'CALL value_range_check_psy_data%PreStart("multi_invoke_psy", '
+ 'CALL value_range_check_psy_data % PreStart("multi_invoke_psy", '
'"invoke_0-r0", 20, 2)',
- 'CALL value_range_check_psy_data%PreDeclareVariable("a", a)',
- 'CALL value_range_check_psy_data%ProvideVariable("m1_data", m1_data)',
- 'CALL value_range_check_psy_data%ProvideVariable("f1_data", f1_data)']
+ 'CALL value_range_check_psy_data % PreDeclareVariable("a", a)',
+ 'CALL value_range_check_psy_data % ProvideVariable("m1_data", '
+ 'm1_data)',
+ 'CALL value_range_check_psy_data % ProvideVariable("f1_data", '
+ 'f1_data)']
for line in expected:
assert line in code
diff --git a/src/psyclone/tests/utilities.py b/src/psyclone/tests/utilities.py
index 3486d676da..5c842d3153 100644
--- a/src/psyclone/tests/utilities.py
+++ b/src/psyclone/tests/utilities.py
@@ -337,19 +337,19 @@ def compile_file(self, filename, link=False):
raise CompileError(output)
def _code_compiles(self, psy_ast, dependencies=None):
- '''Attempts to build the Fortran code supplied as an AST of
- f2pygen objects. Returns True for success, False otherwise.
+ '''
+        Use the given PSy object to generate the necessary PSyKAl components
+        to compile the PSy layer. Returns True for success, False otherwise.
It is meant for internal test uses only, and must only be
called when compilation is actually enabled (use code_compiles
        otherwise). All files produced are deleted.
- :param psy_ast: The AST of the generated PSy layer.
+ :param psy_ast: The PSy object to build.
:type psy_ast: :py:class:`psyclone.psyGen.PSy`
- :param dependencies: optional module- or file-names on which \
- one or more of the kernels/PSy-layer depend (and \
- that are not part of e.g. the GOcean or LFRic \
- infrastructure). These dependencies will be built \
- in the order they occur in this list.
+ :param dependencies: optional module- or file-names on which one or
+ more of the kernels/PSy-layer depend (and that are not part of
+ e.g. the GOcean or LFRic infrastructure). These dependencies will
+ be built in the order they occur in this list.
:type dependencies: List[str]
:return: True if generated code compiles, False otherwise.
@@ -365,12 +365,10 @@ def _code_compiles(self, psy_ast, dependencies=None):
for symbol in scope.symbol_table.containersymbols:
modules.add(symbol.name)
- # Not everything is captured by PSyIR yet (some API PSy-layers are
- # fully or partially f2pygen), in these cases we still need to
- # import the kernel modules used in these PSy-layers.
- # By definition, built-ins do not have associated Fortran modules.
- for call in invoke.schedule.coded_kernels():
- modules.add(call.module_name)
+ # Then also get all the CodedKernels used in all Invokes
+ for invoke in psy_ast.invokes.invoke_list:
+ for kernelcall in invoke.schedule.coded_kernels():
+ modules.add(kernelcall.module_name)
# Change to the temporary directory passed in to us from
# pytest. (This is a LocalPath object.)
@@ -381,7 +379,21 @@ def _code_compiles(self, psy_ast, dependencies=None):
# We limit the line lengths of the generated code so that
# we don't trip over compiler limits.
fll = FortLineLength()
- psy_file.write(fll.process(str(psy_ast.gen)))
+ code = str(psy_ast.gen)
+ psy_file.write(fll.process(code))
+
+        # Not all dependencies are captured by PSyIR as ContainerSymbols
+        # (e.g. multiple versions of coded kernels are not given a module
+        # name until code generation, depending on what already exists in
+        # the filesystem). In these cases we take advantage of the fact that
+        # the PSy-layer always uses the _mod convention and search the
+        # output code for these additional dependencies that we need to compile.
+ for name in code.split():
+ if name.endswith(('_mod', '_mod,')):
+                # Delete the trailing ',' in the case of 'use name, only: ...'
+ if name[-1] == ',':
+ name = name[:-1]
+ modules.add(name)
success = True
@@ -413,7 +425,7 @@ def _code_compiles(self, psy_ast, dependencies=None):
except IOError:
# Not all modules need to be found, for example API
# infrastructure modules will be provided already built.
- print(f"File {fort_file} not found for compilation.")
+ print(f"File '{fort_file}' not found for compilation.")
paths = [self.base_path, str(self._tmpdir)]
print(f"It was searched in: {paths}")
except CompileError:
@@ -430,12 +442,12 @@ def _code_compiles(self, psy_ast, dependencies=None):
return success
def code_compiles(self, psy_ast, dependencies=None):
- '''Attempts to build the Fortran code supplied as an AST of
- f2pygen objects. Returns True for success, False otherwise.
+ '''Attempts to build the Fortran code supplied as a PSy object.
+ Returns True for success, False otherwise.
If compilation is not enabled returns true. Uses _code_compiles
- for the actual compilation. All files produced are deleted.
+ for the actual compilation.
- :param psy_ast: The AST of the generated PSy layer.
+ :param psy_ast: The generated PSy layer.
:type psy_ast: :py:class:`psyclone.psyGen.PSy`
:param dependencies: optional module- or file-names on which \
one or more of the kernels/PSy-layer depend (and \
diff --git a/src/psyclone/transformations.py b/src/psyclone/transformations.py
index 3152437bbc..b8fb395610 100644
--- a/src/psyclone/transformations.py
+++ b/src/psyclone/transformations.py
@@ -319,8 +319,7 @@ def apply(self, node, options=None):
end do
!$OMP END TASKLOOP
- At code-generation time (when
- :py:meth:`OMPTaskloopDirective.gen_code` is called), this node must be
+ At code-generation time (when lowering is called), this node must be
within (i.e. a child of) an OpenMP SERIAL region.
If the keyword "nogroup" is specified in the options, it will cause a
@@ -675,8 +674,7 @@ def apply(self, node, options=None):
...
end do
- At code-generation time (when
- :py:meth:`psyclone.psyir.nodes.ACCLoopDirective.gen_code` is called),
+ At code-generation time (when lowering is called),
this node must be within (i.e. a child of) a PARALLEL region.
:param node: the supplied node to which we will apply the
@@ -747,9 +745,9 @@ def apply(self, node, options=None):
!$OMP END PARALLEL DO
:param node: the node (loop) to which to apply the transformation.
- :type node: :py:class:`psyclone.f2pygen.DoGen`
- :param options: a dictionary with options for transformations\
- and validation.
+ :type node: :py:class:`psyclone.psyir.nodes.Loop`
+ :param options: a dictionary with options for transformations
+ and validation.
:type options: Optional[Dict[str, Any]]
'''
self.validate(node, options=options)
@@ -1416,10 +1414,8 @@ def apply(self, node_list, options=None):
'''Apply the OMPSingleTrans transformation to the specified node in a
Schedule.
- At code-generation time this node must be within (i.e. a child of)
- an OpenMP PARALLEL region. Code generation happens when
- :py:meth:`OMPLoopDirective.gen_code` is called, or when the PSyIR
- tree is given to a backend.
+ At code-generation time (when lowering is called) this node must be
+ within (i.e. a child of) an OpenMP PARALLEL region.
If the keyword "nowait" is specified in the options, it will cause a
nowait clause to be added if it is set to True, otherwise no clause
diff --git a/tutorial/practicals/LFRic/building_code/1_simple_kernels/part1/README.md b/tutorial/practicals/LFRic/building_code/1_simple_kernels/part1/README.md
index 052f34dcfa..d2ae9842ca 100644
--- a/tutorial/practicals/LFRic/building_code/1_simple_kernels/part1/README.md
+++ b/tutorial/practicals/LFRic/building_code/1_simple_kernels/part1/README.md
@@ -227,7 +227,7 @@ argument lists for each kernel:
```fortran
!
- ! Call our kernels
+ ! Call kernels
!
DO cell=1,field_w0_proxy%vspace%get_ncell()
!
diff --git a/tutorial/practicals/LFRic/building_code/2_built_ins/README.md b/tutorial/practicals/LFRic/building_code/2_built_ins/README.md
index 1802e15dbb..310997ce53 100644
--- a/tutorial/practicals/LFRic/building_code/2_built_ins/README.md
+++ b/tutorial/practicals/LFRic/building_code/2_built_ins/README.md
@@ -300,7 +300,7 @@ The generated code for the specific mathematical operation here
```fortran
!
- ! Call our kernels
+ ! Call kernels
!
...
DO df=1,undf_aspc1_field_out_w0
diff --git a/tutorial/practicals/LFRic/distributed_memory/1_distributed_memory/README.md b/tutorial/practicals/LFRic/distributed_memory/1_distributed_memory/README.md
index 3ac2883609..d51f06e20c 100644
--- a/tutorial/practicals/LFRic/distributed_memory/1_distributed_memory/README.md
+++ b/tutorial/practicals/LFRic/distributed_memory/1_distributed_memory/README.md
@@ -73,14 +73,14 @@ in the description means `cells`!) with an upper loop bound of
```bash
InvokeSchedule[invoke='invoke_0', dm=False]
0: Loop[type='dofs', field_space='any_space_1', it_space='dof', upper_bound='ndofs']
- Literal[value:'NOT_INITIALISED', Scalar]
- Literal[value:'NOT_INITIALISED', Scalar]
+ Reference[name:'loop0_start']
+ Reference[name:'loop0_stop']
Literal[value:'1', Scalar]
Schedule[]
0: BuiltIn setval_c(grad_p,0.0_r_def)
1: Loop[type='', field_space='any_space_1', it_space='cell_column', upper_bound='ncells']
- Literal[value:'NOT_INITIALISED', Scalar]
- Literal[value:'NOT_INITIALISED', Scalar]
+ Reference[name:'loop1_start']
+ Reference[name:'loop1_stop']
Literal[value:'1', Scalar]
Schedule[]
0: CodedKern scaled_matrix_vector_code(grad_p,p,div_star,hb_inv) [module_inline=False]
@@ -98,7 +98,7 @@ Take a look at the generated psy-layer Fortran code:
As you will see, there is quite a bit of lookup code generated which
extracts the appropriate values from the infrastructure and the data
objects passed from the algorithm layer, however, the code performing
-the looping (after the `Call our kernels` comment) is relatively short
+the looping (after the `Call kernels` comment) is relatively short
and concise.
You should see that the upper bound for the builtin kernel loop is the
@@ -149,8 +149,8 @@ whilst the rest have `check_dirty=True`.
```bash
InvokeSchedule[invoke='invoke_0', dm=True]
0: Loop[type='dofs', field_space='any_space_1', it_space='dof', upper_bound='ndofs']
- Literal[value:'NOT_INITIALISED', Scalar]
- Literal[value:'NOT_INITIALISED', Scalar]
+ Reference[name:'loop0_start']
+ Reference[name:'loop0_stop']
Literal[value:'1', Scalar]
Schedule[]
0: BuiltIn setval_c(grad_p,0.0_r_def)
@@ -159,8 +159,8 @@ InvokeSchedule[invoke='invoke_0', dm=True]
3: HaloExchange[field='div_star', type='region', depth=1, check_dirty=True]
4: HaloExchange[field='hb_inv', type='region', depth=1, check_dirty=True]
5: Loop[type='', field_space='any_space_1', it_space='cell_column', upper_bound='cell_halo(1)']
- Literal[value:'NOT_INITIALISED', Scalar]
- Literal[value:'NOT_INITIALISED', Scalar]
+ Reference[name:'loop1_start']
+ Reference[name:'loop1_stop']
Literal[value:'1', Scalar]
Schedule[]
0: CodedKern scaled_matrix_vector_code(grad_p,p,div_star,hb_inv) [module_inline=False]
@@ -186,20 +186,16 @@ loops.
```fortran
! Call kernels and communication routines
- !
- DO df=1,grad_p_proxy%vspace%get_last_dof_owned()
+ do df=1,grad_p_proxy%vspace%get_last_dof_owned()
grad_p_proxy%data(df) = 0.0_r_def
- END DO
- !
+ enddo
+
! Set halos dirty/clean for fields modified in the above loop
- !
- CALL grad_p_proxy%set_dirty()
- !
- CALL grad_p_proxy%halo_exchange(depth=1)
- !
- IF (p_proxy%is_dirty(depth=1)) THEN
- CALL p_proxy%halo_exchange(depth=1)
- END IF
+ call grad_p_proxy%set_dirty()
+ call grad_p_proxy%halo_exchange(depth=1)
+ if (p_proxy%is_dirty(depth=1)) then
+ call p_proxy%halo_exchange(depth=1)
+ end if
...
```
diff --git a/tutorial/practicals/LFRic/distributed_memory/3_overlapping_comms/README.md b/tutorial/practicals/LFRic/distributed_memory/3_overlapping_comms/README.md
index 88154d5758..a0b138201f 100644
--- a/tutorial/practicals/LFRic/distributed_memory/3_overlapping_comms/README.md
+++ b/tutorial/practicals/LFRic/distributed_memory/3_overlapping_comms/README.md
@@ -138,8 +138,8 @@ You will see that all halo exchanges have been converted to asynchronous halo ex
```bash
InvokeSchedule[invoke='invoke_0', dm=True]
0: Loop[type='dofs', field_space='any_space_1', it_space='dof', upper_bound='ndofs']
- Literal[value:'NOT_INITIALISED', Scalar]
- Literal[value:'NOT_INITIALISED', Scalar]
+ Reference[name:'loop0_start']
+ Reference[name:'loop0_stop']
Literal[value:'1', Scalar]
Schedule[]
0: BuiltIn setval_c(grad_p,0.0_r_def)
@@ -225,8 +225,8 @@ output. For example:
```bash
InvokeSchedule[invoke='invoke_0', dm=True]
0: Loop[type='dofs', field_space='any_space_1', it_space='dof', upper_bound='ndofs']
- Literal[value:'NOT_INITIALISED', Scalar]
- Literal[value:'NOT_INITIALISED', Scalar]
+ Reference[name:'loop0_start']
+ Reference[name:'loop0_stop']
Literal[value:'1', Scalar]
Schedule[]
0: BuiltIn setval_c(grad_p,0.0_r_def)