Workflows

workchain aiida_cp2k.workchains.base.Cp2kBaseWorkChain

Workchain to run a CP2K calculation with automated error handling and restarts.

Inputs:

  • clean_workdir, Bool, optional – If True, work directories of all called calculation jobs will be cleaned at the end of execution.
  • cp2k, Namespace
    Namespace Ports
    • basissets, Namespace – A dictionary of basissets to be used in the calculations: key is the atomic symbol, value is either a single basisset or a list of basissets. If multiple basissets for a single symbol are passed, it is mandatory to specify a KIND section with a BASIS_SET keyword matching the names (or aliases) of the basissets.
    • code, Code, required – The Code to use for this job.
    • file, Namespace – additional input files
    • metadata, Namespace
      Namespace Ports
      • call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
      • computer, Computer, optional, non_db – When using a “local” code, set the computer on which the calculation should be run.
      • description, str, optional, non_db – Description to set on the process node.
      • dry_run, bool, optional, non_db – When set to True will prepare the calculation job for submission but not actually launch it.
      • label, str, optional, non_db – Label to set on the process node.
      • options, Namespace
        Namespace Ports
        • account, str, optional, non_db – Set the account to use for the queue on the remote computer
        • append_text, str, optional, non_db – Set the calculation-specific append text, which is going to be appended in the scheduler-job script, just after the code execution
        • custom_scheduler_commands, str, optional, non_db – Set a (possibly multiline) string with the commands that the user wants to manually set for the scheduler. Unlike prepend_text, this string is inserted in the scheduler submission file before any non-scheduler command
        • environment_variables, dict, optional, non_db – Set a dictionary of custom environment variables for this calculation
        • import_sys_environment, bool, optional, non_db – If set to true, the submission script will load the system environment variables
        • input_filename, str, optional, non_db
        • max_memory_kb, int, optional, non_db – Set the maximum memory (in kilobytes) to request from the scheduler
        • max_wallclock_seconds, int, optional, non_db – Set the wallclock time in seconds to request from the scheduler
        • mpirun_extra_params, (list, tuple), optional, non_db – Set the extra params to pass to the mpirun (or equivalent) command after the one provided in computer.mpirun_command. Example: mpirun -np 8 extra_params[0] extra_params[1] … exec.x
        • output_filename, str, optional, non_db
        • parser_name, str, optional, non_db
        • prepend_text, str, optional, non_db – Set the calculation-specific prepend text, which is going to be prepended in the scheduler-job script, just before the code execution
        • priority, str, optional, non_db – Set the priority of the job to be queued
        • qos, str, optional, non_db – Set the quality of service to use for the queue on the remote computer
        • queue_name, str, optional, non_db – Set the name of the queue on the remote computer
        • resources, dict, required, non_db – Set the dictionary of resources to be used by the scheduler plugin, such as the number of nodes and CPUs. This dictionary is scheduler-plugin dependent; see the documentation of the scheduler for more details.
        • scheduler_stderr, str, optional, non_db – Filename to which the content of stderr of the scheduler is written.
        • scheduler_stdout, str, optional, non_db – Filename to which the content of stdout of the scheduler is written.
        • submit_script_filename, str, optional, non_db – Filename to which the job submission script is written.
        • withmpi, bool, optional, non_db
      • store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
    • parameters, Dict, required – the input parameters
    • parent_calc_folder, RemoteData, optional – remote folder used for restarts
    • pseudos, Namespace – A dictionary of pseudopotentials to be used in the calculations: key is the atomic symbol, value is either a single pseudopotential or a list of pseudopotentials. If multiple pseudos for a single symbol are passed, it is mandatory to specify a KIND section with a PSEUDOPOTENTIAL keyword matching the names (or aliases) of the pseudopotentials.
    • resources, dict, optional – special settings
    • settings, Dict, optional – additional input parameters
    • structure, StructureData, optional – the main input structure
  • handler_overrides, Dict, optional – Mapping where keys are process handler names and values are booleans: True enables the corresponding handler and False disables it. This overrides the default set by the `enabled` keyword of the `process_handler` decorator with which the method is decorated.
  • max_iterations, Int, optional – Maximum number of iterations for which the work chain will restart the process in an attempt to finish successfully.
  • metadata, Namespace
    Namespace Ports
    • call_link_label, str, optional, non_db – The label to use for the CALL link if the process is called by another process.
    • description, str, optional, non_db – Description to set on the process node.
    • label, str, optional, non_db – Label to set on the process node.
    • store_provenance, bool, optional, non_db – If set to False provenance will not be stored in the database.
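
A minimal sketch of how these inputs might be assembled and submitted from Python, assuming an existing CP2K code and structure in the database. The code label, node pk, file paths, and resource counts are placeholders, not values prescribed by the plugin; basis set and pseudopotential data can be supplied through the basissets/pseudos namespaces or, as in this sketch, as files in the `file` namespace.

    from aiida import load_profile, orm
    from aiida.engine import submit
    from aiida.plugins import WorkflowFactory

    load_profile()

    Cp2kBaseWorkChain = WorkflowFactory('cp2k.base')
    builder = Cp2kBaseWorkChain.get_builder()

    # Code and structure are assumed to exist in the database already.
    builder.cp2k.code = orm.load_code('cp2k@localhost')   # hypothetical code label
    builder.cp2k.structure = orm.load_node(1234)          # hypothetical StructureData pk

    # Input parameters mirror the sections of a CP2K input file.
    builder.cp2k.parameters = orm.Dict(dict={
        'GLOBAL': {'RUN_TYPE': 'ENERGY'},
        'FORCE_EVAL': {
            'METHOD': 'QUICKSTEP',
            'DFT': {
                'BASIS_SET_FILE_NAME': 'BASIS_MOLOPT',
                'POTENTIAL_FILE_NAME': 'GTH_POTENTIALS',
            },
        },
    })

    # Basis set and pseudopotential files passed via the `file` namespace (hypothetical paths).
    builder.cp2k.file = {
        'basis': orm.SinglefileData(file='/path/to/BASIS_MOLOPT'),
        'pseudo': orm.SinglefileData(file='/path/to/GTH_POTENTIALS'),
    }

    # Scheduler options live under cp2k.metadata.options.
    builder.cp2k.metadata.options.resources = {'num_machines': 1, 'num_mpiprocs_per_machine': 4}
    builder.cp2k.metadata.options.max_wallclock_seconds = 3600

    # Work-chain level controls.
    builder.max_iterations = orm.Int(3)
    builder.clean_workdir = orm.Bool(True)

    node = submit(builder)
    print(f'Submitted Cp2kBaseWorkChain<{node.pk}>')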

Outputs:

  • output_bands, BandsData, optional – optional band structure
  • output_parameters, Dict, required – the results of the calculation
  • output_structure, StructureData, optional – optional relaxed structure
  • remote_folder, RemoteData, required – Input files necessary to run the process will be stored in this folder node.
  • retrieved, FolderData, required – Files that are retrieved by the daemon will be stored in this node. By default the stdout and stderr of the scheduler will be added, but one can add more by specifying them in CalcInfo.retrieve_list.
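
Once the work chain has finished, the outputs can be read back from the process node, for example as below. The pk is a placeholder and the keys of output_parameters depend on what the CP2K parser produced for the given run.

    from aiida import load_profile, orm

    load_profile()

    node = orm.load_node(5678)  # hypothetical pk of a finished Cp2kBaseWorkChain

    if node.is_finished_ok:
        # Parsed results; the exact keys depend on the CP2K parser.
        results = node.outputs.output_parameters.get_dict()
        print('Energy:', results.get('energy'), results.get('energy_units'))

        # Optional outputs are only present when the calculation produced them.
        if 'output_structure' in node.outputs:
            print('Relaxed structure:', node.outputs.output_structure.get_formula())

        # Files retrieved by the daemon (scheduler stdout/stderr plus retrieve_list entries).
        print(node.outputs.retrieved.list_object_names())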

Outline:

setup(Call the `setup` of the `BaseRestartWorkChain` and then create the inputs dictionary in `self.ctx.inputs`. This `self.ctx.inputs` dictionary will be used by the `BaseRestartWorkChain` to submit the calculations in the internal loop.)
while(should_run_process)
    run_process(Run the next process, taking the input dictionary from the context at `self.ctx.inputs`.)
    inspect_process(Analyse the results of the previous process and call the handlers when necessary. If the process is excepted or killed, the work chain will abort. Otherwise any attached handlers will be called in order of their specified priority. If the process failed and no handler returns a report indicating that the error was handled, it is considered an unhandled process failure and the process is relaunched. If this happens twice in a row, the work chain is aborted. In the case that at least one handler returned a report, the following matrix determines the logic that is followed:

        Process result   Handler report?   Handler exit code   Action
        ---------------  ----------------  ------------------  -------
        Success          yes               == 0                Restart
        Success          yes               != 0                Abort
        Failed           yes               == 0                Restart
        Failed           yes               != 0                Abort

    If no handler returned a report and the process finished successfully, the work chain's work is considered done and it will move on to the next step that directly follows the `while` conditional, if there is one defined in the outline.)
results(Attach the outputs specified in the output specification from the last completed process.)
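
The handler mechanics described in inspect_process follow the generic BaseRestartWorkChain pattern from aiida-core. As an illustration only, and not a handler shipped with aiida-cp2k, a subclass could register an additional handler like this (the class name, handler name, and exit status 400 are hypothetical):

    from aiida.engine import ProcessHandlerReport, process_handler
    from aiida_cp2k.workchains.base import Cp2kBaseWorkChain


    class MyCp2kBaseWorkChain(Cp2kBaseWorkChain):
        """Hypothetical subclass that adds one extra error handler."""

        @process_handler(priority=400)
        def handle_out_of_walltime(self, calculation):
            """Restart from the remote folder if the job ran out of walltime.

            A report with a zero exit code tells the base work chain to restart
            (see the matrix in `inspect_process`); a non-zero exit code in the
            report would abort the work chain instead.
            """
            if calculation.exit_status == 400:  # hypothetical exit status for out-of-walltime
                self.ctx.inputs.parent_calc_folder = calculation.outputs.remote_folder
                self.report('walltime exceeded: restarting from the last remote folder')
                return ProcessHandlerReport(do_break=True)
            return None

Such a handler can be switched off for a single run through the handler_overrides input, for example orm.Dict(dict={'handle_out_of_walltime': False}).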