To quickly install Covalent and run a short demo, follow the four steps below.
Before you start
Ensure you are using a compatible OS and Python version. See the Compatibility page for supported Python versions and operating systems.
1. Use Pip to install the Covalent server and libraries locally.
Type the following in a terminal window:
```shell
$ pip install covalent
```
2. Start the Covalent server.
In the terminal window, type:
```shell
$ covalent start
Covalent server has started at http://localhost:48008
```
3. Run a workflow.
Open a Jupyter notebook or Python console and run the following Python code:
```python
import covalent as ct

# Construct manageable tasks out of functions
# by adding the @ct.electron decorator
@ct.electron
def add(x, y):
    return x + y

@ct.electron
def multiply(x, y):
    return x * y

# Note that electrons can be shipped to a variety of compute
# backends using executors, for example, the "local" computer.
# See below for other common executors.
@ct.electron(executor="local")
def divide(x, y):
    return x / y

# Construct the workflow by stitching together
# the electrons defined earlier in a function with
# the @ct.lattice decorator
@ct.lattice
def workflow(x, y):
    r1 = add(x, y)
    r2 = [multiply(r1, y) for _ in range(4)]
    r3 = [divide(x, value) for value in r2]
    return r3

# Dispatch the workflow
dispatch_id = ct.dispatch(workflow)(1, 2)
result = ct.get_result(dispatch_id)
print(result)
```
4. View the workflow progress.
Navigate to the Covalent UI at http://localhost:48008 to see your workflow in the queue:
Click on the dispatch ID to view the workflow graph:
Note that the computed result is displayed in the Overview.
The following code snippets show the syntax for some of the most popular features within Covalent. Use this as a quick reference, or navigate to further examples in the How-To Guide.
Executors are specified in Electron and Lattice decorators to designate where tasks should run. Note that most executor plugins must be installed as separate Python packages.
Slurm Executor
The Slurm executor generates a batch submission script and interacts with the Slurm scheduler on the user’s behalf.
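A minimal sketch of attaching the Slurm executor to a task. The `covalent-slurm-plugin` package must be installed separately; the connection details (`username`, `address`, `ssh_key_file`, `remote_workdir`) and the Slurm `options` shown here are illustrative placeholders that depend on your cluster.

```python
import covalent as ct

# Illustrative settings; consult the covalent-slurm-plugin
# documentation for the options your cluster requires.
executor = ct.executor.SlurmExecutor(
    username="user",
    address="cluster.address.com",
    ssh_key_file="~/.ssh/id_rsa",
    remote_workdir="/scratch/user",
    options={
        "partition": "general",
        "cpus-per-task": 4,
        "time": "01:00:00",
    },
)

# Tasks decorated with this executor are submitted
# to the Slurm scheduler rather than run locally.
@ct.electron(executor=executor)
def compute(x, y):
    return x + y
```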
File transfers are often used to keep large data files close to the compute where they are used. Covalent supports transferring files to/from arbitrary servers using a generic Rsync strategy, as well as to/from all of the major cloud storage options.
Rsync transfers
Rsync is a generic transfer strategy which uses SSH to authenticate to a remote server. Typically this is used to interact with NAS (Network Attached Storage) systems.
```python
rsync = ct.fs_strategies.Rsync(
    username="user",
    host="storage.address.com",
    private_key_path="~/.ssh/id_rsa",
)

input_file = ct.fs.TransferFromRemote(
    "file:///path/to/remote/input",
    "file:///path/to/local/input",
    strategy=rsync,
)

output_file = ct.fs.TransferToRemote(
    "file:///path/to/remote/output",
    "file:///path/to/local/output",
    strategy=rsync,
)

@ct.electron(files=[input_file, output_file])
def task(files):
    # input_file can be accessed at /path/to/local/input
    # output_file should be written to /path/to/local/output
    ...
```
Covalent allows task dependencies to be specified in the task metadata. When a task runs, it first validates that these dependencies are installed, or attempts to install any that are missing.
Pip Dependencies
Pip dependencies allow users to specify Python packages which are managed by the Pip package-management system.
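A minimal sketch using `ct.DepsPip` to attach Pip-managed packages to a task; the package names and pinned versions below are illustrative.

```python
import covalent as ct

# Packages installed (if missing) before the task executes.
deps = ct.DepsPip(packages=["numpy==1.23.2", "scipy==1.9.1"])

@ct.electron(deps_pip=deps)
def analyze(data):
    import numpy as np
    return float(np.mean(data))
```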
Dynamic workflows allow users to construct dynamic execution patterns based on the outputs of upstream tasks. Advanced users can use these to include conditional logic, to control the degree of parallelism, and to perform real-time scheduling.
Conditional Workflow Logic
Conditional workflow logic includes if/else branches as well as for and while loops.
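A minimal sketch of runtime branching: wrapping a lattice in an electron makes a sublattice, whose body is built at runtime once upstream results are concrete. The task names here (`count_items`, `process`) are hypothetical.

```python
import covalent as ct

@ct.electron
def count_items(data):
    return len(data)

@ct.electron
def process(item):
    return item * 2

# The sublattice's if/else and for loop evaluate at runtime,
# after count_items has returned an actual value.
@ct.electron
@ct.lattice
def conditional_processing(data, n):
    if n > 0:
        return [process(item) for item in data]
    return []

@ct.lattice
def workflow(data):
    n = count_items(data)
    return conditional_processing(data, n)
```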
Hardware selection at runtime allows users to pick resources within a compute backend at runtime. This can be useful when dynamically deciding to add hardware accelerators such as GPUs.
```python
@ct.electron
def get_problem_size():
    ...

def task():
    ...

@ct.electron
def schedule(problem_size, threshold):
    executor_args = {
        # ...
        "options": {"time": "01:00:00"},
    }

    # Request a GPU for large computational problems
    if problem_size > threshold:
        executor_args["options"]["gres"] = "gpu:v100:1"
    else:
        executor_args["options"]["cpus-per-task"] = 4

    return ct.executor.SlurmExecutor(**executor_args)

@ct.electron
@ct.lattice
def dynamic_sublattice(problem_size):
    threshold = 10 ** 6
    return ct.electron(task, executor=schedule(problem_size, threshold))()

@ct.lattice
def workflow():
    problem_size = get_problem_size()
    return dynamic_sublattice(problem_size)
```
Cloudbursting
Cloudbursting is a form of dynamic workflow used in conjunction with multiple executors, where the scheduling decision is made at runtime.
```python
def task():
    ...

electrons = {
    "slurm": ct.electron(task, executor=slurm),
    "azure": ct.electron(task, executor=azure),
}

@ct.electron
def schedule(num_cpu):
    # Query remote backends for availability
    # Return either "slurm" or "azure"
    ...

@ct.electron
@ct.lattice
def dynamic_sublattice(backend):
    return electrons[backend]()

@ct.lattice
def workflow(num_cpu):
    backend = schedule(num_cpu)
    return dynamic_sublattice(backend)
```