parpool

Create parallel pool on cluster

Syntax

parpool
parpool(poolsize)
parpool(profilename)
parpool(profilename,poolsize)
parpool(cluster)
parpool(cluster,poolsize)
parpool(___,Name,Value)
poolobj = parpool(___)

Description

parpool enables the full functionality of the parallel language features (parfor and spmd) in MATLAB® by creating a special job on a pool of workers, and connecting the MATLAB client to the parallel pool. If possible, the working folder on the workers is set to match that of the MATLAB client session.

parpool starts a pool using the default cluster profile, with the pool size specified by your parallel preferences and the default profile.

parpool(poolsize) overrides the number of workers specified in the preferences or profile, and starts a pool of exactly that number of workers, even if it has to wait for them to be available. Most clusters have a maximum number of workers they can start. If the profile specifies a MATLAB job scheduler (MJS) cluster, parpool reserves its workers from among those already running and available under that MJS. If the profile specifies a local or third-party scheduler, parpool instructs the scheduler to start the workers for the pool.

parpool(profilename) or parpool(profilename,poolsize) starts a worker pool using the cluster profile identified by profilename.

parpool(cluster) or parpool(cluster,poolsize) starts a worker pool on the cluster specified by the cluster object cluster.

parpool(___,Name,Value) applies the specified values for certain properties when starting the pool.

poolobj = parpool(___) returns a parallel.Pool object to the client workspace representing the pool on the cluster. You can use the pool object to programmatically delete the pool or to access its properties.

Examples

Start a parallel pool using the default profile to define the number of workers.

parpool
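
Start a parallel pool of four workers using the default profile, overriding the pool size set in your parallel preferences (the call waits if the workers are not immediately available):

parpool(4)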

Start a parallel pool of 16 workers using a profile called myProf.

parpool('myProf',16)

Start a parallel pool of 2 workers using the local profile.

parpool('local',2)

Create an object representing the cluster identified by the default profile, and use that cluster object to start a parallel pool. The pool size is determined by the default profile.

c = parcluster
parpool(c)

Start a parallel pool with the default profile, and pass two code files to the workers.

parpool('AttachedFiles',{'mod1.m','mod2.m'})

Create a parallel pool with the default profile, and later delete the pool.

poolobj = parpool;

delete(poolobj)

Find the number of workers in the current parallel pool.

poolobj = gcp('nocreate'); % If no pool, do not create new one.
if isempty(poolobj)
    poolsize = 0;
else
    poolsize = poolobj.NumWorkers
end

Input Arguments

poolsize

Size of the parallel pool, specified as a numeric value.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

profilename

Profile that defines the cluster and its properties, specified as a character vector.

Data Types: char

cluster

Cluster to start the pool on, specified as a cluster object.

Example: c = parcluster();

Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: 'AttachedFiles',{'myFun.m'}

'AttachedFiles'

Files to attach to the pool, specified as a character vector or cell array of character vectors.

With this argument pair, parpool starts a parallel pool and passes the identified files to the workers in the pool. The files specified here are appended to the AttachedFiles property specified in the applicable parallel profile to form the complete list of attached files. The 'AttachedFiles' property name is case sensitive, and must appear as shown.

Example: {'myFun.m','myFun2.m'}

Data Types: char | cell

'AutoAddClientPath'

A logical value (true or false) that controls whether user-added entries on the client's path are added to each worker's path at startup. By default, 'AutoAddClientPath' is set to true.

Data Types: logical
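
For example, to start a pool of two workers on the local cluster without copying the client's user-added path entries to the workers:

parpool('local',2,'AutoAddClientPath',false)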

'EnvironmentVariables'

Names of environment variables to copy from the client session to the workers, specified as a character vector or cell array of character vectors. The names specified here are appended to the 'EnvironmentVariables' property specified in the applicable parallel profile to form the complete list of environment variables. Any listed variable that is not set on the client is not copied to the workers. These environment variables are set on the workers for the duration of the parallel pool.

Data Types: char | cell
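
For example, to copy a hypothetical environment variable named MY_DATA_DIR from the client session to every worker for the duration of the pool:

parpool('EnvironmentVariables',{'MY_DATA_DIR'})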

'SpmdEnabled'

Indication of whether the pool is enabled to support SPMD, specified as a logical value. You can disable support only on a local or MJS cluster. Because parfor iterations do not involve interworker communication, disabling SPMD support this way allows the parallel pool to keep evaluating a parfor-loop even if one or more workers abort during loop execution.

Data Types: logical
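
For example, to start a pool that can keep evaluating parfor-loops even if a worker aborts (a sketch using the local profile):

parpool('local',4,'SpmdEnabled',false)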

Output Arguments

poolobj

Access to parallel pool from client, returned as a parallel.Pool object.

Tips

Note

Remove any startup.m from your MATLAB path if you want to run any parallel code, including parpool. If you have trouble starting the parallel pool, see this MATLAB Answers page: http://uk.mathworks.com/matlabcentral/answers/92124-why-am-i-unable-to-use-parpool-with-the-local-scheduler-or-validate-my-local-configuration-of-parall

  • The pool status indicator in the lower-left corner of the desktop shows the client session connection to the pool and the pool status. Click the icon for a menu of supported pool actions.

    (The indicator icon differs when a pool is running and when no pool is running.)

  • If you set your parallel preferences to automatically create a parallel pool when necessary, you do not need to explicitly call the parpool command. You might explicitly create a pool to control when you incur the overhead time of setting it up, so the pool is ready for subsequent parallel language constructs.
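
    For example, calling parpool before a parfor-loop moves the pool startup cost to a time you choose, so the loop itself starts immediately (a minimal sketch):

    parpool;
    parfor i = 1:100
        y(i) = sqrt(i);   % runs on the pool workers
    end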

  • delete(poolobj) shuts down the parallel pool. Without a parallel pool, spmd and parfor run as a single thread in the client, unless your parallel preferences are set to automatically start a parallel pool for them.

  • When you use the MATLAB editor to update files on the client that are attached to a parallel pool, those updates automatically propagate to the workers in the pool. (This automatic updating does not apply to Simulink® model files. To propagate updated model files to the workers, use the updateAttachedFiles function.)
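
    For example, after editing an attached Simulink model file, you can resend the attached files to the pool workers (a sketch):

    poolobj = gcp;
    updateAttachedFiles(poolobj)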

  • If possible, the working folder on the workers is initially set to match that of the MATLAB client session. Subsequently, the following commands entered in the client Command Window also execute on all the workers in the pool: cd, addpath, and rmpath.

    This behavior allows you to set the working folder and the command search path on all the workers, so that subsequent pool activities such as parfor-loops execute in the proper context.

    When changing folders or adding a path with cd or addpath on clients with Windows® operating systems, the value sent to the workers is the UNC path for the folder if possible. For clients with Linux® operating systems, it is the absolute folder location.

    If any of these commands does not work on the client, it is not executed on the workers either. For example, if addpath specifies a folder that the client cannot access, the addpath command is not executed on the workers. However, if the working folder can be set on the client, but cannot be set as specified on any of the workers, you do not get an error message returned to the client Command Window.

    Be careful of this slight difference in behavior in a mixed-platform environment where the client is not the same platform as the workers, where folders local to or mapped from the client are not available in the same way to the workers, or where folders are in a nonshared file system. For example, if you have a MATLAB client running on a Microsoft® Windows operating system while the MATLAB workers are all running on Linux operating systems, the same argument to addpath cannot work on both. In this situation, you can use the function pctRunOnAll to assure that a command runs on all the workers.
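
    For example, to add a folder that is visible under the same path on both the client and all workers (the path here is hypothetical):

    pctRunOnAll addpath /network/share/projectCode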

    Another difference between client and workers is that any addpath arguments that are part of the matlabroot folder are not set on the workers. The assumption is that the MATLAB install base is already included in the workers’ paths. The rules for addpath regarding workers in the pool are:

    • Subfolders of the matlabroot folder are not sent to the workers.

    • Any folders that appear before the first occurrence of a matlabroot folder are added to the top of the path on the workers.

    • Any folders that appear after the first occurrence of a matlabroot folder are added after the matlabroot group of folders on the workers’ paths.

    For example, suppose that matlabroot on the client is C:\Applications\matlab\. With an open parallel pool, execute the following to set the path on the client and all workers:

    addpath('P1', ...
            'P2', ...
            'C:\Applications\matlab\T3', ...
            'C:\Applications\matlab\T4', ...
            'P5', ...
            'C:\Applications\matlab\T6', ...
            'P7', ...
            'P8');

    Because T3, T4, and T6 are subfolders of matlabroot, they are not set on the workers’ paths. So on the workers, the pertinent part of the path resulting from this command is:

    P1
    P2
    <worker original matlabroot folders...>
    P5
    P7
    P8

Introduced in R2013b
