**Parallel for-Loops (parfor)**

| Function | Description |
| --- | --- |
| parfor | Execute loop iterations in parallel |
| parpool | Create parallel pool on cluster |
| delete (Pool) | Shut down parallel pool |
| gcp | Get current parallel pool |
| parallel.pool.Constant | Build parallel.pool.Constant from data or function handle |
| addAttachedFiles | Attach files or folders to parallel pool |
| updateAttachedFiles | Update attached files or folders on parallel pool |
| listAutoAttachedFiles | List of files automatically attached to job, task, or parallel pool |
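
As a quick orientation, here is a minimal sketch of how the functions above fit together; it assumes the default cluster profile and uses a small, illustrative workload.

```matlab
% Open a pool on the default cluster profile, run a parfor loop, shut down.
pool = parpool;            % gcp would reuse an existing pool instead
n = 100;
sq = zeros(1, n);
parfor i = 1:n
    sq(i) = i^2;           % independent iterations run on the workers
end
delete(pool);              % shut the pool down when finished
```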

**Asynchronous Evaluation (parfeval)**

| Function | Description |
| --- | --- |
| parfeval | Execute function asynchronously on parallel pool worker |
| parfevalOnAll | Execute function asynchronously on all workers in parallel pool |
| fetchOutputs (FevalFuture) | Retrieve all output arguments from Future |
| fetchNext | Retrieve next available unread FevalFuture outputs |
| cancel (FevalFuture) | Cancel queued or running future |
| isequal (FevalFuture) | True if futures have same ID |
| wait (FevalFuture) | Wait for futures to complete |
| parallel.Pool | Access parallel pool |
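
A hedged sketch of the parfeval pattern: submit work asynchronously, then harvest results in completion order with fetchNext. The magic inputs are placeholders for real work.

```matlab
% Submit ten asynchronous evaluations, then collect them as they finish.
p = gcp;                               % current pool (starts one if needed)
for k = 10:-1:1
    f(k) = parfeval(p, @magic, 1, k);  % request 1 output from magic(k)
end
results = cell(1, 10);
for k = 1:10
    [idx, value] = fetchNext(f);       % next finished, unread future
    results{idx} = value;
end
```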

**GPU Computing**

| Function | Description |
| --- | --- |
| gpuArray | Create array on GPU |
| gather | Transfer distributed array or gpuArray to local workspace |
| existsOnGPU | Determine if gpuArray or CUDAKernel is available on GPU |
| gpuDevice | Query or select GPU device |
| gpuDeviceCount | Number of GPU devices present |
| gputimeit | Time required to run function on GPU |
| reset | Reset GPU device and clear its memory |
| wait (GPUDevice) | Wait for GPU calculation to complete |
| arrayfun | Apply function to each element of array on GPU |
| bsxfun | Binary singleton expansion function for gpuArray |
| pagefun | Apply function to each page of array on GPU |
| mexcuda | Compile MEX-function for GPU computation |
| parallel.gpu.CUDAKernel | Create GPU CUDA kernel object from PTX and CU code |
| feval | Evaluate kernel on GPU |
| setConstantMemory | Set constant memory on GPU |
| gpuArray | Array stored on GPU |
| GPUDevice | Graphics processing unit (GPU) |
| CUDAKernel | Kernel executable on GPU |
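
The GPU functions above compose as in the sketch below; it assumes a supported (CUDA-capable) GPU is present.

```matlab
% Basic GPU workflow: move data over, compute, time it, gather back.
gpuDeviceCount                    % how many GPUs are visible
g = gpuDevice;                    % select/query the default device
x = gpuArray(rand(1000));         % 1000x1000 array in GPU memory
y = arrayfun(@(v) v.^2 + 1, x);   % elementwise function runs on the GPU
t = gputimeit(@() x * x);         % typical time for a GPU matrix multiply
result = gather(y);               % copy the result back to the client
reset(g);                         % clear the device's memory when done
```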

**Distributed and Codistributed Arrays (spmd)**

| Function | Description |
| --- | --- |
| distributed | Create distributed array from data in client workspace |
| gather | Transfer distributed array or gpuArray to local workspace |
| spmd | Execute code in parallel on workers of parallel pool |
| Composite | Create Composite object |
| parallel.pool.Constant | Build parallel.pool.Constant from data or function handle |
| codistributed | Create codistributed array from replicated local data |
| parpool | Create parallel pool on cluster |
| delete (Pool) | Shut down parallel pool |
| redistribute | Redistribute codistributed array with another distribution scheme |
| codistributed.build | Create codistributed array from distributed data |
| for (drange) | for-loop over distributed range |
| getLocalPart | Local portion of codistributed array |
| globalIndices | Global indices for local part of codistributed array |
| gop | Global operation across all workers |
| distributed | Access elements of distributed arrays from client |
| codistributed | Access elements of arrays distributed among workers in parallel pool |
| Composite | Access nondistributed variables on multiple workers from client |
| codistributor1d | 1-D distribution scheme for codistributed array |
| codistributor2dbc | 2-D block-cyclic distribution scheme for codistributed array |
| parallel.Pool | Access parallel pool |
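
A sketch combining distributed, spmd, getLocalPart, and Composite results; it assumes an open pool, and the matrix size is arbitrary.

```matlab
% Spread an array over the pool and reduce it from the local parts.
parpool;                            % open a pool (default profile)
D = distributed.rand(4000);         % 4000x4000, partitioned across workers
spmd
    L = getLocalPart(D);            % each worker's slice of the array
    localMax = max(L(:));           % reduce locally...
end
overallMax = max([localMax{:}]);    % ...then combine the Composite values
```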

**MapReduce**

| Function | Description |
| --- | --- |
| mapreduce | Programming technique for analyzing data sets that do not fit in memory |
| mapreducer | Define parallel execution environment for mapreduce |
| partition | Partition a datastore |
| numpartitions | Number of datastore partitions |
| parpool | Create parallel pool on cluster |
| gcp | Get current parallel pool |
| parallel.Pool | Access parallel pool |
| parallel.cluster.Hadoop | Hadoop cluster for mapreducer |
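
A toy mapreduce sketch that counts rows in parallel. The file airlinesmall.csv is one of MATLAB's shipped sample data sets; countMapper and countReducer are hypothetical user-defined functions, shown as comments, that would live on the path as files.

```matlab
% Count the total rows of a datastore with mapreduce on the current pool.
ds = datastore('airlinesmall.csv', 'TreatAsMissing', 'NA');
mr = mapreducer(gcp);                        % parallel execution environment
out = mapreduce(ds, @countMapper, @countReducer, mr);
readall(out)                                 % one key: 'totalRows'

% countMapper.m (hypothetical):
%   function countMapper(data, ~, intermKV)
%       add(intermKV, 'rows', size(data, 1));
%   end
% countReducer.m (hypothetical):
%   function countReducer(~, valueIter, outKV)
%       total = 0;
%       while hasnext(valueIter)
%           total = total + getnext(valueIter);
%       end
%       add(outKV, 'totalRows', total);
%   end
```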

**Cluster Configuration and Profiles**

| Function | Description |
| --- | --- |
| parcluster | Create cluster object |
| parpool | Create parallel pool on cluster |
| gcp | Get current parallel pool |
| parallel.defaultClusterProfile | Examine or set default cluster profile |
| parallel.exportProfile | Export one or more profiles to file |
| parallel.importProfile | Import cluster profiles from file |
| saveProfile | Save modified cluster properties to its current profile |
| saveAsProfile | Save cluster properties to specified profile |
| pctconfig | Configure settings for Parallel Computing Toolbox client session |
| parallel.Pool | Access parallel pool |
| parallel.Cluster | Access cluster properties and behaviors |
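
A sketch tying the profile functions together; the 'local' profile ships with the toolbox, and the NumWorkers value is only an example.

```matlab
% Pick a default profile, tweak the cluster, save, and open a pool.
parallel.defaultClusterProfile('local');  % make 'local' the default
c = parcluster;                           % cluster object from that profile
c.NumWorkers = 4;                         % example property change
saveProfile(c);                           % persist change into the profile
pool = parpool(c);                        % pool on the configured cluster
```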

**Cluster Jobs and Tasks**

| Function | Description |
| --- | --- |
| parcluster | Create cluster object |
| batch | Run MATLAB script or function on worker |
| createJob | Create independent job on cluster |
| createCommunicatingJob | Create communicating job on cluster |
| recreate | Create new job from existing job |
| createTask | Create new task in job |
| parallel.defaultClusterProfile | Examine or set default cluster profile |
| parallel.importProfile | Import cluster profiles from file |
| poolStartup | File for user-defined options to run on each worker when parallel pool starts |
| jobStartup | File for user-defined options to run when job starts |
| taskStartup | User-defined options to run on worker when task starts |
| taskFinish | User-defined options to run on worker when task finishes |
| pctconfig | Configure settings for Parallel Computing Toolbox client session |
| mpiLibConf | Location of MPI implementation |
| mpiSettings | Configure options for MPI communication |
| pctRunOnAll | Run command on client and all workers in parallel pool |
| parallel.Cluster | Access cluster properties and behaviors |
| parallel.Job | Access job properties and behaviors |
| parallel.Task | Access task properties and behaviors |
| pause | Pause MATLAB Job Scheduler (MJS) queue |
| resume | Resume processing queue in MATLAB Job Scheduler |
| cancel | Cancel job or task |
| delete | Remove job or task object from cluster and memory |
| promote | Promote job in MJS cluster queue |
| demote | Demote job in cluster queue |
| changePassword | Prompt user to change MJS password |
| logOut | Log out of MJS cluster |
| findJob | Find job objects stored in cluster |
| findTask | Task objects belonging to job object |
| getDebugLog | Read output messages from job run in CJS cluster |
| getJobClusterData | Get specific user data for job on generic cluster |
| setJobClusterData | Set specific user data for job on generic cluster |
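
How the job and task functions fit together, sketched under the default cluster profile; submit, wait, and fetchOutputs are standard job methods even though this list omits them.

```matlab
% Build an independent job with four tasks, run it, and collect outputs.
c = parcluster;                       % cluster from the default profile
job = createJob(c);                   % independent (noncommunicating) job
for k = 1:4
    createTask(job, @rand, 1, {k});   % each task: one output, rand(k)
end
submit(job);
wait(job);                            % block until all tasks finish
out = fetchOutputs(job);              % 4x1 cell array of task results
delete(job);                          % remove the job from the cluster
```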

**Worker Communication and Utilities**

| Function | Description |
| --- | --- |
| labindex | Index of this worker |
| numlabs | Total number of workers operating in parallel on current job |
| gcat | Global concatenation |
| gop | Global operation across all workers |
| gplus | Global addition |
| pload | Load file into parallel session |
| psave | Save data from communicating job session |
| labBarrier | Block execution until all workers reach this call |
| labBroadcast | Send data to all workers or receive data sent to all workers |
| labProbe | Test whether messages are ready to be received from another worker |
| labReceive | Receive data from another worker |
| labSend | Send data to another worker |
| labSendReceive | Simultaneously send data to and receive data from another worker |
| getCurrentJob | Job object whose task is currently being evaluated |
| getCurrentCluster | Cluster object that submitted current task |
| getCurrentTask | Task object currently being evaluated in this worker session |
| getCurrentWorker | Worker object currently running this session |
| getAttachedFilesFolder | Folder into which AttachedFiles are written |
| parallel.Task | Access task properties and behaviors |
| parallel.Worker | Access worker that ran task |
| pmode | Interactive Parallel Command Window |
| mpiprofile | Profile parallel communication and execution times |
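
A sketch of point-to-point and collective communication inside spmd; it assumes an open pool with at least two workers.

```matlab
% Message passing and a global reduction between pool workers.
spmd
    if labindex == 1
        labSend(rand(1, 5), 2);        % worker 1 sends to worker 2
    elseif labindex == 2
        data = labReceive(1);          % worker 2 receives from worker 1
    end
    labBarrier;                        % wait until every worker gets here
    total = gplus(labindex);           % global sum: 1 + 2 + ... + numlabs
end
```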