API Usage Tutorial

Cloudera Manager Concepts

The API terminology is similar to that used in the web UI:

Cluster

A cluster is a set of hosts running interdependent services. All services in a cluster have the same CDH version. A Cloudera Manager installation may manage multiple clusters, each uniquely identified by its name.

You can issue commands against a cluster.
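
For example, you can list a cluster's currently active commands through its /commands resource. A minimal sketch, assuming the cluster name from the examples below (an empty items list means nothing is running):

$ curl -u admin:admin \
  'http://localhost:7180/api/v1/clusters/Cluster%201%20-%20CDH4/commands'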

Service

A service is an abstract entity providing a capability in a cluster. Examples of services are HDFS, MapReduce, YARN, and HBase. A service is usually distributed, and contains a set of roles that physically run on the cluster. A service has its own configuration, status and roles. You may issue commands against a service, or against a set of roles in bulk. Additionally, an HDFS service has nameservices, and a MapReduce service has activities.

All services belong to a cluster (except for the Cloudera Management Service), and each is uniquely identified by its name within a Cloudera Manager installation. The types of services available depend on the CDH version of the cluster.

Role

A role performs specific actions for a service and is assigned to a host. It usually runs as a daemon process, such as a DataNode or a TaskTracker. (There are exceptions; not all roles are daemon processes.) Once created, a role cannot be reassigned to a different host; you must delete and re-create it.

A role has its own configuration and status. API commands on roles are always issued in bulk at the service level.
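
For example, the following sketch starts two role instances in one call by POSTing their names to the service's roleCommands resource. The service and role names here are illustrative; they match the ones created later in this tutorial:

$ curl -X POST -H "Content-Type:application/json" -u admin:admin \
  -d '{ "items": [ "master1", "rs1" ] }' \
  'http://localhost:7180/api/v1/clusters/Cluster%201%20-%20CDH4/services/my_hbase/roleCommands/start'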

Role Type

Role type refers to the class that a role belongs to. For example, an HBase service has the Master role type and the RegionServer role type. Different service types have different sets of role types. This is not to be confused with a role, which refers to a specific role instance that is physically assigned to a host.

You can specify configuration for a role type, which is inherited by all role instances of that type.
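
For example, this sketch sets a DataNode role type configuration on the hdfs1 service; every DataNode instance then inherits the value unless it overrides it. The parameter name dfs_datanode_handler_count is used here for illustration only; check your version's configuration reference for valid names:

$ curl -X PUT -H "Content-Type:application/json" -u admin:admin \
  -d '{ "roleTypeConfigs": [ { "roleType": "DATANODE", "items": [ { "name": "dfs_datanode_handler_count", "value": "10" } ] } ] }' \
  'http://localhost:7180/api/v1/clusters/Cluster%201%20-%20CDH4/services/hdfs1/config'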

Host

The Cloudera Manager Agent runs on hosts that are managed by Cloudera Manager. You can assign service roles to hosts.
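
To see which hosts are available for role assignment, list the /hosts resource; the hostId values in its output are what role-creation calls expect in their hostRef:

$ curl -u admin:admin 'http://localhost:7180/api/v1/hosts'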

Cloudera Manager

Everything related to the operation of Cloudera Manager is available under the /cm resource. This includes global commands, system configuration, and the Cloudera Management Service.
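
For example, you can read the Cloudera Manager version from the /cm resource, and ask the server for the highest API version it supports:

$ curl -u admin:admin 'http://localhost:7180/api/v1/cm/version'
$ curl -u admin:admin 'http://localhost:7180/api/version'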

Cloudera Management Service

Only available in the Enterprise Edition, the Management Service provides monitoring, diagnostic, and reporting features for your Hadoop clusters. This service operates much like other Hadoop services, except that it does not belong to a cluster.

Metrics

A metric is a property that can be measured to quantify the state of an entity or activity, such as the number of open file descriptors or CPU utilization percentage. The full metric schema is available through the Cloudera Manager API's /timeseries/schema endpoint.

Cloudera Manager supports retrieving metric data using a language called tsquery. Please see the tsquery documentation for more details on how to write a tsquery.
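
Because a tsquery contains spaces and often quotes, it must be URL-encoded in the request. One way to avoid encoding it by hand is to let curl do it, using -G with --data-urlencode, as in this sketch (the endpoint matches the metric query example later in this tutorial):

$ curl -G -u admin:admin \
  --data-urlencode 'query=select dfs_capacity where entityName = "HDFS-1"' \
  'http://localhost:7180/api/v11/timeseries'

The metric schema mentioned above can be fetched the same way:

$ curl -u admin:admin 'http://localhost:7180/api/v11/timeseries/schema'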

Debugging the API

You may enable debug logging in Cloudera Manager for API-related activities. The setting is called "Enable Debugging of API" on the Administration page of the Cloudera Manager Admin Console. When enabled, the Cloudera Manager Server log will contain full traces of all API requests and responses, along with debug logging of the request handling. Due to the large volume of log data this may generate, you should enable it only during development.

API Usage Examples

The following examples use curl without a cookie jar, for ease of copy-and-paste. Note, however, that this is an inefficient way to authenticate, since every request is authenticated from scratch.
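
If you are scripting many calls, a cookie jar lets you authenticate once and reuse the session. A sketch (the cookie file path is arbitrary):

$ # The first request authenticates and saves the session cookie.
$ curl -u admin:admin -c /tmp/cm-cookies.txt 'http://localhost:7180/api/v1/clusters'
$ # Later requests present the cookie instead of re-authenticating.
$ curl -b /tmp/cm-cookies.txt 'http://localhost:7180/api/v1/clusters'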

Explore Around

What clusters do we have?

$ curl -u admin:admin 'http://localhost:7180/api/v1/clusters'
{
  "items" : [ {
    "name" : "Cluster 1 - CDH4",
    "version" : "CDH4"
  }, {
    "name" : "Cluster 2 - CDH3",
    "version" : "CDH3"
  } ]
}

This shows the services running in a cluster, with status and health information (in the Enterprise Edition). Abridged output:

$ curl -u admin:admin \
  'http://localhost:7180/api/v1/clusters/Cluster%201%20-%20CDH4/services'
{
  "items" : [ {
    "name" : "hdfs1",
    "type" : "HDFS",
    "configStale" : false,
    "clusterRef" : { "clusterName" : "Cluster 1 - CDH4" },
    "serviceState" : "STARTED",
    "healthSummary" : "GOOD",
    "healthChecks" : [
      { "name" : "HDFS_CORRUPT_BLOCKS", "summary" : "GOOD" },
      { "name" : "HDFS_DATA_NODES_HEALTHY", "summary" : "GOOD" },
      { "name" : "HDFS_MISSING_BLOCKS", "summary" : "GOOD" },
      { "name" : "HDFS_HA_NAMENODE_HEALTH", "summary" : "GOOD" },
      { "name" : "HDFS_UNDER_REPLICATED_BLOCKS", "summary" : "GOOD" },
      { "name" : "HDFS_CANARY_HEALTH", "summary" : "GOOD" },
      { "name" : "HDFS_FREE_SPACE_REMAINING", "summary" : "GOOD" },
      { "name" : "HDFS_UPGRADE_STATUS", "summary" : "GOOD" },
      { "name" : "HDFS_STANDBY_NAMENODES_HEALTHY", "summary" : "GOOD" }
    ]
  }, {
    "name" : "mapreduce1",
    "type" : "MAPREDUCE",
    "configStale" : false,
    "clusterRef" : { "clusterName" : "Cluster 1 - CDH4" },
    "serviceState" : "STARTED",
    "healthSummary" : "GOOD",
    ...

This shows the custom configuration of hdfs1 and of each of its role types. Config params with default values are excluded; they are shown only in the "full" view.

$ curl -u admin:admin \
  'http://localhost:7180/api/v1/clusters/Cluster%201%20-%20CDH4/services/hdfs1/config'
{
  "roleTypeConfigs" : [ {
    "roleType" : "DATANODE",
    "items" : [ { "name" : "dfs_data_dir_list", "value" : "/dfs/dn" } ]
  }, {
    "roleType" : "NAMENODE",
    "items" : [ { "name" : "dfs_name_dir_list", "value" : "/dfs/nn" } ]
  }, {
    "roleType" : "SECONDARYNAMENODE",
    "items" : [ { "name" : "fs_checkpoint_dir_list", "value" : "/dfs/snn" } ]
  }, {
    "roleType" : "BALANCER", "items" : [ ]
  }, {
    "roleType" : "GATEWAY", "items" : [ ]
  }, {
    "roleType" : "HTTPFS", "items" : [ ]
  }, {
    "roleType" : "FAILOVERCONTROLLER", "items" : [ ]
  } ],
  "items" : [ { "name" : "zookeeper_service", "value" : "zookeeper1" } ]
}

The full configuration view shows all parameters with their descriptions. Abridged output:

$ curl -u admin:admin \
  'http://localhost:7180/api/v1/clusters/Cluster%201%20-%20CDH4/services/hdfs1/config?view=full'
{
  "roleTypeConfigs" : [ {
    "roleType" : "DATANODE",
    "items" : [ {
      "name" : "dfs_data_dir_list",
      "value" : "/dfs/dn",
      "required" : true,
      "displayName" : "DataNode Data Directory",
      "description" : "Comma-delimited list of directories on the local file system where the DataNode stores HDFS block data. Typical values are /data/N/dfs/dn for N = 1, 2, 3... These directories should be mounted using the noatime option and the disks should be configured using JBOD. RAID is not recommended.",
      "relatedName" : "dfs.datanode.data.dir",
      "validationState" : "OK"
    }, {
      "name" : "hadoop_metrics_dir",
      "required" : false,
      "displayName" : "Hadoop Metrics Output Directory",
      "description" : "If using FileContext, directory to write metrics to.",
      "validationState" : "OK",
      "default" : "/tmp/metrics"
    }, {
      "name" : "dfs_datanode_http_port",
      "required" : false,
      "displayName" : "DataNode HTTP Web UI Port",
      "description" : "Port for the DataNode HTTP web UI. Combined with the DataNode's hostname to build its HTTP address.",
      "relatedName" : "dfs.datanode.http.address",
      "validationState" : "OK",
      "default" : "50075"
    }, {
      ...

Add a New Service and Roles

This adds a new HBase service called "my_hbase". The API input is a list of services, for bulk operation. Even though the call creates only one service, it still passes in a list (with one item). The API returns the newly created service.

$ curl -X POST -H "Content-Type:application/json" -u admin:admin \
  -d '{ "items": [ { "name": "my_hbase", "type": "HBASE" } ] }' \
  'http://localhost:7180/api/v1/clusters/Cluster%201%20-%20CDH4/services'
{
  "items" : [ {
    "name" : "my_hbase",
    "type" : "HBASE",
    "configStale" : false,
    "clusterRef" : { "clusterName" : "Cluster 1 - CDH4" },
    "serviceState" : "STOPPED"
  } ]
}

This creates a Master role and a RegionServer role. The API returns the newly created roles.

$ curl -X POST -H "Content-Type:application/json" -u admin:admin \
  -d '{ "items": [ { "name": "master1", "type": "MASTER", "hostRef": { "hostId": "localhost" } }, { "name": "rs1", "type": "REGIONSERVER", "hostRef": { "hostId": "localhost" } } ] }' \
  'http://localhost:7180/api/v1/clusters/Cluster%201%20-%20CDH4/services/my_hbase/roles'
{
  "items" : [ {
    "name" : "master1",
    "type" : "MASTER",
    "configStale" : false,
    "hostRef" : { "hostId" : "localhost" },
    "roleState" : "STOPPED",
    "serviceRef" : { "serviceName" : "my_hbase", "clusterName" : "Cluster 1 - CDH4" }
  }, {
    "name" : "rs1",
    "type" : "REGIONSERVER",
    "configStale" : false,
    "hostRef" : { "hostId" : "localhost" },
    "roleState" : "STOPPED",
    "serviceRef" : { "serviceName" : "my_hbase", "clusterName" : "Cluster 1 - CDH4" }
  } ]
}

Set Configuration

This sets the service dependencies and HDFS root directory for our newly created HBase service. The API returns the set of custom configuration.

$ curl -X PUT -H "Content-Type:application/json" -u admin:admin \
  -d '{ "items": [ { "name": "hdfs_rootdir", "value": "/my_hbase" }, { "name": "zookeeper_service", "value": "zookeeper1" }, { "name": "hdfs_service", "value": "hdfs1" } ] }' \
  'http://localhost:7180/api/v1/clusters/Cluster%201%20-%20CDH4/services/my_hbase/config'
{
  "roleTypeConfigs" : [ {
    "roleType" : "MASTER", "items" : [ ]
  }, {
    "roleType" : "REGIONSERVER", "items" : [ ]
  }, {
    "roleType" : "GATEWAY", "items" : [ ]
  } ],
  "items" : [ {
    "name" : "hdfs_service", "value" : "hdfs1"
  }, {
    "name" : "hdfs_rootdir", "value" : "/my_hbase"
  }, {
    "name" : "zookeeper_service", "value" : "zookeeper1"
  } ]
}

Issue Commands

After setting the root directory, we need to create it in HDFS. There is an HBase service-level command for that. As with all API command calls, the issued command runs asynchronously: the API returns the command object, which may still be active.

$ curl -X POST -u admin:admin \
  'http://localhost:7180/api/v1/clusters/Cluster%201%20-%20CDH4/services/my_hbase/commands/hbaseCreateRoot'
{
  "id" : 142,
  "name" : "CreateRootDir",
  "startTime" : "2012-05-06T20:56:57.918Z",
  "active" : true,
  "serviceRef" : { "serviceName" : "my_hbase", "clusterName" : "Cluster 1 - CDH4" }
}

We can check on the command's status at the /commands endpoint, to see whether it has finished.

$ curl -u admin:admin 'http://localhost:7180/api/v1/commands/142'
{
  "id" : 142,
  "name" : "CreateRootDir",
  "startTime" : "2012-05-06T20:56:57.918Z",
  "endTime" : "2012-05-06T20:57:27.172Z",
  "active" : false,
  "success" : true,
  "resultMessage" : "Successfully created HBase root directory.",
  "serviceRef" : { "serviceName" : "my_hbase", "clusterName" : "Cluster 1 - CDH4" },
  "children" : {
    "items" : [ {
      "id" : 141,
      "name" : "CreateDir",
      "startTime" : "2012-05-06T20:56:58.190Z",
      "endTime" : "2012-05-06T20:57:27.171Z",
      "active" : false,
      "success" : true,
      "resultMessage" : "Sucessfully created directory.",
      "roleRef" : { "roleName" : "hdfs1-NAMENODE-1", "serviceName" : "hdfs1", "clusterName" : "Cluster 1 - CDH4" }
    } ]
  }
}
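
Rather than re-running the check by hand, a script can poll until the command completes. A minimal sketch using curl and grep (it keys off the "active" field; exact whitespace in the JSON may vary, so a real script should use a JSON parser such as jq):

while curl -s -u admin:admin 'http://localhost:7180/api/v1/commands/142' \
    | grep -q '"active" : true'; do
  echo "Command 142 still running..."
  sleep 5
done
echo "Command 142 finished."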

We now start the new HBase service.

$ curl -X POST -u admin:admin \
  'http://localhost:7180/api/v1/clusters/Cluster%201%20-%20CDH4/services/my_hbase/commands/start'
{
  "id" : 145,
  "name" : "Start",
  "startTime" : "2012-05-06T21:00:22.326Z",
  "active" : true,
  "serviceRef" : { "serviceName" : "my_hbase", "clusterName" : "Cluster 1 - CDH4" }
}

Again, we poll to check the command's result.

$ curl -u admin:admin 'http://localhost:7180/api/v1/commands/145'
{
  "id" : 145,
  "name" : "Start",
  "startTime" : "2012-05-06T21:00:22.326Z",
  "endTime" : "2012-05-06T21:00:54.079Z",
  "active" : false,
  "success" : true,
  "resultMessage" : "Service started successfully.",
  "serviceRef" : { "serviceName" : "my_hbase", "clusterName" : "Cluster 1 - CDH4" },
  "children" : {
    "items" : [ {
      "id" : 144,
      "name" : "Start",
      "startTime" : "2012-05-06T21:00:22.375Z",
      "endTime" : "2012-05-06T21:00:48.737Z",
      "active" : false,
      "success" : true,
      "resultMessage" : "Supervisor returned RUNNING",
      "roleRef" : { "roleName" : "master1", "serviceName" : "my_hbase", "clusterName" : "Cluster 1 - CDH4" }
    }, {
      "id" : 143,
      "name" : "Start",
      "startTime" : "2012-05-06T21:00:22.356Z",
      "endTime" : "2012-05-06T21:00:54.075Z",
      "active" : false,
      "success" : true,
      "resultMessage" : "Supervisor returned RUNNING",
      "roleRef" : { "roleName" : "rs1", "serviceName" : "my_hbase", "clusterName" : "Cluster 1 - CDH4" }
    } ]
  }
}

Querying Metric Data

This retrieves DFS capacity metric data for the service HDFS-1.

$ curl -u admin:admin \
  'http://localhost:7180/api/v11/timeseries?query=select%20dfs_capacity,%20dfs_capacity_used,%20dfs_capacity_used_non_hdfs%20where%20entityName=HDFS-1'
{
  "items" : [ {
    "timeSeries": [ {
      "metadata": {
        "metricName": "dfs_capacity",
        "entityName": "HDFS-1",
        "startTime": "2015-09-17T23:42:22.533Z",
        "endTime": "2015-09-17T23:47:22.533Z",
        "attributes": { "clusterName": "Cluster 1", "category": "SERVICE", "clusterDisplayName": "Cluster 1", "active": "true", "serviceType": "HDFS", "serviceDisplayName": "HDFS-1", "version": "CDH 5.7.0", "serviceName": "HDFS-1", "entityName": "HDFS-1" },
        "unitNumerators": [ "bytes" ],
        "unitDenominators": [],
        "expression": "SELECT dfs_capacity WHERE entityName = \"HDFS-1\" AND category = SERVICE",
        "metricCollectionFrequencyMs": 60000,
        "rollupUsed": "RAW"
      },
      "data": [
        { "timestamp": "2015-09-17T23:43:10.599Z", "value": 86909397813, "type": "SAMPLE" },
        { "timestamp": "2015-09-17T23:44:10.605Z", "value": 86909397813, "type": "SAMPLE" },
        { "timestamp": "2015-09-17T23:45:10.608Z", "value": 86909397813, "type": "SAMPLE" },
        { "timestamp": "2015-09-17T23:46:10.615Z", "value": 86909397813, "type": "SAMPLE" },
        { "timestamp": "2015-09-17T23:47:15.613Z", "value": 86909397813, "type": "SAMPLE" }
      ]
    }, {
      "metadata": {
        "metricName": "dfs_capacity_used",
        "entityName": "HDFS-1",
        "startTime": "2015-09-17T23:42:22.533Z",
        "endTime": "2015-09-17T23:47:22.533Z",
        "attributes": { "clusterName": "Cluster 1", "category": "SERVICE", "clusterDisplayName": "Cluster 1", "active": "true", "serviceType": "HDFS", "serviceDisplayName": "HDFS-1", "version": "CDH 5.7.0", "serviceName": "HDFS-1", "entityName": "HDFS-1" },
        "unitNumerators": [ "bytes" ],
        "unitDenominators": [],
        "expression": "SELECT dfs_capacity_used WHERE entityName = \"HDFS-1\" AND category = SERVICE",
        "metricCollectionFrequencyMs": 60000,
        "rollupUsed": "RAW"
      },
      "data": [
        { "timestamp": "2015-09-17T23:43:10.599Z", "value": 1728884736, "type": "SAMPLE" },
        { "timestamp": "2015-09-17T23:44:10.605Z", "value": 1728884736, "type": "SAMPLE" },
        { "timestamp": "2015-09-17T23:45:10.608Z", "value": 1728884736, "type": "SAMPLE" },
        { "timestamp": "2015-09-17T23:46:10.615Z", "value": 1728884736, "type": "SAMPLE" },
        { "timestamp": "2015-09-17T23:47:15.613Z", "value": 1728884736, "type": "SAMPLE" }
      ]
    }, {
      "metadata": {
        "metricName": "dfs_capacity_used_non_hdfs",
        "entityName": "HDFS-1",
        "startTime": "2015-09-17T23:42:22.533Z",
        "endTime": "2015-09-17T23:47:22.533Z",
        "attributes": { "clusterName": "Cluster 1", "category": "SERVICE", "clusterDisplayName": "Cluster 1", "active": "true", "serviceType": "HDFS", "serviceDisplayName": "HDFS-1", "version": "CDH 5.7.0", "serviceName": "HDFS-1", "entityName": "HDFS-1" },
        "unitNumerators": [ "bytes" ],
        "unitDenominators": [],
        "expression": "SELECT dfs_capacity_used_non_hdfs WHERE entityName = \"HDFS-1\" AND category = SERVICE",
        "metricCollectionFrequencyMs": 60000,
        "rollupUsed": "RAW"
      },
      "data": [
        { "timestamp": "2015-09-17T23:43:10.599Z", "value": 1610609973, "type": "SAMPLE" },
        { "timestamp": "2015-09-17T23:44:10.605Z", "value": 1610609973, "type": "SAMPLE" },
        { "timestamp": "2015-09-17T23:45:10.608Z", "value": 1610609973, "type": "SAMPLE" },
        { "timestamp": "2015-09-17T23:46:10.615Z", "value": 1610609973, "type": "SAMPLE" },
        { "timestamp": "2015-09-17T23:47:15.613Z", "value": 1610609973, "type": "SAMPLE" }
      ]
    } ],
    "warnings": [],
    "timeSeriesQuery": "select dfs_capacity, dfs_capacity_used, dfs_capacity_used_non_hdfs where entityName=HDFS-1"
  } ]
}