1.4. Releasing Apache HBase

HBase 0.96.x will run on hadoop 1.x or hadoop 2.x, but when building, you must choose which to build against; we cannot make a single HBase binary that runs against both hadoop1 and hadoop2. Since we bundle the Hadoop we were built against -- so we can do standalone mode -- the set of modules included in the tarball changes depending on whether the hadoop1 or hadoop2 target was chosen. You can tell which Hadoop an HBase build is for by looking at its version; the HBase for hadoop1 will include 'hadoop1' in its version. Ditto for hadoop2.
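As a quick illustration of that naming convention (the version strings below are invented for the example, not taken from a real build), a script can branch on the marker in the version:

```shell
# Branch on the hadoop marker embedded in an HBase version string.
# Version values here are illustrative only.
for v in 0.96.0-hadoop1 0.96.0-hadoop2; do
  case "$v" in
    *hadoop1*) echo "$v: built against hadoop1" ;;
    *hadoop2*) echo "$v: built against hadoop2" ;;
  esac
done
```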

Maven, our build system, will not natively let you build a single product against different dependencies. That is understandable. But neither could we convince maven to change the set of included modules and write out the correct poms with the appropriate dependencies, even though we have two build targets, one for hadoop1 and another for hadoop2. So, a prestep is required. This prestep takes the current pom.xmls as input and generates hadoop1 or hadoop2 versions of them. You then reference these generated poms when you build. Read on for examples.

Publishing to maven requires you sign the artifacts you want to upload. To have the build do this for you, you need to make sure you have a properly configured settings.xml in your local repository under .m2. Here is my ~/.m2/settings.xml.

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                      http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <servers>
    <!-- To publish a snapshot of some part of Maven -->
    <server>
      <id>apache.snapshots.https</id>
      <username>YOUR_APACHE_ID</username>
      <password>YOUR_APACHE_PASSWORD</password>
    </server>
    <!-- To publish a website using Maven -->
    <!-- To stage a release of some part of Maven -->
    <server>
      <id>apache.releases.https</id>
      <username>YOUR_APACHE_ID</username>
      <password>YOUR_APACHE_PASSWORD</password>
    </server>
  </servers>
  <profiles>
    <profile>
      <id>apache-release</id>
      <properties>
        <gpg.keyname>YOUR_KEYNAME</gpg.keyname>
        <!-- Keyname is something like 00A5F21E; run gpg with its list-keys option to find it -->
        <gpg.passphrase>YOUR_KEY_PASSWORD</gpg.passphrase>
      </properties>
    </profile>
  </profiles>
</settings>
        

You must use maven 3.0.x (Check by running mvn -version).
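A release script could guard on this up front. A minimal sketch follows; the `mvn -version` output line is hardcoded here so the snippet is self-contained (in a real script you would capture it with `v=$(mvn -version | head -1)`):

```shell
# Bail unless the maven on PATH is 3.0.x.  `mvn -version` prints a first line
# like "Apache Maven 3.0.5"; hardcoded below for a self-contained example.
v="Apache Maven 3.0.5"
case "$v" in
  "Apache Maven 3.0."*) echo "maven ok" ;;
  *) echo "need maven 3.0.x" >&2; exit 1 ;;
esac
```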

1.4.1. Making a Release Candidate

I'll explain by running through the process. See later in this section for more detail on particular steps. The script dev-support/make_rc.sh automates most of this.

The Hadoop How To Release wiki page informs much of the below and may have more detail on particular sections so it is worth review.

Update CHANGES.txt with the changes since the last release (query JIRA, export to excel then hack w/ vim to format to suit CHANGES.txt TODO: Needs detail). Adjust the version in all the poms appropriately; e.g. you may need to remove -SNAPSHOT from all versions. The Versions Maven Plugin can be of use here. To set a version in all poms, do something like this:

$ mvn clean org.codehaus.mojo:versions-maven-plugin:1.3.1:set -DnewVersion=0.96.0

Check in the CHANGES.txt and version changes.

Now, build the src tarball. This tarball is hadoop version independent. It is just the pure src code and documentation without a hadoop1 or hadoop2 taint. Add the -Prelease profile when building; it checks files for licenses and will fail the build if unlicensed files are present.

$ MAVEN_OPTS="-Xmx2g" mvn clean install -DskipTests assembly:single -Dassembly.file=hbase-assembly/src/main/assembly/src.xml -Prelease

Untar the tarball and make sure it looks good (a good test is seeing if you can build from the untarred source). Save it off to a version directory, i.e. a directory somewhere where you are collecting all of the tarballs you will publish as part of the release candidate. For example, if we were building an hbase-0.96.0 release candidate, we might call the directory hbase-0.96.0RC0. Later we will publish this directory as our release candidate up on people.apache.org/~you.

Now we are into the making of the hadoop1 and hadoop2 specific builds. Let's do hadoop1 first. First generate the hadoop1 poms. See the generate-hadoopX-poms.sh script usage for what it expects by way of arguments. You will find it in the dev-support subdirectory. In the below, we generate hadoop1 poms with a version of 0.96.0-hadoop1 (the script will look for a version of 0.96.0 in the current pom.xml).

$ ./dev-support/generate-hadoopX-poms.sh 0.96.0 0.96.0-hadoop1

The script will work silently if all goes well. It will drop a pom.xml.hadoop1 beside all pom.xmls in all modules.
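One quick way to confirm the script did its job is to count the generated poms against the module poms. The sketch below simulates a checkout with a scratch directory tree (module names invented) so it runs standalone; in a real checkout you would just run the two find commands from the top-level directory:

```shell
# Simulate a checkout where pom.xml.hadoop1 was generated beside every pom.xml
# (module names are hypothetical).
top=$(mktemp -d)
mkdir -p "$top/hbase-common" "$top/hbase-server"
for d in "$top" "$top/hbase-common" "$top/hbase-server"; do
  touch "$d/pom.xml" "$d/pom.xml.hadoop1"
done
# The two counts should match: one generated pom per module pom.
poms=$(find "$top" -name pom.xml | wc -l)
generated=$(find "$top" -name pom.xml.hadoop1 | wc -l)
echo "poms=$poms generated=$generated"
```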

Now build the hadoop1 tarball. Note how we reference the new pom.xml.hadoop1 explicitly. We also add the -Prelease profile when building; it checks files for licenses and will fail the build if unlicensed files are present. Do it in two steps: first install into the local repository, and then generate documentation and assemble the tarball (otherwise the build complains that hbase modules are not in the maven repo when we try to do it all in one go, especially on a fresh repo). It seems that you need the install goal in both steps.

$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop1 clean install -DskipTests -Prelease
$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop1 install -DskipTests site assembly:single -Prelease

Untar the generated tarball and check it out. Look at the doc and see if it runs, etc. Is the set of modules appropriate: e.g. do we have an hbase-hadoop2-compat in the hadoop1 tarball (we should not)? If good, copy the tarball to your version directory.
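The module-set spot check could look like the sketch below; it is simulated with a scratch lib/ directory and invented jar names so it runs standalone -- in a real check you would ls the lib/ directory of the untarred tarball:

```shell
# Hedged spot check: a hadoop1 tarball's lib/ should contain no hadoop2-compat jar.
# Directory and jar names below are made up for the example.
lib=$(mktemp -d)
touch "$lib/hbase-common-0.96.0-hadoop1.jar" \
      "$lib/hbase-hadoop1-compat-0.96.0-hadoop1.jar"
if ls "$lib" | grep -q hadoop2-compat; then
  result="unexpected hadoop2 module present"
else
  result="module set looks right"
fi
echo "$result"
```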

I'll tag the release at this point since it's looking good. If we find an issue later, we can delete the tag and start over. The release needs to be tagged when we do the next step.

Now deploy hadoop1 hbase to mvn. Do the mvn deploy and tgz for a particular version all together in the one go, else if you flip between hadoop1 and hadoop2 builds, you might mal-publish poms and hbase-default.xml's (the version interpolations won't match). This time we use the apache-release profile instead of just the release profile when doing mvn deploy; it will invoke the apache pom referenced by our poms. It will also sign the artifacts published to mvn, as long as the settings.xml in your local .m2 repository is configured correctly (your settings.xml adds your gpg password property to the apache-release profile).

$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop1 deploy -DskipTests -Papache-release

The last command above copies all artifacts for hadoop1 up to mvn repo. If no -SNAPSHOT in the version, it puts the artifacts into a staging directory. This is what you want.

hbase-downstreamer

See the hbase-downstreamer test for a simple example of a project that is downstream of hbase and depends on it. Check it out and run its simple test to make sure the hbase-hadoop1 and hbase-hadoop2 maven artifacts are properly deployed to the maven repository.

Let's do the hadoop2 artifacts (read the hadoop1 section above closely before coming here because we don't repeat the explanation in the below).

# Generate the hadoop2 poms.
$ ./dev-support/generate-hadoopX-poms.sh 0.96.0 0.96.0-hadoop2
# Install the hbase hadoop2 jars into local repo then build the doc and tarball
$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop2 clean install -DskipTests -Prelease
$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop2 install -DskipTests site assembly:single -Prelease
# Undo the tgz and check it out.  If good, copy the tarball to your 'version directory'. Now deploy to mvn.
$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop2 deploy -DskipTests -Papache-release
            

At this stage we have three tarballs in our 'version directory' and two sets of artifacts up in maven in the staging area. First let's put the version directory up on people.apache.org. You will need to sign and fingerprint the tarballs before you push them up. In the version directory, do this:

$ for i in *.tar.gz; do echo $i; gpg --print-mds $i > $i.mds ; done
$ for i in *.tar.gz; do echo $i; gpg --armor --output $i.asc --detach-sig $i  ; done
$ cd ..
# Presuming our 'version directory' is named 0.96.0RC0, now copy it up to people.apache.org.
$ rsync -av 0.96.0RC0 people.apache.org:public_html
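Before the rsync, a sanity check that every tarball got the .mds and .asc siblings the loops above produce can save a re-upload. A sketch, run here against a scratch directory with an invented filename so it is self-contained; in practice you would run only the loop, inside your real version directory:

```shell
# Verify each *.tar.gz has its digest (.mds) and signature (.asc) alongside.
# Scratch directory and tarball name are made up for the example.
vd=$(mktemp -d); cd "$vd"
touch hbase-0.96.0-src.tar.gz hbase-0.96.0-src.tar.gz.mds hbase-0.96.0-src.tar.gz.asc
missing=0
for i in *.tar.gz; do
  [ -f "$i.mds" ] && [ -f "$i.asc" ] || { echo "missing sidecar for $i"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all tarballs signed and digested"
```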
        

For the maven artifacts, login at repository.apache.org. Find your artifacts in the staging directory. Close the artifacts. This will give you a URL for the temporary mvn staging repository. Do the closing for both the hadoop1 and hadoop2 repos. See Publishing Maven Artifacts for some pointers.

Note

We no longer publish using the maven release plugin. Instead we do mvn deploy. It seems to give us a backdoor to maven release publishing. If there is no -SNAPSHOT on the version string, then we are 'deployed' to the apache maven repository staging directory, from which we can publish URLs for candidates and later, if they pass, publish as a release (if there is a -SNAPSHOT on the version string, deploy will put the artifacts up into the apache snapshot repos).
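The routing rule in the note above can be sketched as shell logic (this mimics the rule for illustration; it is not maven's actual code):

```shell
# -SNAPSHOT versions go to the snapshot repo; everything else is staged.
route() {
  case "$1" in
    *-SNAPSHOT) echo "apache snapshot repository" ;;
    *)          echo "apache staging repository" ;;
  esac
}
route 0.96.0-hadoop1
route 0.96.0-hadoop1-SNAPSHOT
```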

Make sure the people.apache.org directory is showing -- it can take a while to show -- and that the mvn repo urls are good. Announce the release candidate on the mailing list and call a vote.

A strange issue I ran into was the one where the upload into the apache repository was being sprayed across multiple apache machines making it so I could not release. See INFRA-4482 Why is my upload to mvn spread across multiple repositories?.

1.4.2. Publishing a SNAPSHOT to maven

Make sure your settings.xml is set up properly (see above for how). Make sure the hbase version includes -SNAPSHOT as a suffix. Here is how I published SNAPSHOTs of a checkout that had an hbase version of 0.96.0 in its poms. First we generated the hadoop1 poms with a version that has a -SNAPSHOT suffix. We then installed the build into the local repository. Then we deployed this build to apache. See the output for the location up in apache to where the snapshot is copied. Notice how we add the release profile when installing locally -- to find files that are without a proper license -- and then the apache-release profile to deploy to the apache maven repository.

$ ./dev-support/generate-hadoopX-poms.sh 0.96.0 0.96.0-hadoop1-SNAPSHOT
$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop1 clean install -DskipTests javadoc:aggregate site assembly:single -Prelease
$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop1 -DskipTests deploy -Papache-release

Next, do the same to publish the hadoop2 artifacts.

$ ./dev-support/generate-hadoopX-poms.sh 0.96.0 0.96.0-hadoop2-SNAPSHOT
$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop2 clean install -DskipTests  javadoc:aggregate site assembly:single -Prelease
$ MAVEN_OPTS="-Xmx3g" mvn -f pom.xml.hadoop2 deploy -DskipTests -Papache-release
