Hadoop Weekly Issue #114

29 March 2015

It's no longer a surprise when Spark is a big topic in an issue of Hadoop Weekly, and this week there are four great posts covering optimizing Spark programs, new features in Spark 1.3, and a case study from Edmunds.com. Other topics covered include Docker in YARN, Kerberos-enabled Hadoop, and Kafka. Also be sure to check out the releases, including a new Go implementation of Avro from LinkedIn.

Technical

This tutorial describes how to build a Kerberos-enabled Hadoop cluster inside of a VM (the steps are valuable outside of a VM, too). The author provides a script for setting up Kerberos before running the quickstart wizard that comes with Cloudera Manager. The script, which includes thorough comments, makes Kerberos much less intimidating.

http://blog.cloudera.com/blog/2015/03/how-to-quickly-configure-kerberos-for-your-apache-hadoop-cluster/

This post provides a brief introduction to the DockerContainerExecutor, which was introduced in YARN as part of Apache Hadoop 2.6. It describes one of the main motivations for running YARN containers inside of Docker containers: managing system-level dependencies.

https://www.altiscale.com/hadoop-blog/dockercontainerexecutor/

The following slides and video are from a presentation on optimizing Spark programs given at the recent Strata San Jose conference. Topics covered include understanding shuffle in Spark (and common problems), understanding which code runs on the driver vs. the workers, and tips for organizing code for reusability and testability.

http://www.slideshare.net/databricks/strata-sj-everyday-im-shuffling-tips-for-writing-better-spark-programs
https://www.youtube.com/watch?v=Wg2boMqLjCg&feature=youtu.be
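
For a flavor of the kind of advice in the talk, here's a minimal sketch (not taken from the slides) of one of the most common shuffle tips: prefer reduceByKey over groupByKey so values are combined on the map side before being shuffled. The input path is hypothetical, and `sc` is assumed to be an existing SparkContext (e.g. in spark-shell).

    // A word-count style aggregation over a hypothetical log directory.
    val pairs = sc.textFile("hdfs:///logs/2015-03-29")
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))

    // groupByKey ships every individual value across the network before summing:
    val slow = pairs.groupByKey().mapValues(_.sum)

    // reduceByKey combines values map-side first, so far less data is shuffled:
    val fast = pairs.reduceByKey(_ + _)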

As noted in the Apache Spark 1.3 release, Spark SQL is no longer alpha. This post explains what that guarantee means: binary compatibility across the rest of the Spark 1.x line. It also describes some plans for improving Spark SQL (better integration with Hive), the new data sources API, improvements to Parquet support (automatic partition discovery and schema migration), and support for JDBC data sources.

https://databricks.com/blog/2015/03/24/spark-sql-graduates-from-alpha-in-spark-1-3.html
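
As a rough illustration of the 1.3-era API described in the post, the sketch below reads Parquet data and a JDBC table into DataFrames and joins them. The paths, connection string, and column names are made up, and `sc` is assumed to be an existing SparkContext.

    import org.apache.spark.sql.SQLContext

    val sqlContext = new SQLContext(sc)

    // Read Parquet files as a DataFrame; partition discovery kicks in when the
    // directory layout uses key=value subdirectories.
    val events = sqlContext.parquetFile("hdfs:///warehouse/events")

    // Load a table over JDBC (hypothetical connection string and table name).
    val users = sqlContext.jdbc("jdbc:postgresql://db.example.com/prod", "users")

    // DataFrames from the two sources can be joined directly.
    events.join(users, events("user_id") === users("id")).count()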

The Cloudera blog has a post from a software engineer at Edmunds.com on how they built a Spark Streaming-based analytics dashboard to monitor traffic related to Super Bowl ads. The system also uses Flume, HBase, Solr, Morphlines, and Banana (a port of Kibana to Solr) as well as Algebird's implementation of HyperLogLog. The post is a good end-to-end description of how the system was built and how it works (with screenshots).

http://blog.cloudera.com/blog/2015/03/how-edmunds-com-used-spark-streaming-to-build-a-near-real-time-dashboard/
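
The HyperLogLog piece is worth highlighting: because Algebird's sketches form a monoid, approximate distinct counts can be merged cheaply across records and batches. The sketch below is not Edmunds' code; it assumes an existing DStream[String] named visitorIds and simply shows the general pattern.

    import com.twitter.algebird.HyperLogLogMonoid

    // 12 bits of precision gives roughly a 1-2% error on the distinct-count estimate.
    val hll = new HyperLogLogMonoid(12)

    val approxUniques = visitorIds
      .map(id => hll.create(id.getBytes("UTF-8"))) // one small HLL sketch per record
      .reduce(hll.plus(_, _))                      // merge sketches within each batch

    // Print the estimated number of distinct visitors per batch interval.
    approxUniques.foreachRDD(_.foreach(sketch => println(sketch.estimatedSize)))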

For those looking to scale machine learning implementations, the Databricks blog has a post on Spark 1.3's implementation of Latent Dirichlet Allocation (LDA). The post describes LDA, common use cases, and how it's implemented atop GraphX (the graph API for Spark).

https://databricks.com/blog/2015/03/25/topic-modeling-with-lda-mllib-meets-graphx.html
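
As a minimal sketch of the new Scala API (using a toy, hand-built corpus rather than real data; `sc` is assumed to be an existing SparkContext):

    import org.apache.spark.mllib.clustering.LDA
    import org.apache.spark.mllib.linalg.Vectors

    // Each document is (id, vector of term counts) over a tiny 3-word vocabulary.
    val corpus = sc.parallelize(Seq(
      (0L, Vectors.dense(1.0, 0.0, 3.0)),
      (1L, Vectors.dense(2.0, 1.0, 0.0))
    ))

    // Fit a 2-topic model; the 1.3 implementation runs EM over a GraphX
    // bipartite graph of documents and terms.
    val model = new LDA().setK(2).setMaxIterations(20).run(corpus)

    // topicsMatrix is vocabulary-size x k; column j holds term weights for topic j.
    println(model.topicsMatrix)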

This post describes how to enable support for impersonation from Hue in HBase so that users can only view/modify data that they're allowed to via HBase permissions. It also describes how to configure the HBase Thrift Server for Kerberos authentication. There are screenshots of the Hue HBase application and several troubleshooting steps for common configuration issues.

http://gethue.com/hbase-browsing-with-doas-impersonation-and-kerberos/

As a developer, it can be easy to get used to the peculiarities of a system you're working with. It's good to take a step back and understand these issues (or even decide whether they really are issues!). In this case, the ingest.tips blog has a post that gathers feedback on "what is confusing about Kafka?" In addition to collecting the feedback, it includes responses and links for several of the issues.

http://ingest.tips/2015/03/26/what-is-confusing-about-kafka/

The Hortonworks blog has the third part in a series on anomaly detection in healthcare data. In this post, they use SociaLite, an open-source graph analysis framework, to compute a variant of PageRank. The post gives an overview of SociaLite (which integrates with Python) and describes the implementation used to find anomalies. All code is available on GitHub.

http://hortonworks.com/blog/using-pagerank-to-detect-anomalies-and-fraud-in-healthcare-part3/
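
The post itself uses SociaLite rather than Spark, but for readers who want to experiment with the same idea using tools already covered in this issue, GraphX ships a PageRank implementation out of the box. A rough (and entirely hypothetical) equivalent, assuming `sc` is an existing SparkContext and an edge list with one "srcId dstId" pair per line:

    import org.apache.spark.graphx.GraphLoader

    // Hypothetical edge list, e.g. referrals between providers.
    val graph = GraphLoader.edgeListFile(sc, "hdfs:///referrals/edges.txt")

    // Run PageRank until the scores converge within the given tolerance.
    val ranks = graph.pageRank(0.0001).vertices

    // Vertices with unusually high (or low) rank are candidates for a closer look.
    ranks.sortBy(_._2, ascending = false).take(20).foreach(println)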

Most folks working with batch systems start out with a simple workflow that spawns one job after another via cron. From there, they often move to jobs that run based on the availability of input data. As a post on the Cask blog explains, it's difficult to implement a data-driven workflow efficiently: most systems poll for the availability of input, which can be slow. The Cask Data Application Platform (CDAP) instead uses notifications to trigger jobs. The post describes the architecture in greater detail.

http://blog.cask.co/2015/03/data-driven-job-scheduling-in-hadoop/
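
To make the polling problem concrete, the snippet below sketches what naive poll-based triggering against HDFS looks like; the paths and the runJob stub are hypothetical, and this is not CDAP's code. CDAP's notification-based scheduler is designed to avoid exactly this kind of busy-waiting.

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    // Hypothetical stand-in for whatever launches the downstream job.
    def runJob(): Unit = println("input available, starting job")

    val fs = FileSystem.get(new Configuration())
    val marker = new Path("/data/incoming/2015-03-29/_SUCCESS") // hypothetical marker file

    // Poll-based triggering: wake up once a minute and check for the marker.
    // Latency is bounded by the poll interval, and every idle check is another
    // round trip to the NameNode.
    while (!fs.exists(marker)) {
      Thread.sleep(60 * 1000)
    }
    runJob()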

News

This post seeks to help readers understand the role of stream processing in the big data ecosystem. The author interviews several folks in industry, including Hadoop creator Doug Cutting, to try to answer the question "will streaming completely replace batch?" Reactions are mixed, but everyone seems to agree that stream processing tools for big data are getting better.

http://www.infoworld.com/article/2900504/big-data/beyond-hadoop-streaming-future-of-big-data.html

The agenda for HBaseCon, which takes place May 7th in San Francisco, has been posted. The conference has four tracks—Operations, Development and Internals, Ecosystem, and Use Cases.

http://hbasecon.com/agenda/

O'Reilly has a new video training, "Introduction to Apache Kafka" by Gwen Shapira. The training is just under three hours and is aimed at developers and administrators.

http://shop.oreilly.com/product/0636920038603.do

Releases

Cloudera announced a maintenance release of Apache Accumulo for CDH 5 to fix the POODLE vulnerability.

http://community.cloudera.com/t5/Release-Announcements/Announcing-Apache-Accumulo-on-CDH-5-Maintenance-Release/m-p/25752#U25752

Version 1.1.2 of Luigi, the workflow management tool, was recently released. The new version includes improved support for Spark.

https://github.com/spotify/luigi/releases/tag/1.1.2

The SDK for Google's Cloud Dataflow (similar to many DSLs like Scalding and Spark) is open source. The main "runner" implementation uses the Google Cloud Platform, but there's also an implementation for Apache Spark. This week, the Apache Flink project announced a runner that allows any pipeline written for Cloud Dataflow to run on a Flink cluster.

http://googlecloudplatform.blogspot.com/2015/03/announcing-Google-Cloud-Dataflow-runner-for-Apache-Flink.html

MicroStrategy announced that Apache Drill is certified with the MicroStrategy Analytics Enterprise Platform. The MapR blog has a brief introduction to configuring the integration.

https://www.mapr.com/blog/microstrategy-analytics-apache-drill-and-you

EMC has announced the Federation Business Data Lake, which combines several pieces of software with hardware. The software includes Pivotal HD (with mention of the Open Data Platform), and the hardware includes EMC Isilon.

http://blog.pivotal.io/big-data-pivotal/news-2/new-federation-business-data-lake-should-be-your-silver-bullet-for-big-data-success

Cloudera Director 1.1.1 was released this week. Cloudera Director is a tool for provisioning and managing Hadoop clusters in AWS. This release includes several bug fixes and documentation updates.

http://community.cloudera.com/t5/Release-Announcements/Announcing-Cloudera-Director-1-1-1/m-p/25927#U25927

Cask has announced version 2.8.0 of the Cask Data Application Platform (CDAP). The new version adds namespaces, fork/join for the workflow system, a new metrics layer, and more.

http://blog.cask.co/2015/03/cdap-v2-8-0-is-out-in-the-wild/

Sematext, makers of the SPM performance monitoring system, has announced that SPM now supports monitoring, alerting, and anomaly detection for Apache HBase 0.98. The tool monitors a number of metrics including cache, replication, the WAL, and much more (290 metrics in total).

http://blog.sematext.com/2015/03/24/hbase-0-98-monitoring-support/

LinkedIn has open-sourced a Go (golang) library for Apache Avro. The library, called Goavro, supports decoding and encoding of data according to version 1.7.7 of the Avro specification. More details (including a few limitations) are described on the GitHub site.

https://github.com/linkedin/goavro

Version 0.9.6 of RDMA for Apache Hadoop was released this week. The package is a derivative of Apache Hadoop that allows a cluster to use remote direct memory access (RDMA) interconnects to improve performance. It supports Lustre as well as a hybrid file system in which data is stored both in memory and on disk.

http://hibd.cse.ohio-state.edu/features/#hadoop2

Events

Curated by Datadog ( http://www.datadoghq.com )

UNITED STATES

California

Building an Enterprise Company in a Consumer World, by Mike Olson of Cloudera (Palo Alto) - Wednesday, April 1
http://www.meetup.com/SF-Bay-Areas-Big-Data-Think-Tank/events/221090019/

Getting Started with Spark & Cassandra, by Jon Haddad of Datastax (Culver City) - Thursday, April 2
http://www.meetup.com/Los-Angeles-Apache-Spark-Users-Group/events/220467881/

Arizona

Analyzing Real-World Data with Apache Drill and Hadoop (Tempe) - Wednesday, April 1
http://www.meetup.com/Phoenix-Hadoop-User-Group/events/220024531/

Minnesota

A Taste of Scala (Saint Paul) - Thursday, April 2
http://www.meetup.com/Twin-Cities-Hadoop-User-Group/events/221102533/

Illinois

Hadoop + Spark (Northbrook) - Wednesday, April 1
http://www.meetup.com/The-Data-Scientist_Chicago/events/220800019/

Hadoop Data Hub, New Approaches to Data Management and Discovery (Chicago) - Thursday, April 2
http://www.meetup.com/Chicago-Booth-Big-Data-Analytics-Round-Table/events/221015169/

Wisconsin

Hadoop POC: Lessons Learned at American Family Insurance (Madison) - Tuesday, March 31
http://www.meetup.com/BigDataMadison/events/219135061/

Michigan

Machine Learning with Big Data Using Apache Spark (Okemos) - Tuesday, March 31
http://www.meetup.com/Lansing-Hadoop-Users-Group-Meetup/events/220898415/

Pennsylvania

Apache Phoenix for HBase & Hadoop (Philadelphia) - Tuesday, March 31
http://www.meetup.com/Philadelphia-Hadoop-User-Group/events/191610532/

CANADA

Hands-on: Scalable Big Graph Data Processing in Spark (Vancouver) - Tuesday, March 31
http://www.meetup.com/Vancouver-Spark/events/220007388/

UNITED KINGDOM

Deep Dive into Apache Cassandra, with an Intro to Apache Spark Integration (Manchester) - Wednesday, April 1
http://www.meetup.com/HadoopManchester/events/219960258/

GERMANY

The State of Flink and the Road Ahead (Berlin) - Tuesday, March 31
http://www.meetup.com/Apache-Flink-Meetup/events/221037302/

JAPAN

Bluemix Hadoop (Tokyo) - Tuesday, March 31
http://www.meetup.com/Big-Data-Developers-in-Tokyo/events/221362504/

If you didn't receive this email directly, and you'd like to subscribe to weekly emails please visit http://hadoopweekly.com