Apache Spark

Apache Spark is an open-source cluster computing framework. Originally developed at the University of California, Berkeley's AMPLab, the Spark codebase was later donated to the Apache Software Foundation, which has maintained it since. Spark provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.

Apache Spark provides programmers with an application programming interface centered on a data structure called the resilient distributed dataset (RDD), a read-only multiset of data items distributed over a cluster of machines that is maintained in a fault-tolerant way. It was developed in response to limitations in the MapReduce cluster computing paradigm, which forces a particular linear dataflow structure on distributed programs: MapReduce programs read input data from disk, map a function across the data, reduce the results of the map, and store the reduction results back on disk. Spark's RDDs function as a working set for distributed programs, offering a (deliberately) restricted form of distributed shared memory.
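To make the contrast concrete, here is a minimal, hedged sketch in Scala of the classic word count expressed against the RDD API. The input path input.txt, the output path counts-out, and the local[*] master are hypothetical choices made only so the sketch runs on a single machine; unlike a chain of MapReduce jobs, the intermediate results of the map step stay in memory rather than being written to disk before the reduce step.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Transformations only build a lineage graph; nothing executes until an action runs.
    val counts = sc.textFile("input.txt")   // hypothetical input path
      .flatMap(_.split("\\s+"))             // map side: break lines into words
      .map(word => (word, 1))
      .reduceByKey(_ + _)                   // reduce side: sum the counts per word

    counts.saveAsTextFile("counts-out")     // action: triggers the distributed computation
    sc.stop()
  }
}
```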

The availability of RDDs facilitates the implementation of both iterative algorithms, which visit their dataset multiple times in a loop, and interactive/exploratory data analysis, i.e., the repeated database-style querying of data. The latency of such applications (compared to Apache Hadoop, a popular MapReduce implementation) may be reduced by several orders of magnitude. Among this class of iterative algorithms are the training algorithms for machine learning systems, which formed the initial impetus for developing Apache Spark.
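The following hedged sketch illustrates the iterative case. It assumes an existing SparkContext named sc, a hypothetical file points.txt of comma-separated (x, y) pairs, and a deliberately toy update rule: the parsed dataset is cached in memory once, so each pass of the loop reuses it instead of re-reading and re-parsing it from disk, which is where the latency gain over a MapReduce-job-per-iteration design comes from.

```scala
// Sketch only: `sc` is an existing SparkContext, "points.txt" is a hypothetical
// file of comma-separated (x, y) pairs, and the update rule is a toy example.
val points = sc.textFile("points.txt")
  .map(line => { val Array(x, y) = line.split(",").map(_.toDouble); (x, y) })
  .cache() // keep the parsed records in cluster memory across iterations

var weight = 0.0
for (_ <- 1 to 10) {
  // Each iteration scans the cached RDD in memory; no disk round-trip per pass.
  val gradient = points.map { case (x, y) => (weight * x - y) * x }.mean()
  weight -= 0.1 * gradient
}
println(s"fitted weight: $weight")
```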

Apache Spark requires a cluster manager and a distributed storage system. For cluster management, Spark supports standalone (native Spark cluster), Hadoop YARN, and Apache Mesos.[5] For distributed storage, Spark can interface with a wide variety of systems, including the Hadoop Distributed File System (HDFS), MapR File System (MapR-FS), Cassandra, OpenStack Swift, Amazon S3, and Kudu, or a custom solution can be implemented. Spark also supports a pseudo-distributed local mode, usually used only for development or testing purposes, where distributed storage is not required and the local file system can be used instead; in this scenario, Spark runs on a single machine with one executor per CPU core.
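The choice of cluster manager surfaces in a single configuration value, the master URL. The hedged sketch below lists the common forms; the host names and ports are hypothetical, and the bare "yarn" value assumes Spark 2.x with the Hadoop configuration available on the classpath.

```scala
import org.apache.spark.SparkConf

// The master URL selects the cluster manager (host names are hypothetical):
val standalone = new SparkConf().setMaster("spark://master-host:7077") // native Spark cluster
val mesos      = new SparkConf().setMaster("mesos://mesos-host:5050")  // Apache Mesos
val yarn       = new SparkConf().setMaster("yarn")                     // Hadoop YARN
val local      = new SparkConf().setMaster("local[*]")                 // pseudo-distributed local
                                                                       // mode, one worker thread
                                                                       // per CPU core
```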

Apache Spark is a powerful open-source processing engine built around speed, ease of use, and sophisticated analytics.

Features of Spark

Speed: Run programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk.

Ease of Use: Write applications quickly in Java, Scala, Python, and R.

Generality: Combine SQL, streaming, and complex analytics in the same application (see the sketch after this list).

Runs Everywhere: Spark runs on Hadoop, Mesos, standalone, or in the cloud. It can access diverse data sources including HDFS, Cassandra, HBase, and S3. 
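As a sketch of the generality point above, the same program can mix SQL with programmatic transformations. This assumes the Spark 2.x SparkSession entry point and a hypothetical JSON file people.json with name and age fields; local mode is used only so the sketch is self-contained.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("SqlSketch")
  .master("local[*]") // local mode, for the sketch only
  .getOrCreate()

// Load semi-structured data (hypothetical file) and expose it to SQL.
val people = spark.read.json("people.json")
people.createOrReplaceTempView("people")

// The same data can now be queried with SQL or with DataFrame transformations.
val adults = spark.sql("SELECT name, age FROM people WHERE age >= 18")
adults.show()

spark.stop()
```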

