Lightning-fast unified analytics engine
Run workloads 100x faster.
Apache Spark achieves high performance for both batch and streaming data, using a state-of-the-art DAG scheduler, a query optimizer, and a physical execution engine.
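As a small illustration of that pipeline, a DataFrame query can be inspected with explain() to see the physical plan the query optimizer produces before the DAG scheduler runs it. The file name and column below are hypothetical, and an active SparkSession named spark (as in the spark-shell) is assumed:

```scala
// Hypothetical input file and column; assumes an active SparkSession named `spark`.
// explain() prints the optimized physical plan the execution engine will run.
import org.apache.spark.sql.functions.col

val events = spark.read.parquet("events.parquet")
events
  .filter(col("status") === "error")
  .groupBy("status")
  .count()
  .explain()
```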
Write applications quickly in Java, Scala, Python, R, and SQL.
Spark offers over 80 high-level operators that make it easy to build parallel apps. And you can use it interactively from the Scala, Python, R, and SQL shells.
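As a sketch of what those operators look like from the Scala shell (the JSON file and its columns are hypothetical):

```scala
// In the spark-shell, a SparkSession named `spark` is already available.
// "logs.json" and the columns `age` and `name.first` are hypothetical.
val df = spark.read.json("logs.json")

df.where("age > 21")
  .select("name.first")
  .show()
```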
Combine SQL, streaming, and complex analytics.
Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application.
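For example, here is a minimal sketch of feeding a SQL query straight into an MLlib estimator within one application; the table and column names are assumptions:

```scala
// Assumes an active SparkSession `spark` and a registered table "people"
// with a vector column `features` and a numeric `label` column (hypothetical).
import org.apache.spark.ml.classification.LogisticRegression

val training = spark.sql("SELECT features, label FROM people")
val model = new LogisticRegression()
  .setMaxIter(10)
  .fit(training)
```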
Spark runs on Hadoop, Apache Mesos, Kubernetes, standalone, or in the cloud. It can access diverse data sources.
You can run Spark using its standalone cluster mode, on EC2, on Hadoop YARN, on Mesos, or on Kubernetes. Access data in HDFS, Alluxio, Apache Cassandra, Apache HBase, Apache Hive, and hundreds of other data sources.
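As an illustration, a single application might read from several of these systems. The URIs, table names, and the Cassandra data source below are assumptions; the Cassandra read requires the external spark-cassandra-connector package on the classpath:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical endpoints and table names throughout.
val spark = SparkSession.builder()
  .appName("MultiSourceExample")
  .enableHiveSupport()   // lets spark.sql() resolve Hive tables
  .getOrCreate()

val logs  = spark.read.text("hdfs://namenode:9000/data/logs")   // HDFS
val users = spark.sql("SELECT * FROM warehouse.users")          // Apache Hive

// Apache Cassandra via the external spark-cassandra-connector package.
val events = spark.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "analytics", "table" -> "events"))
  .load()
```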
Spark is used at a wide range of organizations to process large datasets. You can find many example use cases on the Powered By page.
There are many ways to reach the community.
Apache Spark is built by a wide set of developers from over 300 companies. Since 2009, more than 1200 developers have contributed to Spark!
The project's committers come from more than 25 organizations.
If you'd like to participate in Spark, or contribute to the libraries on top of it, learn how to contribute.
Learning Apache Spark is easy whether you come from a Java, Scala, Python, R, or SQL background:
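For a first hands-on taste, here is a classic word-count sketch in the Scala shell; the input file is whatever text file you point it at:

```scala
// In the spark-shell, `spark` and the implicits needed below are pre-imported;
// outside the shell, add `import spark.implicits._` after creating the session.
val words = spark.read.textFile("README.md")   // any text file
  .flatMap(_.split("\\s+"))
  .filter(_.nonEmpty)

// Count occurrences of each word and show the result.
words.groupByKey(identity).count().show()
```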