  1. Documentation - Apache Spark

    Apache Spark™ Documentation: setup instructions, programming guides, and other documentation are available for each stable version of Spark below, including Spark 4.1.0.

  2. PySpark Overview — PySpark 4.1.0 documentation - Apache Spark

    Dec 11, 2025 · Spark Connect is a client-server architecture within Apache Spark that enables remote connectivity to Spark clusters from any application. PySpark provides the client for the Spark …

  3. Structured Streaming Programming Guide - Spark 4.1.0 Documentation

    In a version of Spark that supports changelog checkpointing, you can migrate streaming queries from older versions of Spark by enabling changelog checkpointing in the …
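    A minimal sketch of what enabling this looks like, assuming the RocksDB state store provider (changelog checkpointing is a RocksDB state store feature; the exact configuration keys below are taken from the Spark docs, but verify them against your Spark version):

    ```python
    from pyspark.sql import SparkSession

    # Build a session whose streaming queries use the RocksDB state store
    # with changelog checkpointing enabled: each micro-batch commits a small
    # changelog instead of snapshotting the full state store.
    spark = (
        SparkSession.builder
        .config("spark.sql.streaming.stateStore.providerClass",
                "org.apache.spark.sql.execution.streaming.state.RocksDBStateStoreProvider")
        .config("spark.sql.streaming.stateStore.rocksdb.changelogCheckpointing.enabled",
                "true")
        .getOrCreate()
    )
    ```
    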

  4. Spark Release 4.0.0 - Apache Spark

    Apache Spark 4.0.0 marks a significant milestone as the inaugural release in the 4.x series, embodying the collective effort of the vibrant open-source community.

  5. Performance Tuning - Spark 4.1.0 Documentation

    Apache Spark’s ability to choose the best execution plan among many possible options is determined in part by its estimates of how many rows will be output by every node in the execution plan (read, filter, …

  6. Spark 4.0.0 released - Apache Spark

    We are happy to announce the availability of Spark 4.0.0! Visit the release notes to read about the new features, or download the release today.

  7. Overview - Spark 3.5.6 Documentation

    Scala and Java users can include Spark in their projects using its Maven coordinates and Python users can install Spark from PyPI. If you’d like to build Spark from source, visit Building Spark.
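    For Python users, the PyPI route is a one-line setup (assuming a working Java installation, which Spark requires):

    ```shell
    # Install PySpark from PyPI; this bundles Spark itself,
    # so no separate Spark download is needed for local use.
    pip install pyspark
    ```
    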

  8. Building Spark - Spark 4.0.0 Documentation

    Spark now comes packaged with a self-contained Maven installation, located under the build/ directory, to ease building and deploying Spark from source. This script will automatically download and …

  9. Structured Streaming Programming Guide - Spark 4.1.0 Documentation

    You will express your streaming computation as a standard batch-like query, as on a static table, and Spark runs it as an incremental query on the unbounded input table.

  10. Quickstart: DataFrame — PySpark 4.1.0 documentation - Apache Spark

    DataFrame and Spark SQL share the same execution engine, so they can be used interchangeably. For example, you can register the DataFrame as a table and easily run SQL queries against it.
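    A short sketch of that round trip, using a local session and toy data (the master, app name, and sample rows are illustrative, not from the original page):

    ```python
    from pyspark.sql import SparkSession

    # Start a local Spark session for illustration.
    spark = SparkSession.builder.master("local[*]").appName("sql-demo").getOrCreate()

    df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])

    # Register the DataFrame as a temporary view so SQL can query it;
    # the SQL result is itself a DataFrame, executed by the same engine.
    df.createOrReplaceTempView("people")
    result = spark.sql("SELECT name FROM people WHERE id = 2")
    result.show()
    ```
    
    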