DOCSP-31186 Add sections for connections #243
Conversation
✅ Deploy Preview for docs-spark-connector ready!
nice work! just a few small changes.
source/getting-started.txt
Outdated
Integrations
------------

You can integrate Spark with third-party platforms to use the {+connector-long+} in various external platforms.
i: repetitive use of 'platforms'
Suggested change:
- You can integrate Spark with third-party platforms to use the {+connector-long+} in various external platforms.
+ The following sections describe some of the popular third-party platforms that you can
+ integrate Spark and the {+connector-long+} with.
source/getting-started.txt
Outdated
Amazon EMR
~~~~~~~~~~

Amazon EMR is a managed cluster platform that you can run big data frameworks such as Spark on. To install Spark on an EMR cluster, see
i: such as
Suggested change:
- Amazon EMR is a managed cluster platform that you can run big data frameworks such as Spark on. To install Spark on an EMR cluster, see
+ Amazon EMR is a managed cluster platform that you can use to run big data frameworks like Spark. To install Spark on an EMR cluster, see
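Purely as an illustration of the EMR path (not part of the docs change), a minimal boto3 sketch that provisions a cluster with Spark installed; the region, cluster name, release label, instance types, and IAM role names are all assumptions to adjust for your account:

    import boto3

    emr = boto3.client("emr", region_name="us-east-1")  # assumed region

    # Ask EMR to create a small cluster with the Spark application installed.
    response = emr.run_job_flow(
        Name="spark-connector-demo",          # hypothetical cluster name
        ReleaseLabel="emr-7.1.0",             # assumed release label; check current EMR releases
        Applications=[{"Name": "Spark"}],     # installs Spark on the cluster
        Instances={
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
                {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
            ],
            "KeepJobFlowAliveWhenNoSteps": True,
        },
        JobFlowRole="EMR_EC2_DefaultRole",    # default EMR instance profile
        ServiceRole="EMR_DefaultRole",        # default EMR service role
    )
    print(response["JobFlowId"])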
source/getting-started.txt
Outdated
Docker is an open-source platform that helps developers build, share, and run applications in containers. The following steps guide you through the process of connecting to a Docker container
and integrating the {+connector-long+} in Docker.

1. To start Spark in a Docker container, see `Apache Spark <https://hub.docker.com/r/apache/spark#!>`__ in the Docker documentation and follow the steps provided.
#. See `Create a Local Atlas Deployment with Docker <https://www.mongodb.com/docs/atlas/cli/current/atlas-cli-deploy-docker/>`__ to deploy Atlas on Docker.
#. Select the appropriate language in the tabs under :ref:`Getting Started <pyspark-shell>` and follow the steps provided.
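For context on that final step, a hedged PySpark sketch of the kind of connection the :ref:`Getting Started <pyspark-shell>` page walks through; the connection string, namespace, and connector package coordinates are assumptions, so substitute your own values:

    from pyspark.sql import SparkSession

    # Assumed local deployment and namespace; replace with your own connection string.
    uri = "mongodb://localhost:27017/test.myCollection?directConnection=true"

    spark = (
        SparkSession.builder
        .appName("mongodb-spark-example")
        # Connector coordinates are an assumption; match the version to your Spark and Scala build.
        .config("spark.jars.packages", "org.mongodb.spark:mongo-spark-connector_2.12:10.3.0")
        .config("spark.mongodb.read.connection.uri", uri)
        .config("spark.mongodb.write.connection.uri", uri)
        .getOrCreate()
    )

    # Read the collection named in the connection string into a DataFrame.
    df = spark.read.format("mongodb").load()
    df.printSchema()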
i: 'The following steps' suggests this is going to be a procedure, which feels out of place with the rest of these sections. I would probably just break it into separate links (and the last step is hopefully already understood, so I would leave it out):
Suggested change:
- Docker is an open-source platform that helps developers build, share, and run applications in containers. The following steps guide you through the process of connecting to a Docker container
- and integrating the {+connector-long+} in Docker.
- 1. To start Spark in a Docker container, see `Apache Spark <https://hub.docker.com/r/apache/spark#!>`__ in the Docker documentation and follow the steps provided.
- #. See `Create a Local Atlas Deployment with Docker <https://www.mongodb.com/docs/atlas/cli/current/atlas-cli-deploy-docker/>`__ to deploy Atlas on Docker.
- #. Select the appropriate language in the tabs under :ref:`Getting Started <pyspark-shell>` and follow the steps provided.
+ Docker is an open-source platform that helps developers build, share, and run applications in containers.
+ - To learn how to start Spark in a Docker container, see `Apache Spark <https://hub.docker.com/r/apache/spark#!>`__ in the Docker documentation.
+ - To learn how to deploy Atlas on Docker, see `Create a Local Atlas Deployment with Docker <https://www.mongodb.com/docs/atlas/cli/current/atlas-cli-deploy-docker/>`__.
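To make the first of those links concrete, a small sketch using the Docker SDK for Python (an assumed choice of tooling, not anything the docs prescribe) that starts the official apache/spark image with a PySpark shell; the container name and the pyspark path inside the image are assumptions:

    import docker  # Docker SDK for Python: pip install docker

    client = docker.from_env()

    # Start the official Apache Spark image. The /opt/spark/bin/pyspark path is an
    # assumption about where the image ships the PySpark shell; verify on Docker Hub.
    container = client.containers.run(
        "apache/spark",
        command="/opt/spark/bin/pyspark",
        tty=True,
        stdin_open=True,
        detach=True,
        name="spark-shell",   # hypothetical container name
    )
    print(container.id)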
That's a great suggestion! I agree the steps felt odd, but I wasn't sure how to break out the instructions without having a long paragraph. The bullet points are a good idea 😄
1 SG fix, then LGTM!
source/getting-started.txt
Outdated
#. See `Create a Local Atlas Deployment with Docker <https://www.mongodb.com/docs/atlas/cli/current/atlas-cli-deploy-docker/>`__ to deploy Atlas on Docker.
#. Select the appropriate language in the tabs under :ref:`Getting Started <pyspark-shell>` and follow the steps provided.
- To start Spark in a Docker container, see `Apache Spark <https://hub.docker.com/r/apache/spark#!>`__ in the Docker documentation and follow the steps provided.
- See `Create a Local Atlas Deployment with Docker <https://www.mongodb.com/docs/atlas/cli/current/atlas-cli-deploy-docker/>`__ to deploy Atlas on Docker.
SG: 'Link text should begin with the purpose of the cross-reference'
Suggested change:
- - See `Create a Local Atlas Deployment with Docker <https://www.mongodb.com/docs/atlas/cli/current/atlas-cli-deploy-docker/>`__ to deploy Atlas on Docker.
+ - To learn how to deploy Atlas on Docker, see `Create a Local Atlas Deployment with Docker <https://www.mongodb.com/docs/atlas/cli/current/atlas-cli-deploy-docker/>`__.
* add sections for connections
* fix vale
* fix vale flag
* fix ref
* fix typos
* change steps formatting
* change based on mikes feedback
* fix vale error
* change based on feedback

(cherry picked from commit 75df8c4)
💔 Some backports could not be created
Note: Successful backport PRs will be merged automatically after passing CI.

Manual backport: To create the backport manually, run:

Questions? Please refer to the Backport tool documentation.
* add sections for connections
* fix vale
* fix vale flag
* fix ref
* fix typos
* change steps formatting
* change based on mikes feedback
* fix vale error
* change based on feedback

(cherry picked from commit 75df8c4)
* DOCSP-29861: Cleanup unused files (#200) (cherry picked from commit 05f3125)
* DOCSP-40130 - Note on Sharded Partitioner (#201) Co-authored-by: Nora Reidy <[email protected]> (cherry picked from commit e25e13f)
* Add Netlify config files via upload
* DOCSP-42969 - remove nested admonitions (#204) (#208) (cherry picked from commit b37226e) Co-authored-by: Mike Woofter <[email protected]>
* DOCSP-44953 TOC Relabel (#214) (#217)
* DOCSP-44953 TOC Relabel
* edit
* Mike's Suggestions
* keep configure tls (cherry picked from commit 987747d) Co-authored-by: lindseymoore <[email protected]>
* (DOCSP-45749) Denests last phase 1 nested component for spark connector (#220) (#222)
* Add dependency installation to vale rule (cherry picked from commit 989f378)
* [Spark] Remove autobuilder (#227) (#230) (cherry picked from commit ca7f722) Co-authored-by: Rachel Mackintosh <[email protected]>
* DOCSP-46719 Spark Guide 404 (#235) (#239) (cherry picked from commit 671e355) Co-authored-by: lindseymoore <[email protected]>
* DOCSP-31186 Add sections for connections (#243) (#248)
* add sections for connections
* fix vale
* fix vale flag
* fix ref
* fix typos
* change steps formatting
* change based on mikes feedback
* fix vale error
* change based on feedback (cherry picked from commit 75df8c4)

---------

Co-authored-by: Michael Morisi <[email protected]>
Co-authored-by: Mike Woofter <[email protected]>
Co-authored-by: anabellabuckvar <[email protected]>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: lindseymoore <[email protected]>
Co-authored-by: Sarah Simpers <[email protected]>
Co-authored-by: Rea Rustagi <[email protected]>
Co-authored-by: Rachel Mackintosh <[email protected]>
Pull Request Info
PR Reviewing Guidelines
JIRA - https://jira.mongodb.org/browse/DOCSP-31186
Staging Links
Self-Review Checklist