
Commit 5fbe8ff

remove nested admonitions
1 parent 3953d83 commit 5fbe8ff

File tree

3 files changed: +20 -35 lines changed


source/includes/note-trigger-method.rst

-4 lines (this file was deleted)

source/streaming-mode/streaming-read-config.txt

+14 -22 lines changed
@@ -82,12 +82,10 @@ You can configure the following properties when reading data from MongoDB in str

       [{"$match": {"closed": false}}, {"$project": {"status": 1, "name": 1, "description": 1}}]

-      .. important::
-
-         Custom aggregation pipelines must be compatible with the
-         partitioner strategy. For example, aggregation stages such as
-         ``$group`` do not work with any partitioner that creates more than
-         one partition.
+      Custom aggregation pipelines must be compatible with the
+      partitioner strategy. For example, aggregation stages such as
+      ``$group`` do not work with any partitioner that creates more than
+      one partition.

   * - ``aggregation.allowDiskUse``
     - | Specifies whether to allow storage to disk when running the
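(Editor's note.) The un-nested paragraph above documents the ``aggregation.pipeline`` read property and its partitioner constraint. As a rough illustration only, a PySpark sketch of passing a partitioner-friendly pipeline when configuring a stream read might look like the following; the connection URI, database, collection, and schema are placeholder assumptions, not values from the docs being changed.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("streaming-read-sketch").getOrCreate()

# $match and $project stages stay compatible with multi-partition
# partitioners; a stage such as $group would not.
pipeline = '[{"$match": {"closed": false}}, {"$project": {"status": 1, "name": 1, "description": 1}}]'

stream_df = (
    spark.readStream.format("mongodb")
    .option("connection.uri", "mongodb://localhost:27017")     # placeholder URI
    .option("database", "support")                              # hypothetical database
    .option("collection", "tickets")                            # hypothetical collection
    .option("aggregation.pipeline", pipeline)
    .schema("status string, name string, description string")   # hypothetical schema
    .load()
)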
@@ -135,14 +133,12 @@ You can configure the following properties when reading a change stream from Mon
       original document and updated document, but it also includes a copy of the
       entire updated document.

+      For more information on how this change stream option works,
+      see the MongoDB server manual guide
+      :manual:`Lookup Full Document for Update Operation </changeStreams/#lookup-full-document-for-update-operations>`.
+
       **Default:** "default"

-      .. tip::
-
-         For more information on how this change stream option works,
-         see the MongoDB server manual guide
-         :manual:`Lookup Full Document for Update Operation </changeStreams/#lookup-full-document-for-update-operations>`.
-
   * - ``change.stream.micro.batch.max.partition.count``
     - | The maximum number of partitions the {+connector-short+} divides each
       micro-batch into. Spark workers can process these partitions in parallel.
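(Editor's note.) For the ``change.stream.lookup.full.document`` property described in this hunk, a hedged continuation of the earlier sketch might request the server's update-lookup behavior as below; the ``updateLookup`` value and all connection details are illustrative assumptions.

# Reuses the `spark` session from the earlier sketch.
lookup_df = (
    spark.readStream.format("mongodb")
    .option("connection.uri", "mongodb://localhost:27017")   # placeholder URI
    .option("database", "support")                            # hypothetical database
    .option("collection", "tickets")                          # hypothetical collection
    # Ask the server to attach the full updated document to update events.
    .option("change.stream.lookup.full.document", "updateLookup")
    .schema("_id string, operationType string")               # hypothetical schema
    .load()
)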
@@ -151,11 +147,9 @@ You can configure the following properties when reading a change stream from Mon
       |
       | **Default**: ``1``

-      .. warning:: Event Order
-
-         Specifying a value larger than ``1`` can alter the order in which
-         the {+connector-short+} processes change events. Avoid this setting
-         if out-of-order processing could create data inconsistencies downstream.
+      :red:`WARNING:` Specifying a value larger than ``1`` can alter the order in which
+      the {+connector-short+} processes change events. Avoid this setting
+      if out-of-order processing could create data inconsistencies downstream.

   * - ``change.stream.publish.full.document.only``
     - | Specifies whether to publish the changed document or the full
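(Editor's note.) The reworded warning above concerns ``change.stream.micro.batch.max.partition.count``. A minimal sketch of opting into more parallelism, with the ordering trade-off noted in a comment; the value and connection details are assumptions.

# Reuses the `spark` session from the earlier sketch.
parallel_df = (
    spark.readStream.format("mongodb")
    .option("connection.uri", "mongodb://localhost:27017")   # placeholder URI
    .option("database", "support")                            # hypothetical database
    .option("collection", "tickets")                          # hypothetical collection
    # Splitting each micro-batch into 4 partitions can reorder change
    # events, per the warning above; keep the default of 1 if order matters.
    .option("change.stream.micro.batch.max.partition.count", "4")
    .schema("_id string, operationType string")               # hypothetical schema
    .load()
)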
@@ -174,12 +168,10 @@ You can configure the following properties when reading a change stream from Mon
       - If you don't specify a schema, the connector infers the schema
         from the change stream document.

-      **Default**: ``false``
+      This setting overrides the ``change.stream.lookup.full.document``
+      setting.

-      .. note::
-
-         This setting overrides the ``change.stream.lookup.full.document``
-         setting.
+      **Default**: ``false``

   * - ``change.stream.startup.mode``
     - | Specifies how the connector starts up when no offset is available.
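(Editor's note.) The relocated sentence above states that ``change.stream.publish.full.document.only`` overrides ``change.stream.lookup.full.document``. A hedged sketch of enabling it; all connection details and the schema are placeholders.

# Reuses the `spark` session from the earlier sketch.
full_doc_df = (
    spark.readStream.format("mongodb")
    .option("connection.uri", "mongodb://localhost:27017")     # placeholder URI
    .option("database", "support")                              # hypothetical database
    .option("collection", "tickets")                            # hypothetical collection
    # Publish only the full changed document; per the docs above, this
    # overrides any change.stream.lookup.full.document setting.
    .option("change.stream.publish.full.document.only", "true")
    .schema("status string, name string, description string")   # hypothetical schema
    .load()
)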

source/streaming-mode/streaming-write.txt

+6 -9 lines changed
@@ -51,7 +51,8 @@ Write to MongoDB in Streaming Mode

   * - ``writeStream.trigger()``
     - Specifies how often the {+connector-short+} writes results
-      to the streaming sink.
+      to the streaming sink. Call this method on the ``DataStreamWriter`` object
+      you create from the ``DataStreamReader`` you configure.

       To use continuous processing, pass ``Trigger.Continuous(<time value>)``
       as an argument, where ``<time value>`` is how often you want the Spark
@@ -62,8 +63,6 @@ Write to MongoDB in Streaming Mode

       To view a list of all supported processing policies, see the `Java
       trigger documentation <https://spark.apache.org/docs/latest/api/java/org/apache/spark/sql/streaming/Trigger.html>`__.
-
-      .. include:: /includes/note-trigger-method

       The following code snippet shows how to use the previous
       configuration settings to stream data to MongoDB:
@@ -119,7 +118,8 @@ Write to MongoDB in Streaming Mode

   * - ``writeStream.trigger()``
     - Specifies how often the {+connector-short+} writes results
-      to the streaming sink.
+      to the streaming sink. Call this method on the ``DataStreamWriter`` object
+      you create from the ``DataStreamReader`` you configure.

       To use continuous processing, pass the function a time value
       using the ``continuous`` parameter.
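(Editor's note.) The revised ``writeStream.trigger()`` description says to call the method on the ``DataStreamWriter`` built from your configured ``DataStreamReader``. A hedged PySpark sketch, reusing ``stream_df`` from the read sketch earlier; the checkpoint path, URI, and names are placeholders.

query = (
    stream_df.writeStream.format("mongodb")
    .option("connection.uri", "mongodb://localhost:27017")   # placeholder URI
    .option("database", "support")                            # hypothetical database
    .option("collection", "tickets_copy")                     # hypothetical collection
    .option("checkpointLocation", "/tmp/checkpointDir")       # placeholder path
    .outputMode("append")
    # Continuous processing roughly every second; use
    # trigger(processingTime="...") instead for micro-batch triggers.
    .trigger(continuous="1 second")
    .start()
)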
@@ -130,8 +130,6 @@ Write to MongoDB in Streaming Mode
       To view a list of all supported processing policies, see
       the `pyspark trigger documentation <https://spark.apache.org/docs/latest/api/python/reference/pyspark.ss/api/pyspark.sql.streaming.DataStreamWriter.trigger.html>`__.

-      .. include:: /includes/note-trigger-method
-
       The following code snippet shows how to use the previous
       configuration settings to stream data to MongoDB:

@@ -186,7 +184,8 @@ Write to MongoDB in Streaming Mode

   * - ``writeStream.trigger()``
     - Specifies how often the {+connector-short+} writes results
-      to the streaming sink.
+      to the streaming sink. Call this method on the ``DataStreamWriter`` object
+      you create from the ``DataStreamReader`` you configure.

       To use continuous processing, pass ``Trigger.Continuous(<time value>)``
       as an argument, where ``<time value>`` is how often you want the Spark
@@ -198,8 +197,6 @@ Write to MongoDB in Streaming Mode
       To view a list of all
       supported processing policies, see the `Scala trigger documentation <https://spark.apache.org/docs/latest/api/scala/org/apache/spark/sql/streaming/DataStreamWriter.html#trigger(trigger:org.apache.spark.sql.streaming.Trigger):org.apache.spark.sql.streaming.DataStreamWriter[T]>`__.

-      .. include:: /includes/note-trigger-method
-
       The following code snippet shows how to use the previous
       configuration settings to stream data to MongoDB:

0 commit comments
