Publishable Changes vs Brain #5459
Another alternative solution we thought about briefly was to move complexity to the core. One of the core problems is the hard-to-use API for publishing individual nodes; the core could instead offer something like:

```php
$cr->handle(PublishNodesFromWorkspace::create(
    workspace: 'user-ws',
    startingNodeAggregateId: 'document-node-id',
    nodeTypeScope: 'Neos.Neos:Document' // stops when encountered as child
));
```

or

```php
$cr->handle(PublishNodesFromWorkspace::create(
    workspace: 'user-ws',
    selection: [
        NodeToPublish::create('document-node-id'),
        NodeToPublishRecursively::create('document-main-content-collection-node-id'),
        // ... resolve all other children that are not of type "Neos.Neos:Document" from outside.
    ]
));
```

Both of the expressed ways would require the core to select the events to publish based on the required hierarchy. In the happy case that information can be fetched from the graph by finding all descendant node aggregate ids and publishing the events that target those. For (3.b and 3.c) the content graph currently does not provide information for deleted nodes, but we can work around this in the following ways:

soft removal of nodes in the graph projection

We should be able to internally keep removed nodes as soft removals in the graph directly, or track them in a separate table. That way a purely internal API could still resolve the descendants of already removed nodes. Christian and I discussed the option of soft deletion of nodes and collected several points.
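To illustrate the idea, such a purely internal API could look roughly like this (a minimal sketch; the interface and method names are made up for illustration and do not exist in the core):

```php
use Neos\ContentRepository\Core\SharedModel\Node\NodeAggregateId;

/**
 * Hypothetical internal-only lookup (not part of the public content graph API):
 * resolves descendants even after their removal, based on soft-removed rows
 * that the graph projection keeps around instead of deleting them.
 */
interface SoftRemovalAwareHierarchyLookup
{
    /**
     * @return list<NodeAggregateId> all descendant ids of the given node,
     * including those whose hierarchy relations are only soft-removed
     */
    public function findDescendantIdsIncludingRemoved(NodeAggregateId $nodeAggregateId): array;
}
```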
gather nodes to publish during simulation

-> PoC implementation: #5461

The cruelest case for reliable publishing is (3.c). To be able to gather events that target nodes that are eventually deleted, we have to ask the graph while it still holds this information. Before the node was removed, we have that information. So we could decide, when handling the remove command, to gather information about all its children that are about to be removed (slow because of the CTE), or even visit all previously made events and check if they are in the scope of the deletion. This information could then be attached to the removal event itself. But instead of introducing the complexity there and bloating up the events, I had the insight that we can do the same during the simulation:

```php
$commandSimulator->run(
    static function ($handle, $contentGraph) use ($commandSimulator, $rebaseableCommands, $startingNodeAggregateId): SequenceNumber {
        $remainingCommands = [];
        foreach ($rebaseableCommands as $rebaseableCommand) {
            // extract into PublishingScopeCriteria and cache `findClosestNode` if no moves happened in the meantime
            if ($contentGraph->findClosestNode($rebaseableCommand->getAffectedNodeAggregateId(), FindClosestNodeFilter::create('Neos.Neos:Document'))?->aggregateId?->equals($startingNodeAggregateId)) {
                // the command targets a node below the document that is being published
                $handle($rebaseableCommand);
                continue;
            }
            $remainingCommands[] = $rebaseableCommand;
        }
        // re-apply everything that is not published, so the simulation ends in the full workspace state
        foreach ($remainingCommands as $remainingCommand) {
            $handle($remainingCommand);
        }
    }
);
```
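The inline comment above hints at extracting the document check into its own criteria object. A possible shape (hypothetical; `PublishingScopeCriteria` does not exist in the core, and the `findClosestNode` call and accessor follow the sketch above):

```php
use Neos\ContentRepository\Core\Projection\ContentGraph\Filter\FindClosestNodeFilter;
use Neos\ContentRepository\Core\SharedModel\Node\NodeAggregateId;

/**
 * Hypothetical extraction of the scope check from the simulation closure above.
 * Caching findClosestNode results would only be valid as long as no moves
 * happened in the meantime (see the comment in the sketch above).
 */
final class PublishingScopeCriteria
{
    public function __construct(
        private readonly NodeAggregateId $startingNodeAggregateId,
    ) {
    }

    /**
     * @param mixed $contentGraph the same graph instance handed to the simulation closure above
     */
    public function matches($contentGraph, NodeAggregateId $affectedNodeAggregateId): bool
    {
        $closestDocument = $contentGraph->findClosestNode(
            $affectedNodeAggregateId,
            FindClosestNodeFilter::create('Neos.Neos:Document')
        );
        // the command is in scope if its node belongs to the document being published
        return $closestDocument !== null
            && $closestDocument->aggregateId->equals($this->startingNodeAggregateId);
    }
}
```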
After (re)-discovering this bug: #4997 (comment), I think it's time again to look at the change projection with a closer eye.
requirements for the tracked changes
Now the current requirements for the change projection should be the following:

3. prepare a working change set for publishing:
   a. when publishing a document, changes on the document itself (like the title) should be published, as well as all nested content within; other child documents or tethered documents are not in scope of this publish (see the sketch after this list)
   b. get the node ids of nodes that were deleted, in order to publish deletions of documents via publish all (in site) and publish document, and to remove content elements
   c. detect that changed nodes whose parent has been deleted also have to be included in the publish operation (bug #4997: "Publishing individual nodes is impossible when contents were created on a deleted document")

- also show information for deleted nodes
- track which dimensions an aggregate scope change affected
- show differences of properties and subtree tags
- additional optimisations to ignore redundant changes because of a subsequent deletion (probably doesn't have to be optimised, as it's a rare case)
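To make 3.a concrete, gathering the publish scope of a document could look roughly like this (a minimal sketch, assuming the Neos 9 subgraph API; `collectPublishScope` is not an existing function, and the `!Neos.Neos:Document` criteria string is used to skip nested documents):

```php
use Neos\ContentRepository\Core\Projection\ContentGraph\ContentSubgraphInterface;
use Neos\ContentRepository\Core\Projection\ContentGraph\Filter\FindChildNodesFilter;
use Neos\ContentRepository\Core\SharedModel\Node\NodeAggregateId;

/**
 * Collects the document node plus all nested content (3.a), stopping the
 * traversal whenever a child document is encountered.
 *
 * @return list<NodeAggregateId>
 */
function collectPublishScope(ContentSubgraphInterface $subgraph, NodeAggregateId $documentId): array
{
    $scope = [$documentId];
    $stack = [$documentId];
    while ($stack !== []) {
        $parentId = array_pop($stack);
        // only descend into children that are NOT documents themselves
        foreach ($subgraph->findChildNodes($parentId, FindChildNodesFilter::create(nodeTypes: '!Neos.Neos:Document')) as $child) {
            $scope[] = $child->aggregateId;
            $stack[] = $child->aggregateId;
        }
    }
    return $scope;
}
```

Note that this only covers the happy case: deleted nodes are no longer in the graph, which is exactly what makes 3.b and 3.c hard.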
current implementation

The change projection tracks the changes, and information like the closest parent document is evaluated via the publishing service, while the content diffing is done during evaluation against the base workspace in the workspace review controller.
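For orientation, the tracked change record roughly carries the following data (a simplified paraphrase of the pending changes read model, not its literal code; plain strings stand in for the actual value objects):

```php
/**
 * Simplified sketch of what the change projection stores per changed node
 * (paraphrased; see the pending changes projection in Neos.Neos for the real model).
 */
final class Change
{
    public function __construct(
        public string $contentStreamId,
        public string $nodeAggregateId,
        public string $originDimensionSpacePoint,
        public bool $created,
        public bool $changed,
        public bool $moved,
        public bool $deleted,
        // node a removal is attributed to, so it can be published together
        // with that document (see #4487)
        public ?string $removalAttachmentPoint = null,
    ) {
    }
}
```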
The requirements can almost be covered by the current implementation, with the following exceptions / downsides:

- removalAttachmentPoint #4487

proposed solution
catchup hook

All the requirements, without the problems of the current implementation, should be solvable by implementing the change tracking as a catchup hook. That allows getting the actual hierarchy at the point of the change, which can later be used to easily determine the count (2.) and to prepare a working change set for publishing (3.), especially (3.c).
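A minimal sketch of that direction (assuming the catchup hook mechanism of the Neos 9 content repository; the class is made up and the hook method signatures are abbreviated):

```php
/**
 * Sketch of change tracking as a catchup hook: because hooks run around the
 * graph projection's event application, the hierarchy can still be queried
 * while (or just before) it changes.
 * (Would implement the core's CatchUpHookInterface; remaining methods omitted.)
 */
final class ChangeTrackingCatchUpHook
{
    public function onBeforeEvent(object $event): void
    {
        // for removals: the graph still contains the nodes the event is about
        // to delete, so their ids and closest parent document can be recorded now
    }

    public function onAfterEvent(object $event): void
    {
        // for other changes: record the change together with pre-evaluated
        // hierarchy information (e.g. the closest parent document)
    }
}
```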
The slow part of the current evaluation would instead be pre-evaluated when making the change and when rebasing it. That means when rebasing or publishing with huge remaining changes - or when publishing into a workspace - the expensive findClosestNode queries are still run during that publishing time span and don't improve the time the button press takes. Only gathering the information when booting the Neos UI becomes faster.

alternative solutions
keeping it a projection
It would be interesting to have the ability to write dependent projections that can access the content graph. That would allow accessing the hierarchy while staying a little simpler to replay and reset. Though this is currently not possible.

Also, tracking the whole hierarchy ourselves is not a good idea, as it leaves us with a huge implementation. In contrast to the uri path projection, which also tracks hierarchy, we would need to track every workspace and also every node, whether changed or not. This is an immense overhead and neither desirable nor maintainable.