Replies: 2 comments
You only have 37M rows in your largest table. Partitioning the data will not help you with performance in this scenario. You are better off trying to optimize the DAX, and be careful with the SWITCH function, as it is prone to performance issues. If you want to see the tables & partitions in your model, you can follow a pattern like this:

```python
from sempy_labs.tom import connect_semantic_model

dataset = ''
workspace = ''

# Connect read-only and print every partition of every table in the model.
with connect_semantic_model(dataset=dataset, workspace=workspace, readonly=True) as tom:
    for t in tom.model.Tables:
        for p in t.Partitions:
            print(f"{t.Name} : {p.Name}")
```
@JJDE did you manage to solve your performance issues using either partitioning or another semantic-link-based tool? We are facing the same issues and seem to have almost identical setups (multiple fact tables and compute-heavy SWITCH-based measures); it would be very interesting to learn from your findings!
Situation:

- I have a Fabric workspace with SemanticModel1, which is loaded from an Azure SQL DB.
- All tables in SemanticModel1 are in Import mode; there are 3 relatively big fact tables.
- I have massive performance issues with measures based on this model; the measures use SWITCH statements heavily.
- My hope is that partitioning the 3 fact tables by "Scenario_Key" would boost performance.
Challenge:
I want to create a notebook which uses semantic-link-labs to create and then refresh these partitions.
I checked the labs documentation and found the following operations as potential candidates for the solution (a rough sketch of how I imagine the refresh step follows the list):
- `tom.all_partitions()`
- `tom.add_m_partition()`
- `tom.update_m_partition()`
- `refresh_semantic_model(partition)`
- `tom.add_entity_partition()` ?!
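Roughly how I imagine the refresh step afterwards (completely untested; I am not sure the argument names or the 'Table'[Partition] reference format are right, so please treat this as a guess):

```python
from sempy_labs import refresh_semantic_model

dataset = ''
workspace = ''

# Guess: refresh only the partition for one scenario of one fact table.
# The partition reference format and argument names are assumptions on my part.
refresh_semantic_model(
    dataset=dataset,
    partitions=["'Fact1'[Fact1_Scenario_1]"],
    workspace=workspace,
)
```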
Unfortunately I'm an absolute beginner in Python and the labs library; I'm not even able to read the existing partitions using all_partitions(), let alone create the required partitions in a loop. Does anyone have tips for getting started, or perhaps even a script block that could serve as an example for managing M partitions? Please don't advise switching to Direct Lake... there are constraints currently blocking me from changing the solution architecture. My try so far:
