
Commit 587d8fb

add note about agents
Signed-off-by: Nathaniel <[email protected]>
1 parent ece0de2 commit 587d8fb

2 files changed: +14 −10 lines

examples/case_studies/bayesian_sem_workflow.ipynb

Lines changed: 7 additions & 5 deletions
@@ -5968,9 +5968,9 @@
  "source": [
   "## SEM with Discrete Choice Component\n",
   "\n",
-  "Combining SEM structures with Discrete choice models involves adding an extra likelihood term dependent on the latent factors. HR managers everywhere need to monitor attrition decisions. Often, they conceptualise the rationale for these decisions as being driven by abstract notions of job satisfaction. We now have tools to measure the latent constructs, but can we predict attrition outcomes from these latent predictors? \n",
+  "Combining SEM structures with discrete choice models involves adding an extra likelihood term dependent on the latent factors. HR managers, for instance, often monitor attrition decisions and conceptualise them as driven by abstract notions such as job satisfaction. We now have tools to measure these latent constructs, but can we predict attrition outcomes from them?\n",
   "\n",
-  "Let's include a discrete choice scenario into the SEM model context. We're aiming to predict a categorical decision about whether the employee `quits/stays/quiet-quits` as the result of their job satisfaction, and their view of the utility of work. Again, we'll see this up as a parameter recovery exercise. \n",
+  "To explore this, we'll embed a discrete choice scenario into the SEM framework. The goal is to predict a categorical outcome (whether an employee stays, quits, or quiet-quits) as a function of their job satisfaction and their perceived utility of work. Once again, we'll frame this as a parameter recovery exercise.\n",
   "\n",
   "![](dcm_sem.png)\n"
  ]
@@ -5980,7 +5980,7 @@
  "id": "306e09b8",
  "metadata": {},
  "source": [
-  "The discrete choice setting is intuitive in this context because we can model the individual's subjective utility of work as a function of their job satisfaction. This utility measure is conceptualised (in rational-choice theory) to determine the choice outcome."
+  "The discrete choice setting is intuitive in this context because we can model the individual's subjective utility of work as a function of their job satisfaction. Within rational-choice theory, this utility determines the decision outcome."
  ]
 },
 {
@@ -7030,9 +7030,11 @@
  "source": [
   "We can recover the inverse relationship we encoded in the outcomes between job-satisfaction and the choice to stay. Similarly, this posterior predictive checks look sound. This is encouraging. \n",
   "\n",
-  "The \"action\" in human decision making is often understood to be driven by these hard-to-quantify constructs that determine motivation. SEM with a discrete choice component offers us a way to model these processes, while allowing for measurement error between the observables and the latent drivers of choice. Secondly, we are triangulating the values of the system between two sources of observable data. On the one hand, we measure latent constructs in the SEM with a range of survey measures (`JW1`, `JW2`, ... ) but then calibrate the consequences of that measurement against revealed choice data. This is a powerful technique for abstracting over the expressed attitudes of rational agents, and deriving an interpretable representation of the latent attitude in their expressions. These representations are then further calibrated against the observed choices made by the agents. \n",
+  "The \"action\" in human decision making is often understood to be driven by these hard-to-quantify constructs that determine motivation. SEM with a discrete choice component offers us a way to model these processes, while allowing for measurement error between the observables and the latent drivers of choice. Secondly, we are triangulating the values of the system between two sources of observable data. On the one hand, we measure latent constructs in the SEM with a range of survey measures (`JW1`, `JW2`, ... ), but then calibrate the consequences of that measurement against revealed choice data. This dual calibration offers a powerful approach to understanding how latent dispositions translate into observable actions.\n",
   "\n",
-  "This two-step of information compression and prediction serves to concisely quantify and evaluate the idiosyncratic attitudes of a population of complex agents. As we iteratively layer-in these constructs in our model development, we come to understand their baseline and interactive effects. This perspective helps us gauge the coherence between attitudes and actions of the agents under study. "
+  "We abstract over the _expressed attitudes_ of rational agents, deriving an interpretable representation of the latent attitudes in their expressions. These representations are then further calibrated against the _observed choices_ made by the agents. This two-step process of information compression and prediction concisely quantifies and evaluates the idiosyncratic attitudes of a population of complex agents. As we iteratively layer in these constructs during model development, we come to understand their baseline and interactive effects. This perspective helps us gauge the coherence between attitudes and actions of the agents under study. \n",
+  "\n",
+  "The same workflow extends seamlessly to computational agents, where latent variables represent more opaque internal states or reward expectations. In both human and artificial systems, discrete choice modelling provides a common language for interpreting how such latent structure generates behavioral choices."
  ]
 },
 {
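
To make concrete what the new paragraphs in this diff describe, here is a minimal PyMC sketch of the idea: a latent job-satisfaction factor measured with error by a few survey indicators, which feeds a linear utility for each alternative, with the observed stay/quit/quiet-quit decision entering as an extra categorical likelihood. This is not the notebook's actual model; the placeholder data and the names `survey`, `choice_obs`, and `sem_dcm_sketch` are illustrative assumptions.

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt

# Illustrative placeholder data (not the notebook's `make_sample` output):
# three survey indicators of job satisfaction and one observed choice among
# 0 = stay, 1 = quit, 2 = quiet-quit, for 250 employees.
rng = np.random.default_rng(0)
n, n_items, n_alts = 250, 3, 3
survey = rng.normal(size=(n, n_items))        # stand-ins for JW1, JW2, JW3
choice_obs = rng.integers(0, n_alts, size=n)  # stand-in revealed choices

with pm.Model() as sem_dcm_sketch:
    # Measurement model: indicators load on a latent satisfaction factor,
    # with measurement error absorbed by the residual sigma.
    satisfaction = pm.Normal("satisfaction", 0.0, 1.0, shape=n)
    loadings = pm.HalfNormal("loadings", 1.0, shape=n_items)
    sigma = pm.HalfNormal("sigma", 1.0, shape=n_items)
    pm.Normal(
        "survey_obs",
        mu=satisfaction[:, None] * loadings,
        sigma=sigma,
        observed=survey,
    )

    # Discrete choice component: the latent factor enters a linear utility
    # for each non-reference alternative; 'stay' is the reference with
    # utility fixed to zero.
    alpha = pm.Normal("alpha", 0.0, 1.0, shape=n_alts - 1)
    beta = pm.Normal("beta", 0.0, 1.0, shape=n_alts - 1)
    utility = pt.concatenate(
        [pt.zeros((n, 1)), alpha + beta * satisfaction[:, None]],
        axis=1,
    )

    # The extra likelihood term: observed choices are categorical with
    # logits given by the alternative utilities (softmax link).
    pm.Categorical("choice", logit_p=utility, observed=choice_obs)
```

Because both `survey_obs` and `choice` condition on the same `satisfaction` factor, the posterior is informed jointly by the stated survey responses and the revealed choices, which is the triangulation the closing paragraphs refer to.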

examples/case_studies/bayesian_sem_workflow.myst.md

Lines changed: 7 additions & 5 deletions
@@ -1291,15 +1291,15 @@ Another way we might interrogate the implications of a model is to see how well
 
 ## SEM with Discrete Choice Component
 
-Combining SEM structures with Discrete choice models involves adding an extra likelihood term dependent on the latent factors. HR managers everywhere need to monitor attrition decisions. Often, they conceptualise the rationale for these decisions as being driven by abstract notions of job satisfaction. We now have tools to measure the latent constructs, but can we predict attrition outcomes from these latent predictors?
+Combining SEM structures with discrete choice models involves adding an extra likelihood term dependent on the latent factors. HR managers, for instance, often monitor attrition decisions and conceptualise them as driven by abstract notions such as job satisfaction. We now have tools to measure these latent constructs, but can we predict attrition outcomes from them?
 
-Let's include a discrete choice scenario into the SEM model context. We're aiming to predict a categorical decision about whether the employee `quits/stays/quiet-quits` as the result of their job satisfaction, and their view of the utility of work. Again, we'll see this up as a parameter recovery exercise.
+To explore this, we'll embed a discrete choice scenario into the SEM framework. The goal is to predict a categorical outcome (whether an employee stays, quits, or quiet-quits) as a function of their job satisfaction and their perceived utility of work. Once again, we'll frame this as a parameter recovery exercise.
 
 ![](dcm_sem.png)
 
 +++
 
-The discrete choice setting is intuitive in this context because we can model the individual's subjective utility of work as a function of their job satisfaction. This utility measure is conceptualised (in rational-choice theory) to determine the choice outcome.
+The discrete choice setting is intuitive in this context because we can model the individual's subjective utility of work as a function of their job satisfaction. Within rational-choice theory, this utility determines the decision outcome.
 
 ```{code-cell} ipython3
 observed_data_discrete = make_sample(cov_matrix, 250, FEATURE_COLUMNS)
@@ -1548,9 +1548,11 @@ axs[1].legend();
 
 We can recover the inverse relationship we encoded in the outcomes between job-satisfaction and the choice to stay. Similarly, this posterior predictive checks look sound. This is encouraging.
 
-The "action" in human decision making is often understood to be driven by these hard-to-quantify constructs that determine motivation. SEM with a discrete choice component offers us a way to model these processes, while allowing for measurement error between the observables and the latent drivers of choice. Secondly, we are triangulating the values of the system between two sources of observable data. On the one hand, we measure latent constructs in the SEM with a range of survey measures (`JW1`, `JW2`, ... ) but then calibrate the consequences of that measurement against revealed choice data. This is a powerful technique for abstracting over the expressed attitudes of rational agents, and deriving an interpretable representation of the latent attitude in their expressions. These representations are then further calibrated against the observed choices made by the agents.
+The "action" in human decision making is often understood to be driven by these hard-to-quantify constructs that determine motivation. SEM with a discrete choice component offers us a way to model these processes, while allowing for measurement error between the observables and the latent drivers of choice. Secondly, we are triangulating the values of the system between two sources of observable data. On the one hand, we measure latent constructs in the SEM with a range of survey measures (`JW1`, `JW2`, ... ), but then calibrate the consequences of that measurement against revealed choice data. This dual calibration offers a powerful approach to understanding how latent dispositions translate into observable actions.
 
-This two-step of information compression and prediction serves to concisely quantify and evaluate the idiosyncratic attitudes of a population of complex agents. As we iteratively layer-in these constructs in our model development, we come to understand their baseline and interactive effects. This perspective helps us gauge the coherence between attitudes and actions of the agents under study.
+We abstract over the _expressed attitudes_ of rational agents, deriving an interpretable representation of the latent attitudes in their expressions. These representations are then further calibrated against the _observed choices_ made by the agents. This two-step process of information compression and prediction concisely quantifies and evaluates the idiosyncratic attitudes of a population of complex agents. As we iteratively layer in these constructs during model development, we come to understand their baseline and interactive effects. This perspective helps us gauge the coherence between attitudes and actions of the agents under study.
+
+The same workflow extends seamlessly to computational agents, where latent variables represent more opaque internal states or reward expectations. In both human and artificial systems, discrete choice modelling provides a common language for interpreting how such latent structure generates behavioral choices.
 
 +++
 
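
The parameter-recovery and posterior-predictive checks mentioned in the diff could be sketched along the following lines, continuing the hypothetical model above; `sem_dcm_sketch` and `choice_obs` are the assumed names from that sketch, not the notebook's own objects.

```python
import arviz as az
import pymc as pm

# Continuing the hypothetical sketch above: fit the joint model, then check
# whether the sign of `beta` and the predicted choice shares look sensible.
with sem_dcm_sketch:
    idata = pm.sample(random_seed=1)
    idata.extend(pm.sample_posterior_predictive(idata, random_seed=1))

# A negative quit/quiet-quit coefficient relative to 'stay' would reflect the
# kind of inverse satisfaction-attrition relationship discussed in the text.
print(az.summary(idata, var_names=["alpha", "beta"]))

# Crude posterior predictive check: predicted vs. observed share per alternative.
pp = idata.posterior_predictive["choice"].values
for k, label in enumerate(["stay", "quit", "quiet-quit"]):
    print(label, float((pp == k).mean()), float((choice_obs == k).mean()))
```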