Replies: 7 comments 7 replies
-
Thank you for initiating this discussion, @lukemt! There are two modes to consider:
I look forward to exploring more possibilities through this fantastic plugin!
-
Thank you both for the great ideas! My thoughts below:
I agree; I've also seen many people use ChatGPT MD to enhance their existing notes while making only minimal changes to their workflow. The way I imagine it, the
This is a good idea, but again, it adds boilerplate people have to think about before "chatting", ergo friction. I prefer opt-in over opt-out for friction elements (required frontmatter elements, for example), and opt-out over opt-in for convenience elements (the auto-infer-title feature, for example).
You can use the
This is a good idea, but it breaks the logic
I'm considering adding an optional comment block that will be ignored by ChatGPT MD entirely for an upcoming release. Something like
Hmm, I get how this could be useful, but again it would cause confusion and make the regex extraction really complex. Even if the plugin were to, say, check whether there is only one divider element and throw out the rest of the file, it could break existing chats, or would have to be toggled on and off in settings constantly. In sum, the main goal is this: "opt in over opt out for friction elements (required frontmatter elements, for example), and opt out over opt in for convenience elements (the auto-infer-title feature, for example)", so anything that abides by that I'm much more likely to approve!
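The extraction ambiguity described above can be illustrated with a toy counter (hypothetical code, not the plugin's actual implementation): a note with YAML frontmatter already contains two plain `---` lines, so adding a third as a content divider leaves three identical candidates and no unambiguous "the" divider to split on.

```typescript
// Hypothetical sketch, not the plugin's real code: with a plain "---" token,
// frontmatter fences and content dividers are indistinguishable.

const PLAIN_DIVIDER = "---"; // assumed divider token

// Counts lines that consist solely of the divider token.
function countPlainDividers(note: string): number {
  return note.split("\n").filter((line) => line.trim() === PLAIN_DIVIDER).length;
}

// A note with frontmatter plus one content divider yields three matches,
// so a rule like "take everything after the divider" is ambiguous.
```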
-
Thanks @bramses for the prompt follow-up! I generally love your design philosophy and will apply it more in my workflow.
Just want to get some clarification from you: it seems that in such scenarios the model is hardcoded to GPT-3.5 and the default frontmatter takes no effect. Is that so? If so, that's the reason for my feature request #41.
-
I added a PR (#47) that allows "prompt mode" inside a regular note. It will not include anything above the divider, only the text beneath it. If no divider is found in a regular note, the call will not happen. If you are in a dedicated chat file, the calls always run and the entire file serves as context.
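The behavior described for PR #47 can be sketched roughly as follows (this is an illustration of the described behavior, not the actual PR code; the `DIVIDER` token is an assumption and the real plugin's may differ):

```typescript
// Assumed divider token; the plugin's actual divider may differ.
const DIVIDER = '<hr class="__chatgpt_plugin">';

// Returns the prompt text for a regular note, or null when no divider
// exists, in which case the plugin would skip the API call entirely.
function promptFromRegularNote(note: string): string | null {
  const idx = note.lastIndexOf(DIVIDER);
  if (idx === -1) return null; // no divider found: no call
  // Only the text beneath the divider is sent; everything above is excluded.
  return note.slice(idx + DIVIDER.length).trim();
}
```

A dedicated chat file would take a different path: there the whole file is the conversation, so no divider check is needed.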
-
Hi @everyone! Thank you so much for these amazing and detailed responses! I have an idea: how about this? Let's imagine we had three types of dividers:
Now, if you call the chat command from within a note, it would check whether any of those dividers are present.
Now you would enter your prompt and call the command. Benefits:
-
hey all, please check out the comment block feature i just added! still working towards a solution here about prompt mode vs topical mode, but hopefully this is a step in the right direction: #50
-
Interestingly, there is another way to exclude at least header content from being sent to OpenAI. Consider this template:

```
---
...frontmatter...
---
This text will **not** be included in the chat conversation

role::user
Hi
```

If you call the
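The template trick above works if message extraction starts at the `role::` markers, so anything between the closing frontmatter fence and the first marker is simply dropped. Here is a hedged sketch of that behavior (illustrative only; the function name and parsing details are assumptions, not the plugin's real code):

```typescript
// Illustrative sketch: split a note body into (role, content) messages,
// ignoring any text that appears before the first role:: marker.
function extractMessages(body: string): { role: string; content: string }[] {
  const messages: { role: string; content: string }[] = [];
  let current: { role: string; content: string } | null = null;
  for (const line of body.split("\n")) {
    const m = line.match(/^role::(\w+)\s*$/);
    if (m) {
      if (current) messages.push(current);
      current = { role: m[1], content: "" };
    } else if (current) {
      current.content += (current.content ? "\n" : "") + line;
    }
    // Lines before the first role:: marker fall through and are ignored,
    // which is why the header text in the template never reaches OpenAI.
  }
  if (current) messages.push(current);
  return messages;
}
```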
-
I see many people use chatgpt-md within existing notes, and I am wondering if this is an intended use case and what's actually going on when you do that. Doesn't chatgpt-md send the entire content of the existing note to ChatGPT as a message, and if so, are people aware that this happens?
I believe most users that do this would prefer a chatgpt-md section that is "independent" of the rest of the note.
What do you guys think about this?