diff --git a/Coordination.md b/Coordination.md
new file mode 100644
index 0000000000..29f1a671ec
--- /dev/null
+++ b/Coordination.md
@@ -0,0 +1,66 @@
+# [PHASE 3 REQUIREMENTS](https://docs.google.com/document/d/1oaXD2gjbQTMcSbYllbsGI17IqQbSJP5T0lSpxT6BRAs/edit?tab=t.0) <----- Link to the requirements doc
+
+DUE FRIDAY AT 5PM.
+
+- [others_should_add_what_they_think_is_needed] Make sure the README is complete
+  - Done by Fabrizio; add any other necessary things that I might have missed onto the README.md
+
+- [ ] Be sure to link the video presentation
+
+
+---
+
+
+### SQL
+- [x] Add in more mock data via Mockaroo or similar.
+- [x] Add in bridge tables (which imo is great news -- we won't have to build a front end for a few features)
+  Done by Fabrizio, organized the mock data and added some for the bridge tables. Rearranged the entire SQL database.
+- optional: [x] Change attribute types where possible to enumerations (is this helpful?)
+  - I (Fabrizio) could do this as it would make the attributes make more sense, but not sure if it's useful now?
+
+
+---
+
+
+### REST API
+- [ ] Implement the REST API in Python (imo I'm not sure we need as many rows as we have -- which is also good news)
+
+PLEASE LABEL WHERE YOUR PUT/POST/DELETE ROUTES ARE
+
+- [ ] Jose Landing Page
+  - [x] Jose Feature 1
+    - 3.4 Completed by Ryan
+  - [x] Jose Feature 2
+    - 3.1 Completed by Ryan :D
+  - [x] Jose Feature 3
+    - 3.6 Completed by Ryan **PUT**
+
+- [ ] Jack Landing Page
+  - [x] Jack Feature 1
+    - Added subgoals, completed by Ryan
+  - [x] Jack Feature 2
+    - Added graphs, done by Jaden
+  - [ ] Jack Feature 3
+
+- [ ] Alan Landing Page
+  - [x] Alan Feature 1 **POST**
+    - 2.3 done by ry
+  - [ ] Alan Feature 2
+  - [ ] Alan Feature 3
+
+- [ ] Avery Landing Page
+  - [x] Avery Feature 1 **PUT**
+    - 1.1 Done by Ryan
+  - [x] Avery Feature 2
+    - 1.2 done by Ryan **DELETE**
+  - [x] Avery Feature 3
+    - done by someone **POST**
+
+
+TOTAL ROUTES:
+users - 6 (1 post, 1 delete)
+support - 6 (1 put, 1 delete)
+tags - 5 (1 post, 1 delete, 1 put)
+goals - 2
+daily tasks - 5
\ No newline at end of file
diff --git a/README.md b/README.md
index 57559df051..0e9f53d867 100644
--- a/README.md
+++ b/README.md
@@ -1,108 +1,143 @@
-# Summer 2 2025 CS 3200 Project Template
+# Summer 2 2025 CS 3200 Project - What is Goal Planner (Global GoalFlow)?

-This is a template repo CS 3200 Summer 2 2025 Course Project.
+Goal Planner is a comprehensive goal and habit management platform that transforms how people approach long-term achievement by making data work for them, not against them. Unlike traditional to-do apps that leave users drowning in endless lists, Goal Planner intelligently breaks down ambitious projects into manageable phases, automatically suggests next tasks when you complete something, and seamlessly integrates daily habits with major milestones.

-It includes most of the infrastructure setup (containers), sample databases, and example UI pages. Explore it fully and ask questions!
+By collecting and analyzing user progress patterns, deadline adherence, and completion rates, our app provides personalized insights that help users understand their productivity patterns and optimize their approach to goal achievement.
+
+We're building this for four distinct user types: individual achievers like freelancers and students who juggle multiple projects, professionals and researchers who need structured approaches to complex work, business analysts who require data-driven insights into team performance and goal completion rates, and system administrators who need robust, scalable platforms for managing user communities.
+
+This repo includes the infrastructure setup, a MySQL database along with mock data, and example UI pages.
+
+### Project Members
+
+- Ryan Baylon
+- Hyeyeon Seo
+- Jaden Hu
+- Rishik Kellar
+- Fabrizio Flores
+
+---

 ## Prerequisites

-- A GitHub Account
-- A terminal-based git client or GUI Git client such as GitHub Desktop or the Git plugin for VSCode.
-- VSCode with the Python Plugin installed
-- A distribution of Python running on your laptop. The distribution supported by the course is Anaconda or Miniconda.
-  - Create a new Python 3.11 environment in conda named `db-proj` by running:
-    ```bash
-    conda create -n db-proj python=3.11
-    ```
-  - Install the Python dependencies listed in `api/requirements.txt` and `app/src/requirements.txt` into your local Python environment. You can do this by running `pip install -r requirements.txt` in each respective directory.
+Before starting, make sure you have:
+
+- A GitHub account
+- Git client (terminal or GUI such as GitHub Desktop or Git plugin for VSCode)
+- VSCode with the Python Plugin or your preferred IDE
+- Docker and Docker Compose installed on your machine
+
+---
+
+## Repo Structure
+The repo is organized into five main directories:
+
+- `./app` – Frontend Streamlit app for user interaction.
+- `./api` – Backend REST API (Flask) to handle business logic and database communication.
+- `./database-files` – SQL scripts to initialize and seed the MySQL database with mock data.
+- `./datasets` – Folder for datasets (if needed).
+- `docker-compose.yaml` – Configuration to start the app, API, and MySQL database containers.
+
+---
+
+## Database Setup
+
+We use a MySQL database named `global-GoalFlow`. The schema includes tables to manage users, goals, tasks, posts, tags, bug reports, and more, supporting the core functionality of Goal Planner.

-## Structure of the Repo
+### Key Tables Overview

-- The repo is organized into five main directories:
-  - `./app` - the Streamlit app
-  - `./api` - the Flask REST API
-  - `./database-files` - SQL scripts to initialize the MySQL database
-  - `./datasets` - folder for storing datasets
+- **users**: Stores user profiles, roles, contact info, and management relationships.
+- **tags**: Categories for goals, posts, and tasks.
+- **posts** & **post_reply**: Community forum posts and replies.
+- **user_data**: Tracks user activity, devices, and login info.
+- **bug_reports**: For tracking issues submitted by users.
+- **consistent_tasks**, **daily_tasks**: Task management for recurring and daily items.
+- **goals** & **subgoals**: Hierarchical goal tracking with status, priority, and deadlines.

-- The repo also contains a `docker-compose.yaml` file that is used to set up the Docker containers for the front end app, the REST API, and MySQL database.
+The database schema is designed to support role-based access, data integrity, and efficient queries with proper indexes and foreign keys.

-## Suggestion for Learning the Project Code Base
+---

-If you are not familiar with web app development, this code base might be confusing. But don't worry, we'll get through it together.
-Here are some suggestions for learning the code base:
+## How to Build and Run

-1. Have two versions of the template repo - one for you to individually explore and learn and another for your team's project implementation.
-1. Start by exploring the `./app` directory. This is where the Streamlit app is located. The Streamlit app is a Python-based web app that is used to interact with the user. It's a great way to build a simple web app without having to learn a lot of web development.
-1. Next, explore the `./api` directory. This is where the Flask REST API is located. The REST API is used to interact with the database and perform other server-side tasks. You might also consider this the "application logic" or "business logic" layer of your app.
-1. Finally, explore the `./database-files` directory. This is where the SQL scripts are located that will be used to initialize the MySQL database.
+### 1. Clone the Repository

-### Setting Up Your Personal Testing Repo
+```bash
+git clone <repo-url>
+cd <repo-folder>
+```

-**Before you start**: You need to have a GitHub account and a terminal-based git client or GUI Git client such as GitHub Desktop or the Git plugin for VSCode.
+### 2. Set up Environment Variables
+Copy the **.env.template** file inside the **api** folder and rename it to **.env**. Edit the .env file to include your database credentials and secrets. Make sure passwords are secure and unique.

-1. Clone this repo to your local machine.
-   1. You can do this by clicking the green "Code" button on the top right of the repo page and copying the URL. Then, in your terminal, run `git clone <repo-url>`.
-   1. Or, you can use the GitHub Desktop app to clone the repo. See [this page](https://docs.github.com/en/desktop/adding-and-cloning-repositories/cloning-a-repository-from-github-to-github-desktop) of the GitHub Desktop Docs for more info.
-1. Open the repository folder in VSCode.
-1. Set up the `.env` file in the `api` folder based on the `.env.template` file.
-   1. Make a copy of the `.env.template` file and name it `.env`.
-   1. Open the new `.env` file.
-   1. On the last line, delete the `<...>` placeholder text, and put a password. Don't reuse any passwords you use for any other services (email, etc.)
-1. For running the testing containers (for your personal repo), you will tell `docker compose` to use a different configuration file than the typical one. The one you will use for testing is `sandbox.yaml`.
-   1. `docker compose -f sandbox.yaml up -d` to start all the containers in the background
-   1. `docker compose -f sandbox.yaml down` to shutdown and delete the containers
-   1. `docker compose -f sandbox.yaml up db -d` only start the database container (replace db with api or app for the other two services as needed)
-   1. `docker compose -f sandbox.yaml stop` to "turn off" the containers but not delete them.
+### 3. Start Docker Containers
+Use Docker Compose to start the full stack:

-### Setting Up Your Team's Repo
+```bash
+docker compose up -d
+```
+This will start:
+  - MySQL database container
+  - Flask REST API backend
+  - Streamlit frontend app

-**Before you start**: As a team, one person needs to assume the role of _Team Project Repo Owner_.
+To stop and remove containers:
+```bash
+docker compose down
+```

-1. The Team Project Repo Owner needs to **fork** this template repo into their own GitHub account **and give the repo a name consistent with your project's name**. If you're worried that the repo is public, don't. Every team is doing a different project.
-1. In the newly forked team repo, the Team Project Repo Owner should go to the **Settings** tab, choose **Collaborators and Teams** on the left-side panel. Add each of your team members to the repository with Write access.
+### 4. Initialize the Database
+Run the SQL scripts inside ./database-files to create tables and insert initial data:

-**Remaining Team Members**
+```bash
+mysql -u <username> -p <database-name> < ./database-files/schema.sql
+```
+Or connect to the running MySQL container and execute the scripts.
+Once everything is up, a quick smoke-test sketch for the API follows the personas below.

-1. Each of the other team members will receive an invitation to join.
-1. Once you have accepted the invitation, you should clone the Team's Project Repo to your local machine.
-1. Set up the `.env` file in the `api` folder based on the `.env.template` file.
-1. For running the testing containers (for your team's repo):
-   1. `docker compose up -d` to start all the containers in the background
-   1. `docker compose down` to shutdown and delete the containers
-   1. `docker compose up db -d` only start the database container (replace db with api or app for the other two services as needed)
-   1. `docker compose stop` to "turn off" the containers but not delete them.
+---

-**Note:** You can also use the Docker Desktop GUI to start and stop the containers after the first initial run.
+## User Personas & Stories
+Persona 1: Avery - Freelance Designer
+  - Juggles client and personal projects.
+  - Needs task automation and habit tracking to stay consistent.
+  - Wants a visual dashboard for progress and deadlines.
+  - Requires space for creative ideas and manageable workflows.

-## Handling User Role Access and Control
+Persona 2: Dr. Alan - Professor
+  - Math professor balancing research and teaching.
+  - Needs categorized projects, priority control, and deadline management.
+  - Wants completed projects archived but accessible for reference.

-In most applications, when a user logs in, they assume a particular role. For instance, when one logs in to a stock price prediction app, they may be a single investor, a portfolio manager, or a corporate executive (of a publicly traded company). Each of those _roles_ will likely present some similar features as well as some different features when compared to the other roles. So, how do you accomplish this in Streamlit? This is sometimes called Role-based Access Control, or **RBAC** for short.
+Persona 3: Jose – System Administrator
+  - Oversees app scalability, user support, and community engagement.
+  - Requires bug tracking dashboard, user analytics, and payment plan insights.

-The code in this project demonstrates how to implement a simple RBAC system in Streamlit but without actually using user authentication (usernames and passwords). The Streamlit pages from the original template repo are split up among 3 roles - Political Strategist, USAID Worker, and a System Administrator role (this is used for any sort of system tasks such as re-training ML model, etc.). It also demonstrates how to deploy an ML model.
+Persona 4: Jack – Financial Analyst
+  - Tracks company goals and employee task completion.
+  - Needs subgoal checkboxes, deadlines, and aggregated progress reports.

-Wrapping your head around this will take a little time and exploration of this code base. Some highlights are below.
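+Once the containers are up, you can sanity-check the API from your host machine. A minimal sketch (assuming the API container publishes port 4000, as in the example URLs in the route files, and using the `/goals/active` route registered in `api/backend/rest_entry.py`):
+
+```python
+# smoke_test.py - a quick check that the REST API container is serving requests
+import requests
+
+BASE_URL = "http://localhost:4000"  # adjust if your compose file maps a different port
+
+# /goals/active is one of the GET routes under the "goals" blueprint
+resp = requests.get(f"{BASE_URL}/goals/active", timeout=5)
+print(resp.status_code)  # expect 200 once the database is seeded
+print(resp.json())       # the list of active goals from the mock data
+```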
+--- -### Getting Started with the RBAC +### Features + - Automatic project phase generation prevents overwhelming long-term goals + - Intelligent task queuing surfaces next actionable items automatically + - Comprehensive analytics dashboards provide insights into productivity patterns + - Role-based access control supports users with distinct permissions and views + - Community forum for user discussions, bug reports, and feedback + - Task and goal hierarchy with tags, priorities, and scheduling -1. We need to turn off the standard panel of links on the left side of the Streamlit app. This is done through the `app/src/.streamlit/config.toml` file. So check that out. We are turning it off so we can control directly what links are shown. -1. Then I created a new python module in `app/src/modules/nav.py`. When you look at the file, you will se that there are functions for basically each page of the application. The `st.sidebar.page_link(...)` adds a single link to the sidebar. We have a separate function for each page so that we can organize the links/pages by role. -1. Next, check out the `app/src/Home.py` file. Notice that there are 3 buttons added to the page and when one is clicked, it redirects via `st.switch_page(...)` to that Roles Home page in `app/src/pages`. But before the redirect, I set a few different variables in the Streamlit `session_state` object to track role, first name of the user, and that the user is now authenticated. -1. Notice near the top of `app/src/Home.py` and all other pages, there is a call to `SideBarLinks(...)` from the `app/src/nav.py` module. This is the function that will use the role set in `session_state` to determine what links to show the user in the sidebar. -1. The pages are organized by Role. Pages that start with a `0` are related to the _Political Strategist_ role. Pages that start with a `1` are related to the _USAID worker_ role. And, pages that start with a `2` are related to The _System Administrator_ role. +--- +## Notes on User Roles and Access Control +Our platform implements a simple Role-Based Access Control (RBAC) system, differentiating between: + - Individual users (freelancers, researchers) + - Business analysts and managers + - System administrators -## Incorporating ML Models into your Project (Optional for CS 3200) +Each role experiences a customized view with access to features relevant to their needs and permissions. -_Note_: This project only contains the infrastructure for a hypothetical ML model. +--- -1. Collect and preprocess necessary datasets for your ML models. -1. Build, train, and test your ML model in a Jupyter Notebook. - - You can store your datasets in the `datasets` folder. You can also store your Jupyter Notebook in the `ml-src` folder. -1. Once your team is happy with the model's performance, convert your Jupyter Notebook code for the ML model to a pure Python script. - - You can include the `training` and `testing` functionality as well as the `prediction` functionality. - - Develop and test this pure Python script first in the `ml-src` folder. - - You may or may not need to include data cleaning, though. -1. Review the `api/backend/ml_models` module. In this folder, - - We've put a sample (read _fake_) ML model in the `model01.py` file. The `predict` function will be called by the Flask REST API to perform '_real-time_' prediction based on model parameter values that are stored in the database. **Important**: you would never want to hard code the model parameter weights directly in the prediction function. -1. 
The prediction route for the REST API is in `api/backend/customers/customer_routes.py`. Basically, it accepts two URL parameters and passes them to the `prediction` function in the `ml_models` module. The `prediction` route/function packages up the value(s) it receives from the model's `predict` function and send its back to Streamlit as JSON. -1. Back in streamlit, check out `app/src/pages/11_Prediction.py`. Here, I create two numeric input fields. When the button is pressed, it makes a request to the REST API URL `/c/prediction/.../...` function and passes the values from the two inputs as URL parameters. It gets back the results from the route and displays them. Nothing fancy here. \ No newline at end of file +## Contact & Support +For questions or bug reports, please open an issue in the GitHub repository or contact the system administrator (Ryan). \ No newline at end of file diff --git a/api/.env.template b/api/.env.template deleted file mode 100644 index 3a51ab40f9..0000000000 --- a/api/.env.template +++ /dev/null @@ -1,6 +0,0 @@ -SECRET_KEY=someCrazyS3cR3T!Key.! -DB_USER=root -DB_HOST=db -DB_PORT=3306 -DB_NAME=ngo_db -MYSQL_ROOT_PASSWORD= diff --git a/api/backend/consistent_tasks/consistent_tasks_routes.py b/api/backend/consistent_tasks/consistent_tasks_routes.py new file mode 100644 index 0000000000..93b2478f2e --- /dev/null +++ b/api/backend/consistent_tasks/consistent_tasks_routes.py @@ -0,0 +1,169 @@ +from flask import Blueprint, jsonify, request +from backend.db_connection import db +from mysql.connector import Error +from flask import current_app + +consistent_tasks = Blueprint("consistent_tasks", __name__) + +@consistent_tasks.route("/get_consistent_tasks", methods=["GET"]) +def get_all_tasks(): + try: + current_app.logger.info('Starting get_all_tasks request') + cursor = db.get_db().cursor() + + # Note: Query parameters are added after the main part of the URL. + # Here is an example: + # http://localhost:4000/ngo/ngos?founding_year=1971 + # founding_year is the query param. 
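+    # For this app the same pattern applies; with this blueprint mounted at
+    # "/consistent_tasks" in rest_entry.py, a filtered request could look like
+    # (the category value is a hypothetical one from the mock data):
+    # http://localhost:4000/consistent_tasks/get_consistent_tasks?category=fitness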
+
+        # Get query parameters for filtering
+        title = request.args.get("title")
+        category = request.args.get("category")
+        notes = request.args.get("notes")
+
+        current_app.logger.debug(f'Query parameters - title: {title}, category: {category}, notes: {notes}')
+
+        # Prepare the Base query
+        query = "SELECT * FROM consistent_tasks WHERE 1=1"
+        params = []
+
+        # Add filters if provided
+        if title:
+            query += " AND title = %s"
+            params.append(title)
+        if category:
+            query += " AND category = %s"
+            params.append(category)
+        if notes:
+            query += " AND notes = %s"
+            params.append(notes)
+
+        current_app.logger.debug(f'Executing query: {query} with params: {params}')
+        cursor.execute(query, params)
+        results = cursor.fetchall()
+
+        # Get column names to map to dictionaries
+        columns = [col[0] for col in cursor.description]
+        tasks = [dict(zip(columns, row)) for row in results]
+        cursor.close()
+
+        current_app.logger.info(f'Successfully retrieved {len(tasks)} consistent tasks')
+        return jsonify(tasks), 200
+    except Error as e:
+        current_app.logger.error(f'Database error in get_all_tasks: {str(e)}')
+        return jsonify({"error": str(e)}), 500
+
+@consistent_tasks.route("/create_consistent_task", methods=["POST"])
+def create_task():
+    try:
+        data = request.get_json()
+
+        # Validate required fields
+        required_fields = ["userId", "title", "slug"]
+        for field in required_fields:
+            if field not in data:
+                return jsonify({"error": f"Missing required field: {field}"}), 400
+
+        cursor = db.get_db().cursor()
+
+        query = """
+            INSERT INTO consistent_tasks (userId, title, slug, category, notes)
+            VALUES (%s, %s, %s, %s, %s)
+        """
+        cursor.execute(
+            query,
+            (
+                data["userId"],
+                data["title"],
+                data["slug"],
+                data.get("category", None),
+                data.get("notes", None),
+            ),
+        )
+
+        db.get_db().commit()
+        new_task_id = cursor.lastrowid
+        cursor.close()
+
+        return (
+            jsonify({"message": "task created successfully", "task_id": new_task_id}),
+            201,
+        )
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
+@consistent_tasks.route("/delete_consistent_task/<int:task_id>", methods=["DELETE"])
+def delete_task(task_id):
+    try:
+        cursor = db.get_db().cursor()
+
+        cursor.execute("SELECT * FROM consistent_tasks WHERE id = %s", (task_id,))
+        task = cursor.fetchone()
+        if not task:
+            return jsonify({"error": "task not found"}), 404
+
+        cursor.execute("DELETE FROM consistent_tasks WHERE id = %s", (task_id,))
+        db.get_db().commit()
+        cursor.close()
+
+        return jsonify({"message": "task deleted successfully"}), 200
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
+@consistent_tasks.route("/consistent_task/<int:task_id>", methods=["PUT"])
+def rename_task(task_id):
+    try:
+        data = request.get_json()
+
+        # Check if the task exists
+        cursor = db.get_db().cursor()
+        cursor.execute("SELECT * FROM consistent_tasks WHERE id = %s", (task_id,))
+        if not cursor.fetchone():
+            return jsonify({"error": "task not found"}), 404
+
+        # Build update query dynamically based on provided fields
+        update_fields = []
+        params = []
+        allowed_fields = ["title", "category", "notes"]
+
+        for field in allowed_fields:
+            if field in data:
+                update_fields.append(f"{field} = %s")
+                params.append(data[field])
+
+        if not update_fields:
+            return jsonify({"error": "No valid fields to update"}), 400
+
+        params.append(task_id)
+        query = f"UPDATE consistent_tasks SET {', '.join(update_fields)} WHERE id = %s"
+
+        cursor.execute(query, params)
+        db.get_db().commit()
+        cursor.close()
+
+        return jsonify({"message": "task updated successfully"}), 200
successfully"}), 200 + except Error as e: + return jsonify({"error": str(e)}), 500 + +@consistent_tasks.route("/consistent_task/", methods=["GET"]) +def get_consistent_task(task_id): + try: + cursor = db.get_db().cursor() + + cursor.execute("SELECT * FROM consistent_tasks WHERE id = %s", (task_id,)) + task_row = cursor.fetchone() + + if not task_row: + return jsonify({"error": "user not found"}), 404 + + columns = [col[0] for col in cursor.description] + consistent_tasks = dict(zip(columns, task_row)) + + cursor.close() + return jsonify(consistent_tasks), 200 + + except Error as e: + return jsonify({"error": str(e)}), 500 + + + diff --git a/api/backend/customers/customer_routes.py b/api/backend/customers/customer_routes.py deleted file mode 100644 index 4fda460220..0000000000 --- a/api/backend/customers/customer_routes.py +++ /dev/null @@ -1,83 +0,0 @@ -######################################################## -# Sample customers blueprint of endpoints -# Remove this file if you are not using it in your project -######################################################## -from flask import Blueprint -from flask import request -from flask import jsonify -from flask import make_response -from flask import current_app -from backend.db_connection import db -from backend.ml_models.model01 import predict - -#------------------------------------------------------------ -# Create a new Blueprint object, which is a collection of -# routes. -customers = Blueprint('customers', __name__) - - -#------------------------------------------------------------ -# Get all customers from the system -@customers.route('/customers', methods=['GET']) -def get_customers(): - - cursor = db.get_db().cursor() - cursor.execute('''SELECT id, company, last_name, - first_name, job_title, business_phone FROM customers - ''') - - theData = cursor.fetchall() - - the_response = make_response(jsonify(theData)) - the_response.status_code = 200 - return the_response - -#------------------------------------------------------------ -# Update customer info for customer with particular userID -# Notice the manner of constructing the query. -@customers.route('/customers', methods=['PUT']) -def update_customer(): - current_app.logger.info('PUT /customers route') - cust_info = request.json - cust_id = cust_info['id'] - first = cust_info['first_name'] - last = cust_info['last_name'] - company = cust_info['company'] - - query = 'UPDATE customers SET first_name = %s, last_name = %s, company = %s where id = %s' - data = (first, last, company, cust_id) - cursor = db.get_db().cursor() - r = cursor.execute(query, data) - db.get_db().commit() - return 'customer updated!' - -#------------------------------------------------------------ -# Get customer detail for customer with particular userID -# Notice the manner of constructing the query. 
-@customers.route('/customers/', methods=['GET']) -def get_customer(userID): - current_app.logger.info('GET /customers/ route') - cursor = db.get_db().cursor() - cursor.execute('SELECT id, first_name, last_name FROM customers WHERE id = {0}'.format(userID)) - - theData = cursor.fetchall() - - the_response = make_response(jsonify(theData)) - the_response.status_code = 200 - return the_response - -#------------------------------------------------------------ -# Makes use of the very simple ML model in to predict a value -# and returns it to the user -@customers.route('/prediction//', methods=['GET']) -def predict_value(var01, var02): - current_app.logger.info(f'var01 = {var01}') - current_app.logger.info(f'var02 = {var02}') - - returnVal = predict(var01, var02) - return_dict = {'result': returnVal} - - the_response = make_response(jsonify(return_dict)) - the_response.status_code = 200 - the_response.mimetype = 'application/json' - return the_response \ No newline at end of file diff --git a/api/backend/daily_tasks/daily_tasks_routes.py b/api/backend/daily_tasks/daily_tasks_routes.py new file mode 100644 index 0000000000..f35a1d5f0c --- /dev/null +++ b/api/backend/daily_tasks/daily_tasks_routes.py @@ -0,0 +1,168 @@ +from flask import Blueprint, jsonify, request +from backend.db_connection import db +from mysql.connector import Error +from flask import current_app + +daily_tasks = Blueprint("daily_tasks", __name__) + +@daily_tasks.route("/get_daily_tasks", methods=["GET"]) +def get_all_tasks(): + try: + current_app.logger.info('Starting get_all_tasks request') + cursor = db.get_db().cursor() + + # Note: Query parameters are added after the main part of the URL. + # Here is an example: + # http://localhost:4000/ngo/ngos?founding_year=1971 + # founding_year is the query param. 
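+    # Equivalent example for this blueprint (mounted at "/daily_tasks" in
+    # rest_entry.py); the title value here is a hypothetical mock-data value:
+    # http://localhost:4000/daily_tasks/get_daily_tasks?title=Morning%20run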
+
+        # Get query parameters for filtering
+        title = request.args.get("title")
+        notes = request.args.get("notes")
+
+        current_app.logger.debug(f'Query parameters - title: {title}, notes: {notes}')
+
+        # Prepare the Base query
+        query = "SELECT * FROM daily_tasks WHERE 1=1"
+        params = []
+
+        # Add filters if provided
+        if title:
+            query += " AND title = %s"
+            params.append(title)
+        if notes:
+            query += " AND notes = %s"
+            params.append(notes)
+
+        current_app.logger.debug(f'Executing query: {query} with params: {params}')
+        cursor.execute(query, params)
+        results = cursor.fetchall()
+
+        # Get column names to map to dictionaries
+        columns = [col[0] for col in cursor.description]
+        tasks = [dict(zip(columns, row)) for row in results]
+        cursor.close()
+
+        current_app.logger.info(f'Successfully retrieved {len(tasks)} daily tasks')
+        return jsonify(tasks), 200
+    except Error as e:
+        current_app.logger.error(f'Database error in get_all_tasks: {str(e)}')
+        return jsonify({"error": str(e)}), 500
+
+@daily_tasks.route("/create_daily_task", methods=["POST"])
+def create_task():
+    try:
+        data = request.get_json()
+
+        # Validate required fields
+        required_fields = ["userId", "tagId", "title", "slug", "status", "completed"]
+        for field in required_fields:
+            if field not in data:
+                return jsonify({"error": f"Missing required field: {field}"}), 400
+
+        cursor = db.get_db().cursor()
+
+        query = """
+            INSERT INTO daily_tasks (userId, tagId, title, slug, status, completed, schedule, notes)
+            VALUES (%s, %s, %s, %s, %s, %s, %s, %s)
+        """
+        cursor.execute(
+            query,
+            (
+                data["userId"],
+                data["tagId"],
+                data["title"],
+                data["slug"],
+                data["status"],
+                data["completed"],
+                data.get("schedule", None),
+                data.get("notes", None)
+            ),
+        )
+
+        db.get_db().commit()
+        new_task_id = cursor.lastrowid
+        cursor.close()
+
+        return (
+            jsonify({"message": "task created successfully", "task_id": new_task_id}),
+            201,
+        )
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
+@daily_tasks.route("/delete_daily_task/<int:task_id>", methods=["DELETE"])
+def delete_task(task_id):
+    try:
+        cursor = db.get_db().cursor()
+
+        cursor.execute("SELECT * FROM daily_tasks WHERE id = %s", (task_id,))
+        task = cursor.fetchone()
+        if not task:
+            return jsonify({"error": "task not found"}), 404
+
+        cursor.execute("DELETE FROM daily_tasks WHERE id = %s", (task_id,))
+        db.get_db().commit()
+        cursor.close()
+
+        return jsonify({"message": "task deleted successfully"}), 200
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
+@daily_tasks.route("/daily_task/<int:task_id>", methods=["PUT"])
+def rename_task(task_id):
+    try:
+        data = request.get_json()
+
+        # Check if the task exists
+        cursor = db.get_db().cursor()
+        cursor.execute("SELECT * FROM daily_tasks WHERE id = %s", (task_id,))
+        if not cursor.fetchone():
+            return jsonify({"error": "task not found"}), 404
+
+        # Build update query dynamically based on provided fields
+        update_fields = []
+        params = []
+        allowed_fields = ["title", "completed", "notes"]
+
+        for field in allowed_fields:
+            if field in data:
+                update_fields.append(f"{field} = %s")
+                params.append(data[field])
+
+        if not update_fields:
+            return jsonify({"error": "No valid fields to update"}), 400
+
+        params.append(task_id)
+        query = f"UPDATE daily_tasks SET {', '.join(update_fields)} WHERE id = %s"
+
+        cursor.execute(query, params)
+        db.get_db().commit()
+        cursor.close()
+
+        return jsonify({"message": "task updated successfully"}), 200
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
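+# Example update call (a sketch; assumes the API is published on
+# localhost:4000 as in the example URL above, and task id 3 is hypothetical):
+#   curl -X PUT http://localhost:4000/daily_tasks/daily_task/3 \
+#        -H "Content-Type: application/json" \
+#        -d '{"completed": 1}'
+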
+@daily_tasks.route("/daily_task/<int:task_id>", methods=["GET"])
+def get_daily_task(task_id):
+    try:
+        cursor = db.get_db().cursor()
+
+        cursor.execute("SELECT * FROM daily_tasks WHERE id = %s", (task_id,))
+        task_row = cursor.fetchone()
+
+        if not task_row:
+            return jsonify({"error": "task not found"}), 404
+
+        columns = [col[0] for col in cursor.description]
+        task = dict(zip(columns, task_row))
+
+        cursor.close()
+        return jsonify(task), 200
+
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
diff --git a/api/backend/goals/goal_routes.py b/api/backend/goals/goal_routes.py
new file mode 100644
index 0000000000..41facdb62b
--- /dev/null
+++ b/api/backend/goals/goal_routes.py
@@ -0,0 +1,181 @@
+from flask import Blueprint, jsonify, request
+from backend.db_connection import db
+from mysql.connector import Error
+from flask import current_app
+from datetime import datetime
+
+goals = Blueprint("goals", __name__)
+
+@goals.route("/active", methods=["GET"])
+def get_active_goals():
+    try:
+        cursor = db.get_db().cursor()
+        query = "SELECT id, title, notes, schedule FROM goals g WHERE g.status = 'ACTIVE';"
+        cursor.execute(query)
+        goals_data = cursor.fetchall()
+        cursor.close()
+        return jsonify(goals_data), 200
+
+    except Error as e:
+        current_app.logger.error(f'Database error in get_active_goals: {str(e)}')
+        return jsonify({"error": str(e)}), 500
+
+# Route used on Dr. Alan's home page to get active goals with priority.
+@goals.route("/user/<int:user_id>/active_and_priority", methods=["GET"])
+def get_user_active_goals_with_priority(user_id):
+    try:
+        cursor = db.get_db().cursor()
+        cursor.execute(
+            "SELECT id, title, notes, priority, completed "
+            "FROM goals g "
+            "WHERE g.status = 'ACTIVE' AND g.userId = %s;",
+            (user_id,)
+        )
+        goals_data = cursor.fetchall()
+        cursor.close()
+        return jsonify(goals_data), 200
+
+    except Error as e:
+        current_app.logger.error(f'Database error in get_user_active_goals_with_priority: {str(e)}')
+        return jsonify({"error": str(e)}), 500
+
+
+
+@goals.route("/archive", methods=["GET"])
+def get_archive():
+    try:
+        cursor = db.get_db().cursor()
+        query = "SELECT id, title, notes, schedule, completedAt FROM goals g WHERE g.status = 'ARCHIVED';"
+        cursor.execute(query)
+        goals_data = cursor.fetchall()
+        cursor.close()
+        return jsonify(goals_data), 200
+
+    except Error as e:
+        current_app.logger.error(f'Database error in get_archive: {str(e)}')
+        return jsonify({"error": str(e)}), 500
+
+@goals.route("/all", methods=["GET"])
+def get_all_goals():
+    try:
+        cursor = db.get_db().cursor()
+        query = "SELECT id, title, notes, schedule, status FROM goals g;"
+        cursor.execute(query)
+        goals_data = cursor.fetchall()
+        cursor.close()
+        return jsonify(goals_data), 200
+
+    except Error as e:
+        current_app.logger.error(f'Database error in get_all_goals: {str(e)}')
+        return jsonify({"error": str(e)}), 500
+
+@goals.route("/subgoals", methods=["GET"])
+def get_subgoal():
+    try:
+        current_app.logger.info('Starting get_subgoal request')
+        cursor = db.get_db().cursor()
+        query = "SELECT sg.goalsId, sg.title FROM subgoals sg;"
+        current_app.logger.debug(f'Executing query: {query}')
+        cursor.execute(query)
+        goals_data = cursor.fetchall()
+        cursor.close()
+
+        current_app.logger.info(f'Successfully retrieved {len(goals_data)} subgoals')
+        return jsonify(goals_data), 200
+
+    except Error as e:
+        current_app.logger.error(f'Database error in get_subgoal: {str(e)}')
+        return jsonify({"error": str(e)}), 500
+
+
+@goals.route("/<int:goal_id>/complete", methods=["PUT"])
+def mark_goal_complete(goal_id):
+    try:
+        cursor = db.get_db().cursor()
+        cursor.execute("SELECT * FROM goals WHERE id = %s", (goal_id,))
+        goal = cursor.fetchone()
+        if not goal:
+            return jsonify({"error": "Goal not found"}), 404
+        # Update goal status to completed (1)
+        cursor.execute("UPDATE goals SET completed = 1, status = 'ARCHIVED', completedAt = NOW() WHERE id = %s", (goal_id,))
+        db.get_db().commit()
+        cursor.close()
+
+        return jsonify({"message": "Goal marked as completed successfully"}), 200
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
+
+@goals.route("/<int:goal_id>/delete", methods=["DELETE"])
+def delete_goal(goal_id):
+    try:
+        cursor = db.get_db().cursor()
+        cursor.execute("SELECT * FROM goals WHERE id = %s", (goal_id,))
+        goal = cursor.fetchone()
+        if not goal:
+            return jsonify({"error": "Goal not found"}), 404
+        # Delete the goal
+        cursor.execute("DELETE FROM goals WHERE id = %s", (goal_id,))
+        db.get_db().commit()
+        cursor.close()
+
+        return jsonify({"message": "Goal deleted"}), 200
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
+@goals.route("/create", methods=["POST"])
+def add_goal():
+    try:
+        data = request.get_json()
+        if not data:
+            return jsonify({"error": "No data provided"}), 400
+
+        user_id = data.get("userID")
+        title = data.get("title")
+        notes = data.get("notes")
+        status = data.get("status", "ACTIVE")
+        priority = data.get("priority", "low")
+        schedule = data.get("schedule")  # YYYY-MM-DD
+
+        if not user_id or not title:
+            return jsonify({"error": "userID and title are required"}), 400
+        if schedule:
+            try:
+                datetime.strptime(schedule, '%Y-%m-%d')
+            except ValueError:
+                return jsonify({"error": "Invalid date format. Use YYYY-MM-DD"}), 400
+
+        cursor = db.get_db().cursor()
+        query = """
+            INSERT INTO goals (userId, title, notes, status, priority, schedule)
+            VALUES (%s, %s, %s, %s, %s, %s)
+        """
+        cursor.execute(query, (user_id, title, notes, status, priority, schedule))
+        db.get_db().commit()
+        cursor.close()
+
+        return jsonify({"message": "Goal added successfully"}), 200
+
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
+# Abandoned as we don't have the data to back this up :(
+# @goals.route("/active/employees", methods=["GET"])
+# def get_active_goals():
+#     try:
+#         cursor = db.get_db().cursor()
+#         query = """SELECT g.id, g.title, g.notes, g.schedule, u.id, u.
+#                  FROM goals g JOIN users u ON g.userId = u.id
+#                  WHERE g.status = 'ACTIVE' AND
+#                  GROUPBY u.id;"""
+#         cursor.execute(query)
+#         goals_data = cursor.fetchall()
+#         cursor.close()
+#         return jsonify(goals_data), 200

+#     except Error as e:
+#         current_app.logger.error(f'Database error in get_active_goals: {str(e)}')
+#         return jsonify({"error": str(e)}), 500
\ No newline at end of file
diff --git a/api/backend/ngos/ngo_routes.py b/api/backend/goals/ngo_routes.py
similarity index 100%
rename from api/backend/ngos/ngo_routes.py
rename to api/backend/goals/ngo_routes.py
diff --git a/api/backend/habits/habit_routes.py b/api/backend/habits/habit_routes.py
new file mode 100644
index 0000000000..e83ed65209
--- /dev/null
+++ b/api/backend/habits/habit_routes.py
@@ -0,0 +1,35 @@
+from flask import Blueprint, jsonify, request
+from backend.db_connection import db
+from mysql.connector import Error
+from flask import current_app
+from datetime import datetime
+
+habits = Blueprint("habits", __name__)
+
+@habits.route("/create", methods=["POST"])
+def add_habit():
+    try:
+        data = request.get_json()
+        if not data:
+            return jsonify({"error": "No data provided"}), 400
+
+        uid = data.get("uid")
+        title = data.get("title")
+        notes = data.get("notes")
+
+        if not uid or not title:
+            return jsonify({"error": "userID and title are required"}), 400
+
+        cursor = db.get_db().cursor()
+        query = """
+            INSERT INTO daily_tasks (userId, title, notes, createdAt)
+            VALUES (%s, %s, %s, NOW())
+        """
+        cursor.execute(query, (uid, title, notes))
+        db.get_db().commit()
+        cursor.close()
+
+        return jsonify({"message": "Successfully logged!"}), 200
+
+    except Exception as e:
+        return jsonify({"error": str(e)}), 500
\ No newline at end of file
diff --git a/api/backend/ml_models/__init__.py b/api/backend/ml_models/__init__.py
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/api/backend/ml_models/model01.py b/api/backend/ml_models/model01.py
deleted file mode 100644
index d5c5a3070f..0000000000
--- a/api/backend/ml_models/model01.py
+++ /dev/null
@@ -1,45 +0,0 @@
-"""
-model01.py is an example of how to access model parameter values that you are storing
-in the database and use them to make a prediction when a route associated with prediction is
-accessed.
-"""
-from backend.db_connection import db
-import numpy as np
-# import logging
-
-from flask import current_app
-
-def train():
-    """
-    You could have a function that performs training from scratch as well as testing (see below).
-    It could be activated from a route for an "administrator role" or something similar.
- """ - return 'Training the model' - -def test(): - return 'Testing the model' - -def predict(var01, var02): - """ - Retreives model parameters from the database and uses them for real-time prediction - """ - # get a database cursor - cursor = db.get_db().cursor() - # get the model params from the database - query = 'SELECT beta_vals FROM model1_params ORDER BY sequence_number DESC LIMIT 1' - cursor.execute(query) - return_val = cursor.fetchone() - params = return_val['beta_vals'] - - # turn the values from the database into a numpy array - params_array = np.array(list(map(float, params[1:-1].split(',')))) - current_app.logger.info(f'params array = {params_array}') - - # turn the variables sent from the UI into a numpy array - input_array = np.array([1.0, float(var01), float(var02)]) - - # calculate the dot product (since this is a fake regression) - prediction = np.dot(params_array, input_array) - - return prediction - diff --git a/api/backend/products/products_routes.py b/api/backend/products/products_routes.py deleted file mode 100644 index a3e596d0d3..0000000000 --- a/api/backend/products/products_routes.py +++ /dev/null @@ -1,208 +0,0 @@ -######################################################## -# Sample customers blueprint of endpoints -# Remove this file if you are not using it in your project -######################################################## - -from flask import Blueprint -from flask import request -from flask import jsonify -from flask import make_response -from flask import current_app -from backend.db_connection import db - -#------------------------------------------------------------ -# Create a new Blueprint object, which is a collection of -# routes. -products = Blueprint('products', __name__) - -#------------------------------------------------------------ -# Get all the products from the database, package them up, -# and return them to the client -@products.route('/products', methods=['GET']) -def get_products(): - query = ''' - SELECT id, - product_code, - product_name, - list_price, - category - FROM products - ''' - - # get a cursor object from the database - cursor = db.get_db().cursor() - - # use cursor to query the database for a list of products - cursor.execute(query) - - # fetch all the data from the cursor - # The cursor will return the data as a - # Python Dictionary - theData = cursor.fetchall() - - # Create a HTTP Response object and add results of the query to it - # after "jasonify"-ing it. - response = make_response(jsonify(theData)) - # set the proper HTTP Status code of 200 (meaning all good) - response.status_code = 200 - # send the response back to the client - return response - -# ------------------------------------------------------------ -# get product information about a specific product -# notice that the route takes and then you see id -# as a parameter to the function. This is one way to send -# parameterized information into the route handler. -@products.route('/product/', methods=['GET']) -def get_product_detail (id): - - query = f'''SELECT id, - product_name, - description, - list_price, - category - FROM products - WHERE id = {str(id)} - ''' - - # logging the query for debugging purposes. - # The output will appear in the Docker logs output - # This line has nothing to do with actually executing the query... - # It is only for debugging purposes. 
- current_app.logger.info(f'GET /product/ query={query}') - - # get the database connection, execute the query, and - # fetch the results as a Python Dictionary - cursor = db.get_db().cursor() - cursor.execute(query) - theData = cursor.fetchall() - - # Another example of logging for debugging purposes. - # You can see if the data you're getting back is what you expect. - current_app.logger.info(f'GET /product/ Result of query = {theData}') - - response = make_response(jsonify(theData)) - response.status_code = 200 - return response - -# ------------------------------------------------------------ -# Get the top 5 most expensive products from the database -@products.route('/mostExpensive') -def get_most_pop_products(): - - query = ''' - SELECT product_code, - product_name, - list_price, - reorder_level - FROM products - ORDER BY list_price DESC - LIMIT 5 - ''' - - # Same process as handler above - cursor = db.get_db().cursor() - cursor.execute(query) - theData = cursor.fetchall() - - response = make_response(jsonify(theData)) - response.status_code = 200 - return response - -# ------------------------------------------------------------ -# Route to get the 10 most expensive items from the -# database. -@products.route('/tenMostExpensive', methods=['GET']) -def get_10_most_expensive_products(): - - query = ''' - SELECT product_code, - product_name, - list_price, - reorder_level - FROM products - ORDER BY list_price DESC - LIMIT 10 - ''' - - # Same process as above - cursor = db.get_db().cursor() - cursor.execute(query) - theData = cursor.fetchall() - - response = make_response(jsonify(theData)) - response.status_code = 200 - return response - - -# ------------------------------------------------------------ -# This is a POST route to add a new product. -# Remember, we are using POST routes to create new entries -# in the database. 
-@products.route('/product', methods=['POST']) -def add_new_product(): - - # In a POST request, there is a - # collecting data from the request object - the_data = request.json - current_app.logger.info(the_data) - - #extracting the variable - name = the_data['product_name'] - description = the_data['product_description'] - price = the_data['product_price'] - category = the_data['product_category'] - - query = f''' - INSERT INTO products (product_name, - description, - category, - list_price) - VALUES ('{name}', '{description}', '{category}', {str(price)}) - ''' - # TODO: Make sure the version of the query above works properly - # Constructing the query - # query = 'insert into products (product_name, description, category, list_price) values ("' - # query += name + '", "' - # query += description + '", "' - # query += category + '", ' - # query += str(price) + ')' - current_app.logger.info(query) - - # executing and committing the insert statement - cursor = db.get_db().cursor() - cursor.execute(query) - db.get_db().commit() - - response = make_response("Successfully added product") - response.status_code = 200 - return response - -# ------------------------------------------------------------ -### Get all product categories -@products.route('/categories', methods = ['GET']) -def get_all_categories(): - query = ''' - SELECT DISTINCT category AS label, category as value - FROM products - WHERE category IS NOT NULL - ORDER BY category - ''' - - cursor = db.get_db().cursor() - cursor.execute(query) - theData = cursor.fetchall() - - response = make_response(jsonify(theData)) - response.status_code = 200 - return response - -# ------------------------------------------------------------ -# This is a stubbed route to update a product in the catalog -# The SQL query would be an UPDATE. 
-@products.route('/product', methods = ['PUT']) -def update_product(): - product_info = request.json - current_app.logger.info(product_info) - - return "Success" \ No newline at end of file diff --git a/api/backend/rest_entry.py b/api/backend/rest_entry.py index 2bba27f8a1..dedd880c76 100644 --- a/api/backend/rest_entry.py +++ b/api/backend/rest_entry.py @@ -5,8 +5,14 @@ from logging.handlers import RotatingFileHandler from backend.db_connection import db -from backend.simple.simple_routes import simple_routes -from backend.ngos.ngo_routes import ngos +from backend.goals.goal_routes import goals +from backend.users.users_routes import users +from backend.support.support_routes import support +from backend.tags.tags_routes import tags +from backend.daily_tasks.daily_tasks_routes import daily_tasks +from backend.consistent_tasks.consistent_tasks_routes import consistent_tasks +from backend.habits.habit_routes import habits + def create_app(): app = Flask(__name__) @@ -45,8 +51,13 @@ def create_app(): # Register the routes from each Blueprint with the app object # and give a url prefix to each app.logger.info("create_app(): registering blueprints with Flask app object.") - app.register_blueprint(simple_routes) - app.register_blueprint(ngos, url_prefix="/ngo") + app.register_blueprint(goals, url_prefix="/goals") + app.register_blueprint(users, url_prefix="/users") + app.register_blueprint(support, url_prefix="/support") + app.register_blueprint(tags, url_prefix="/tags") + app.register_blueprint(daily_tasks, url_prefix="/daily_tasks") + app.register_blueprint(consistent_tasks, url_prefix="/consistent_tasks") + app.register_blueprint(habits, url_prefix="/habits") # Don't forget to return the app object return app @@ -86,4 +97,6 @@ def setup_logging(app): # Set the base logging level to DEBUG to capture everything app.logger.setLevel(logging.DEBUG) - app.logger.info('API startup') \ No newline at end of file + app.logger.info('API startup') + + \ No newline at end of file diff --git a/api/backend/simple/playlist.py b/api/backend/simple/playlist.py deleted file mode 100644 index a9e7a9ef03..0000000000 --- a/api/backend/simple/playlist.py +++ /dev/null @@ -1,129 +0,0 @@ -# ------------------------------------------------------------ -# Sample data for testing generated by ChatGPT -# ------------------------------------------------------------ - -sample_playlist_data = { - "playlist": { - "id": "37i9dQZF1DXcBWIGoYBM5M", - "name": "Chill Hits", - "description": "Relax and unwind with the latest chill hits.", - "owner": { - "id": "spotify_user_123", - "display_name": "Spotify User" - }, - "tracks": { - "items": [ - { - "track": { - "id": "3n3Ppam7vgaVa1iaRUc9Lp", - "name": "Lose Yourself", - "artists": [ - { - "id": "1dfeR4HaWDbWqFHLkxsg1d", - "name": "Eminem" - } - ], - "album": { - "id": "1ATL5GLyefJaxhQzSPVrLX", - "name": "8 Mile" - }, - "duration_ms": 326000, - "track_number": 1, - "disc_number": 1, - "preview_url": "https://p.scdn.co/mp3-preview/lose-yourself.mp3", - "uri": "spotify:track:3n3Ppam7vgaVa1iaRUc9Lp" - } - }, - { - "track": { - "id": "7ouMYWpwJ422jRcDASZB7P", - "name": "Blinding Lights", - "artists": [ - { - "id": "0fW8E0XdT6aG9aFh6jGpYo", - "name": "The Weeknd" - } - ], - "album": { - "id": "1ATL5GLyefJaxhQzSPVrLX", - "name": "After Hours" - }, - "duration_ms": 200040, - "track_number": 9, - "disc_number": 1, - "preview_url": "https://p.scdn.co/mp3-preview/blinding-lights.mp3", - "uri": "spotify:track:7ouMYWpwJ422jRcDASZB7P" - } - }, - { - "track": { - "id": "4uLU6hMCjMI75M1A2tKUQC", 
- "name": "Shape of You", - "artists": [ - { - "id": "6eUKZXaKkcviH0Ku9w2n3V", - "name": "Ed Sheeran" - } - ], - "album": { - "id": "3fMbdgg4jU18AjLCKBhRSm", - "name": "Divide" - }, - "duration_ms": 233713, - "track_number": 4, - "disc_number": 1, - "preview_url": "https://p.scdn.co/mp3-preview/shape-of-you.mp3", - "uri": "spotify:track:4uLU6hMCjMI75M1A2tKUQC" - } - }, - { - "track": { - "id": "0VjIjW4GlUZAMYd2vXMi3b", - "name": "Levitating", - "artists": [ - { - "id": "4tZwfgrHOc3mvqYlEYSvVi", - "name": "Dua Lipa" - } - ], - "album": { - "id": "7dGJo4pcD2V6oG8kP0tJRR", - "name": "Future Nostalgia" - }, - "duration_ms": 203693, - "track_number": 5, - "disc_number": 1, - "preview_url": "https://p.scdn.co/mp3-preview/levitating.mp3", - "uri": "spotify:track:0VjIjW4GlUZAMYd2vXMi3b" - } - }, - { - "track": { - "id": "6habFhsOp2NvshLv26DqMb", - "name": "Sunflower", - "artists": [ - { - "id": "1dfeR4HaWDbWqFHLkxsg1d", - "name": "Post Malone" - }, - { - "id": "0C8ZW7ezQVs4URX5aX7Kqx", - "name": "Swae Lee" - } - ], - "album": { - "id": "6k3hyp4efgfHP5GMVd3Agw", - "name": "Spider-Man: Into the Spider-Verse (Soundtrack)" - }, - "duration_ms": 158000, - "track_number": 3, - "disc_number": 1, - "preview_url": "https://p.scdn.co/mp3-preview/sunflower.mp3", - "uri": "spotify:track:6habFhsOp2NvshLv26DqMb" - } - } - ] - }, - "uri": "spotify:playlist:37i9dQZF1DXcBWIGoYBM5M" - } -} \ No newline at end of file diff --git a/api/backend/simple/simple_routes.py b/api/backend/simple/simple_routes.py deleted file mode 100644 index a753d14c50..0000000000 --- a/api/backend/simple/simple_routes.py +++ /dev/null @@ -1,98 +0,0 @@ -from flask import ( - Blueprint, - request, - jsonify, - make_response, - current_app, - redirect, - url_for, -) -import json -from backend.db_connection import db -from backend.simple.playlist import sample_playlist_data -from backend.ml_models import model01 - -# This blueprint handles some basic routes that you can use for testing -simple_routes = Blueprint("simple_routes", __name__) - - -# ------------------------------------------------------------ -# / is the most basic route -# Once the api container is started, in a browser, go to -# localhost:4000/playlist -@simple_routes.route("/") -def welcome(): - current_app.logger.info("GET / handler") - welcome_message = "
Welcome to the CS 3200 Project Template REST API" - response = make_response(welcome_message) - response.status_code = 200 - return response - - -# ------------------------------------------------------------ -# /playlist returns the sample playlist data contained in playlist.py -# (imported above) -@simple_routes.route("/playlist") -def get_playlist_data(): - current_app.logger.info("GET /playlist handler") - response = make_response(jsonify(sample_playlist_data)) - response.status_code = 200 - return response - - -# ------------------------------------------------------------ -@simple_routes.route("/niceMesage", methods=["GET"]) -def affirmation(): - message = """ -
-Think about it...
-
-You only need to be 1% better today than you were yesterday!
-    """
-    response = make_response(message)
-    response.status_code = 200
-    return response
-
-
-# ------------------------------------------------------------
-# Demonstrates how to redirect from one route to another.
-@simple_routes.route("/message")
-def mesage():
-    return redirect(url_for(affirmation))
-
-
-@simple_routes.route("/data")
-def getData():
-    current_app.logger.info("GET /data handler")
-
-    # Create a simple dictionary with nested data
-    data = {"a": {"b": "123", "c": "Help"}, "z": {"b": "456", "c": "me"}}
-
-    response = make_response(jsonify(data))
-    response.status_code = 200
-    return response
-
-
-@simple_routes.route("/prediction/<var_01>/<var_02>", methods=["GET"])
-def get_prediction(var_01, var_02):
-    current_app.logger.info("GET /prediction handler")
-
-    try:
-        # Call prediction function from model01
-        prediction = model01.predict(var_01, var_02)
-        current_app.logger.info(f"prediction value returned is {prediction}")
-
-        response_data = {
-            "prediction": prediction,
-            "input_variables": {"var01": var_01, "var02": var_02},
-        }
-
-        response = make_response(jsonify(response_data))
-        response.status_code = 200
-        return response
-
-    except Exception as e:
-        response = make_response(
-            jsonify({"error": "Error processing prediction request"})
-        )
-        response.status_code = 500
-        return response
diff --git a/api/backend/support/support_routes.py b/api/backend/support/support_routes.py
new file mode 100644
index 0000000000..9f6225da09
--- /dev/null
+++ b/api/backend/support/support_routes.py
@@ -0,0 +1,189 @@
+from flask import Blueprint, jsonify, request
+from backend.db_connection import db
+from mysql.connector import Error
+from flask import current_app
+
+support = Blueprint("support", __name__)
+
+@support.route("/bugs", methods=["GET"])
+def get_open_bugs():
+    try:
+        cursor = db.get_db().cursor()
+        query = "SELECT title, description, id, priority, completed FROM bug_reports WHERE completed = 0;"
+        cursor.execute(query)
+        stats = cursor.fetchall()
+        cursor.close()
+        return jsonify(stats), 200
+
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
+
+@support.route("/bugs/<int:bug_id>/complete", methods=["PUT"])
+def mark_bug_complete(bug_id):
+    try:
+        cursor = db.get_db().cursor()
+        cursor.execute("SELECT * FROM bug_reports WHERE id = %s", (bug_id,))
+        bug = cursor.fetchone()
+        if not bug:
+            return jsonify({"error": "Bug not found"}), 404
+        # Update bug status to completed (1)
+        cursor.execute("UPDATE bug_reports SET completed = 1 WHERE id = %s", (bug_id,))
+        db.get_db().commit()
+        cursor.close()
+
+        return jsonify({"message": "Bug marked as completed successfully"}), 200
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
+
+@support.route("/bug_reports", methods=["GET"])
+def get_bug_reports():
+    try:
+        current_app.logger.info('Starting get_bug_reports request')
+        cursor = db.get_db().cursor()
+
+        # Get query parameters for filtering
+        userId = request.args.get("userId")
+        title = request.args.get("title")
+        description = request.args.get("description")
+        status = request.args.get("status")
+        priority = request.args.get("priority")
+
+        current_app.logger.debug(f'Query parameters - userId: {userId}, title: {title}, description: {description}, status: {status}, priority: {priority}')
+
+        # Prepare the Base query
+        query = "SELECT * FROM bug_reports WHERE 1=1"
+        params = []
+
+        # Add filters if provided
+        if userId:
+            query += " AND userId = %s"
+            params.append(userId)
+        if title:
+            query += " AND title = %s"
+            params.append(title)
+        if description:
+            query += " AND description = %s"
+            params.append(description)
+        if status:
+            query += " AND status = %s"
+            params.append(status)
+        if priority:
+            query += " AND priority = %s"
+            params.append(priority)
+
+        current_app.logger.debug(f'Executing query: {query} with params: {params}')
+        cursor.execute(query, params)
+        results = cursor.fetchall()
+
+        # Get column names to map to dictionaries
+        columns = [col[0] for col in cursor.description]
+        bug_reports = [dict(zip(columns, row)) for row in results]
+        cursor.close()
+
+        current_app.logger.info(f'Successfully retrieved {len(bug_reports)} bug reports')
+        return jsonify(bug_reports), 200
+    except Error as e:
+        current_app.logger.error(f'Database error in get_bug_reports: {str(e)}')
+        return jsonify({"error": str(e)}), 500
+
+@support.route("/bug_reports/<int:bug_report_id>", methods=["PUT"])
+def archive_bug_report(bug_report_id):
+    try:
+        cursor = db.get_db().cursor()
+        cursor.execute("SELECT * FROM bug_reports WHERE id = %s", (bug_report_id,))
+        if not cursor.fetchone():
+            return jsonify({"error": "bug report not found"}), 404
+
+        query = 'UPDATE bug_reports SET status = 1 WHERE id = %s'
+        cursor.execute(query, (bug_report_id,))
+        db.get_db().commit()
+        cursor.close()
+
+        return jsonify({"message": "bug report updated successfully"}), 200
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
+@support.route("/post_reply/<int:user_id>", methods=["GET"])
+def get_post_replies(user_id):
+    try:
+        cursor = db.get_db().cursor()
+
+        # Get all replies made by this user
+        cursor.execute("SELECT * FROM post_reply WHERE userId = %s", (user_id,))
+        post_rows = cursor.fetchall()
+
+        if not post_rows:
+            return jsonify({"error": "no replies from user"}), 404
+
+        columns = [col[0] for col in cursor.description]
+        posts = [dict(zip(columns, row)) for row in post_rows]
+
+        cursor.close()
+        return jsonify(posts), 200
+
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
+@support.route("/post_reply", methods=["POST"])
+def create_post_reply():
+    try:
+        data = request.get_json()
+
+        # Validate required fields
+        required_fields = ["userId", "postId", "title", "createdAt", "tag"]
+        for field in required_fields:
+            if field not in data:
+                return jsonify({"error": f"Missing required field: {field}"}), 400
+
+        cursor = db.get_db().cursor()
+
+        query = """
+            INSERT INTO post_reply (userId, postId, title, createdAt, publishedAt, content, tag)
+            VALUES (%s, %s, %s, %s, %s, %s, %s)
+        """
+        cursor.execute(
+            query,
+            (
+                data["userId"],
+                data["postId"],
+                data["title"],
+                data["createdAt"],
+                data.get("publishedAt"),
+                data.get("content"),
+                data["tag"]
+            ),
+        )
+
+        db.get_db().commit()
+        new_reply_id = cursor.lastrowid
+        cursor.close()
+
+        return (
+            jsonify({"message": "post reply created successfully", "reply_id": new_reply_id}),
+            201,
+        )
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
+@support.route("/post_reply/<int:post_reply_id>", methods=["DELETE"])
+def delete_post_reply(post_reply_id):
+    try:
+        cursor = db.get_db().cursor()
+
+        cursor.execute("SELECT * FROM post_reply WHERE id = %s", (post_reply_id,))
+        reply = cursor.fetchone()
+        if not reply:
+            return jsonify({"error": "post reply not found"}), 404
+
+        cursor.execute("DELETE FROM post_reply WHERE id = %s", (post_reply_id,))
+        db.get_db().commit()
+        cursor.close()
+
+        return jsonify({"message": "post reply deleted successfully"}), 200
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
diff --git a/api/backend/tags/tags_routes.py b/api/backend/tags/tags_routes.py
diff --git a/api/backend/tags/tags_routes.py b/api/backend/tags/tags_routes.py
new file mode 100644
index 0000000000..ef14629f8d
--- /dev/null
+++ b/api/backend/tags/tags_routes.py
@@ -0,0 +1,163 @@
+from flask import Blueprint, jsonify, request
+from backend.db_connection import db
+from mysql.connector import Error
+from flask import current_app
+
+tags = Blueprint("tags", __name__)
+
+@tags.route("/get_tag", methods=["GET"])
+def get_all_tags():
+    try:
+        current_app.logger.info('Starting get_all_tags request')
+        cursor = db.get_db().cursor()
+
+        # Get query parameters for filtering
+        name = request.args.get("name")
+        color = request.args.get("color")
+
+        current_app.logger.debug(f'Query parameters - name: {name}, color: {color}')
+
+        # Prepare the base query
+        query = "SELECT * FROM tags WHERE 1=1"
+        params = []
+
+        # Add filters if provided
+        if name:
+            query += " AND name = %s"
+            params.append(name)
+        if color:
+            query += " AND color = %s"
+            params.append(color)
+
+        current_app.logger.debug(f'Executing query: {query} with params: {params}')
+        cursor.execute(query, params)
+        results = cursor.fetchall()
+
+        # Get column names to map to dictionaries
+        columns = [col[0] for col in cursor.description]
+        tags_list = [dict(zip(columns, row)) for row in results]
+        cursor.close()
+
+        current_app.logger.info(f'Successfully retrieved {len(tags_list)} tags')
+        return jsonify(tags_list), 200
+    except Error as e:
+        current_app.logger.error(f'Database error in get_all_tags: {str(e)}')
+        return jsonify({"error": str(e)}), 500
+
+@tags.route("/create_tag", methods=["POST"])
+def create_tag():
+    try:
+        data = request.get_json()
+
+        # Validate required fields
+        required_fields = ["color"]
+        for field in required_fields:
+            if field not in data:
+                return jsonify({"error": f"Missing required field: {field}"}), 400
+
+        cursor = db.get_db().cursor()
+
+        query = """
+            INSERT INTO tags (name, color)
+            VALUES (%s, %s)
+        """
+        cursor.execute(
+            query,
+            (
+                data.get("name"),
+                data.get("color"),
+            ),
+        )
+
+        db.get_db().commit()
+        new_tag_id = cursor.lastrowid
+        cursor.close()
+
+        return (
+            jsonify({"message": "tag created successfully", "tag_id": new_tag_id}),
+            201,
+        )
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
+@tags.route("/delete_tag/<int:tag_id>", methods=["DELETE"])
+def delete_tag(tag_id):
+    try:
+        cursor = db.get_db().cursor()
+
+        cursor.execute("SELECT * FROM tags WHERE id = %s", (tag_id,))
+        tag = cursor.fetchone()
+        if not tag:
+            return jsonify({"error": "tag not found"}), 404
+
+        cursor.execute("DELETE FROM tags WHERE id = %s", (tag_id,))
+        db.get_db().commit()
+        cursor.close()
+
+        return jsonify({"message": "tag deleted successfully"}), 200
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
+@tags.route("/rename_tag/<int:tag_id>", methods=["PUT"])
+def rename_tag(tag_id):
+    try:
+        data = request.get_json()
+
+        # Check if the tag exists
+        cursor = db.get_db().cursor()
+        cursor.execute("SELECT * FROM tags WHERE id = %s", (tag_id,))
+        if not cursor.fetchone():
+            return jsonify({"error": "tag not found"}), 404
+
+        # Build update query dynamically based on provided fields
+        update_fields = []
+        params = []
+        allowed_fields = ["name", "color"]
+
+        for field in allowed_fields:
+            if field in data:
+                update_fields.append(f"{field} = %s")
+                params.append(data[field])
+
+        if not update_fields:
+            return jsonify({"error": "No valid fields to update"}), 400
+
+        params.append(tag_id)
+        query = f"UPDATE tags SET {', '.join(update_fields)} WHERE id = %s"
+
+        cursor.execute(query, params)
+        db.get_db().commit()
+        cursor.close()
+
+        return jsonify({"message": "tag updated successfully"}), 200
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
+@tags.route("/tags/<int:tag_id>", methods=["GET"])
+def get_tag(tag_id):
+    try:
+        cursor = db.get_db().cursor()
+        cursor.execute("SELECT * FROM tags WHERE id = %s", (tag_id,))
+        tag_row = cursor.fetchone()
+
+        if not tag_row:
+            return jsonify({"error": "tag not found"}), 404
+
+        columns = [col[0] for col in cursor.description]
+        tag = dict(zip(columns, tag_row))
+
+        cursor.close()
+        return jsonify(tag), 200
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
+@tags.route("/goals/<int:goal_id>/tags", methods=["GET"])
+def get_goal_tags(goal_id):
+    try:
+        cursor = db.get_db().cursor()
+        cursor.execute("""
+            SELECT t.id, t.name, t.color
+            FROM tags t
+            JOIN goals_tags gt ON gt.tag_id = t.id
+            WHERE gt.goal_id = %s
+        """, (goal_id,))
+        rows = cursor.fetchall()
+
+        # Map rows to dictionaries so the response shape matches the other routes
+        columns = [col[0] for col in cursor.description]
+        goal_tags = [dict(zip(columns, row)) for row in rows]
+
+        cursor.close()
+        return jsonify(goal_tags), 200
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
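The `/goals/<goal_id>/tags` route above is where the `goals_tags` bridge table from the SQL work pays off: one join, no extra front end needed. A quick sketch of how a client might exercise these endpoints follows; `web-api:4000` matches the host the Streamlit pages already use, but the URL prefix the `tags` blueprint is registered under isn't shown in this diff, so treat the paths as assumptions.

```python
# Hypothetical smoke test for the tag routes; assumes the `tags` blueprint
# is registered at the API root on http://web-api:4000 (not shown in this diff).
import requests

BASE = "http://web-api:4000"

# Create a tag (only "color" is required by create_tag)
r = requests.post(f"{BASE}/create_tag", json={"name": "research", "color": "#FF8800"})
tag_id = r.json()["tag_id"]

# Partial update: rename_tag only touches the fields you send
requests.put(f"{BASE}/rename_tag/{tag_id}", json={"name": "active-research"})

# Bridge-table lookup: all tags attached to goal 1 via goals_tags
print(requests.get(f"{BASE}/goals/1/tags").json())

# Clean up
requests.delete(f"{BASE}/delete_tag/{tag_id}")
```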
diff --git a/api/backend/users/users_routes.py b/api/backend/users/users_routes.py
new file mode 100644
index 0000000000..2c1eea96c0
--- /dev/null
+++ b/api/backend/users/users_routes.py
@@ -0,0 +1,174 @@
+from flask import Blueprint, jsonify, request
+from backend.db_connection import db
+from mysql.connector import Error
+from flask import current_app
+
+users = Blueprint("users", __name__)
+
+@users.route("/appstats", methods=["GET"])
+def get_appstats():
+    try:
+        cursor = db.get_db().cursor()
+        query = "SELECT userId, registeredAt FROM user_data;"
+        cursor.execute(query)
+        stats = cursor.fetchall()
+        cursor.close()
+        return jsonify(stats), 200
+
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
+@users.route("/users", methods=["GET"])
+def get_all_users():
+    try:
+        current_app.logger.info('Starting get_all_users request')
+        cursor = db.get_db().cursor()
+
+        # Get query parameters for filtering
+        firstname = request.args.get("firstName")
+        lastname = request.args.get("lastName")
+        phonenumber = request.args.get("phoneNumber")
+        email = request.args.get("email")
+        role = request.args.get("role")
+
+        current_app.logger.debug(f'Query parameters - firstName: {firstname}, lastName: {lastname}, phoneNumber: {phonenumber}, email: {email}, role: {role}')
+
+        # Prepare the base query
+        query = "SELECT * FROM users WHERE 1=1"
+        params = []
+
+        # Add filters if provided
+        if firstname:
+            query += " AND firstName = %s"
+            params.append(firstname)
+        if lastname:
+            query += " AND lastName = %s"
+            params.append(lastname)
+        if phonenumber:
+            query += " AND phoneNumber = %s"
+            params.append(phonenumber)
+        if email:
+            query += " AND email = %s"
+            params.append(email)
+        if role:
+            query += " AND role = %s"
+            params.append(role)
+
+        current_app.logger.debug(f'Executing query: {query} with params: {params}')
+        cursor.execute(query, params)
+        results = cursor.fetchall()
+
+        # Get column names to map to dictionaries
+        # (named user_list so it doesn't shadow the `users` blueprint)
+        columns = [col[0] for col in cursor.description]
+        user_list = [dict(zip(columns, row)) for row in results]
+        cursor.close()
+
+        current_app.logger.info(f'Successfully retrieved {len(user_list)} users')
+        return jsonify(user_list), 200
+    except Error as e:
+        current_app.logger.error(f'Database error in get_all_users: {str(e)}')
+        return jsonify({"error": str(e)}), 500
+
+@users.route("/get_user/<int:user_id>", methods=["GET"])
+def get_user(user_id):
+    try:
+        cursor = db.get_db().cursor()
+
+        cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
+        user_row = cursor.fetchone()
+
+        if not user_row:
+            return jsonify({"error": "user not found"}), 404
+
+        columns = [col[0] for col in cursor.description]
+        user = dict(zip(columns, user_row))
+
+        cursor.close()
+        return jsonify(user), 200
+
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+@users.route("/create_users", methods=["POST"])
+def create_user():
+    try:
+        data = request.get_json()
+
+        # Validate required fields
+        required_fields = ["firstName", "lastName", "email", "role"]
+        for field in required_fields:
+            if field not in data:
+                return jsonify({"error": f"Missing required field: {field}"}), 400
+
+        cursor = db.get_db().cursor()
+
+        query = """
+            INSERT INTO users (firstName, middleName, lastName, phoneNumber, email, role, planType, manages)
+            VALUES (%s, %s, %s, %s, %s, %s, %s, %s)
+        """
+        cursor.execute(
+            query,
+            (
+                data["firstName"],
+                data.get("middleName"),
+                data["lastName"],
+                data.get("phoneNumber"),
+                data["email"],
+                data["role"],
+                data.get("planType", "plan_name"),
+                data.get("manages")
+            ),
+        )
+
+        db.get_db().commit()
+        new_user_id = cursor.lastrowid
+        cursor.close()
+
+        return (
+            jsonify({"message": "user created successfully", "user_id": new_user_id}),
+            201,
+        )
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
+@users.route("/delete_users/<int:user_id>", methods=["DELETE"])
+def delete_user(user_id):
+    try:
+        cursor = db.get_db().cursor()
+
+        cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))
+        user = cursor.fetchone()
+        if not user:
+            return jsonify({"error": "user not found"}), 404
+
+        cursor.execute("DELETE FROM users WHERE id = %s", (user_id,))
+        db.get_db().commit()
+        cursor.close()
+
+        return jsonify({"message": "user deleted successfully"}), 200
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
+@users.route("/user_data/<int:user_data_id>", methods=["GET"])
+def get_user_data(user_data_id):
+    try:
+        cursor = db.get_db().cursor()
+
+        cursor.execute("SELECT * FROM user_data WHERE id = %s", (user_data_id,))
+        user_row = cursor.fetchone()
+
+        if not user_row:
+            return jsonify({"error": "user data not found"}), 404
+
+        columns = [col[0] for col in cursor.description]
+        user_data = dict(zip(columns, user_row))
+
+        cursor.close()
+        return jsonify(user_data), 200
+
+    except Error as e:
+        return jsonify({"error": str(e)}), 500
+
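Before moving to the front-end changes, a quick walkthrough of how the user routes above fit together. As with the tags example, the paths assume the blueprint is reachable at the API root on `web-api:4000`; adjust for whatever URL prefix it is actually registered under.

```python
# Hypothetical walkthrough of the users endpoints; the URL prefix is an assumption.
import requests

BASE = "http://web-api:4000"

# create_users validates firstName, lastName, email, and role; the rest are optional
payload = {
    "firstName": "Ada",
    "lastName": "Lovelace",
    "email": "ada@example.com",
    "role": "member",
}
resp = requests.post(f"{BASE}/create_users", json=payload)
assert resp.status_code == 201
user_id = resp.json()["user_id"]

# Filtered listing reuses the WHERE 1=1 pattern: only supplied params become AND clauses
print(requests.get(f"{BASE}/users", params={"role": "member"}).json())

# Single lookup, then clean up
print(requests.get(f"{BASE}/get_user/{user_id}").json())
requests.delete(f"{BASE}/delete_users/{user_id}")
```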
diff --git a/app/Dockerfile b/app/Dockerfile
index 6eb11bff2e..c4f7981358 100644
--- a/app/Dockerfile
+++ b/app/Dockerfile
@@ -5,7 +5,6 @@ WORKDIR /appcode
 RUN apt-get update && apt-get install -y \
     build-essential \
     curl \
-    software-properties-common \
     git \
     && rm -rf /var/lib/apt/lists/*
diff --git a/app/src/Home.py b/app/src/Home.py
index abe97588aa..2fd11aa661 100644
--- a/app/src/Home.py
+++ b/app/src/Home.py
@@ -1,79 +1,92 @@
-##################################################
-# This is the main/entry-point file for the
-# sample application for your project
-##################################################
-
-# Set up basic logging infrastructure
 import logging
 logging.basicConfig(format='%(filename)s:%(lineno)s:%(levelname)s -- %(message)s', level=logging.INFO)
 logger = logging.getLogger(__name__)
-
-# import the main streamlit library as well
-# as SideBarLinks function from src/modules folder
-import streamlit as st
 from modules.nav import SideBarLinks
+import streamlit as st
+import requests
-# streamlit supports regular and wide layout (how the controls
-# are organized/displayed on the screen).
-st.set_page_config(layout = 'wide')
-
-# If a user is at this page, we assume they are not
-# authenticated. So we change the 'authenticated' value
-# in the streamlit session_state to false.
+st.set_page_config(layout='wide')
 st.session_state['authenticated'] = False
-
-# Use the SideBarLinks function from src/modules/nav.py to control
-# the links displayed on the left-side panel.
-# IMPORTANT: ensure src/.streamlit/config.toml sets
-# showSidebarNavigation = false in the [client] section
 SideBarLinks(show_home=True)
-# ***************************************************
-# The major content of this page
-# ***************************************************
+# Header with better styling
+st.markdown("# 🎯 Welcome to GoalFlow!")
+st.markdown("### *What are we going to get done today?*")
+st.write("")
+
+# Center the user selection
+col1, col2, col3 = st.columns([1, 2, 1])
+
+with col2:
+    st.markdown("## 👥 User Profiles")
+    st.write("*Choose your role to get started*")
+    st.write("")
+
+    # Mock user ID mapping
+    MOCK_USER_IDS = {
+        "avery": 1,
+        "dr_alan": 2,
+        "jose": 3,
+        "jack": 4
+    }
+
+    # Profile buttons with emojis and descriptions
+    if st.button('🎨 Avery - Freelance Designer',
+                 type='primary',
+                 use_container_width=True,
+                 help="Manage creative projects and client work"):
+
+        # Set mock user ID for Avery
+        st.session_state['authenticated'] = True
+        st.session_state['user_id'] = MOCK_USER_IDS["avery"]
+
+        st.switch_page('pages/AveryHomePage.py')
+
+    st.write("")
+
-# set the title of the page and provide a simple prompt.
-logger.info("Loading the Home page of the app")
-st.title('CS 3200 Project Template')
-st.write('\n\n')
-# st.write('### Overview:')
-# st.write('\n')
-st.write('#### HI! As which user would you like to log in?')
+    if st.button('📚 Dr. Alan - Math Professor',
+                 type='primary',
+                 use_container_width=True,
+                 help="Research projects and academic tasks"):
+
+        # Set mock user ID for Dr. Alan
+        st.session_state['authenticated'] = True
+        st.session_state['user_id'] = MOCK_USER_IDS["dr_alan"]
-# For each of the user personas for which we are implementing
-# functionality, we put a button on the screen that the user
-# can click to MIMIC logging in as that mock user.
+        st.switch_page('pages/Dr.AlanHomePage.py')
+
+    st.write("")
+
-if st.button("Act as John, a Political Strategy Advisor",
-             type = 'primary',
-             use_container_width=True):
-    # when user clicks the button, they are now considered authenticated
-    st.session_state['authenticated'] = True
-    # we set the role of the current user
-    st.session_state['role'] = 'pol_strat_advisor'
-    # we add the first name of the user (so it can be displayed on
-    # subsequent pages).
-    st.session_state['first_name'] = 'John'
-    # finally, we ask streamlit to switch to another page, in this case, the
-    # landing page for this particular user type
-    logger.info("Logging in as Political Strategy Advisor Persona")
-    st.switch_page('pages/00_Pol_Strat_Home.py')
-if st.button('Act as Mohammad, an USAID worker',
-             type = 'primary',
-             use_container_width=True):
-    st.session_state['authenticated'] = True
-    st.session_state['role'] = 'usaid_worker'
-    st.session_state['first_name'] = 'Mohammad'
-    st.switch_page('pages/10_USAID_Worker_Home.py')
+    if st.button('🛠️ Jose - System Admin',
+                 type='primary',
+                 use_container_width=True,
+                 help="Manage app development and user support"):
+
+        # Set mock user ID for Jose
+        st.session_state['authenticated'] = True
+        st.session_state['user_id'] = MOCK_USER_IDS["jose"]
-if st.button('Act as System Administrator',
-             type = 'primary',
-             use_container_width=True):
-    st.session_state['authenticated'] = True
-    st.session_state['role'] = 'administrator'
-    st.session_state['first_name'] = 'SysAdmin'
-    st.switch_page('pages/20_Admin_Home.py')
+        st.switch_page('pages/JoseHomePage.py')
+
+    st.write("")
+
+    if st.button('💼 Jack - Financial Analyst',
+                 type='primary',
+                 use_container_width=True,
+                 help="Company goals and financial metrics"):
+
+        # Set mock user ID for Jack
+        st.session_state['authenticated'] = True
+        st.session_state['user_id'] = MOCK_USER_IDS["jack"]
+        st.switch_page('pages/JackHomePage.py')
+
+    st.write("---")
+
\ No newline at end of file
diff --git a/app/src/modules/nav.py b/app/src/modules/nav.py
index 8dae4b5710..829902efd4 100644
--- a/app/src/modules/nav.py
+++ b/app/src/modules/nav.py
@@ -1,114 +1,91 @@
+# modules/nav.py
 # Idea borrowed from https://github.com/fsmosca/sample-streamlit-authenticator
-
 # This file has function to add certain functionality to the left side bar of the app
 import streamlit as st
-
 #### ------------------------ General ------------------------
 def HomeNav():
     st.sidebar.page_link("Home.py", label="Home", icon="🏠")
-
 def AboutPageNav():
-    st.sidebar.page_link("pages/30_About.py", label="About", icon="🧠")
-
+    st.sidebar.page_link("pages/About.py", label="About", icon="🧠")
 #### ------------------------ Examples for Role of pol_strat_advisor ------------------------
 def PolStratAdvHomeNav():
-    st.sidebar.page_link(
-        "pages/00_Pol_Strat_Home.py", label="Political Strategist Home", icon="👤"
-    )
-
+    st.sidebar.page_link("pages/00_Pol_Strat_Home.py", label="Political Strategist Home", icon="👤")
 def WorldBankVizNav():
-    st.sidebar.page_link(
-        "pages/01_World_Bank_Viz.py", label="World Bank Visualization", icon="🏦"
-    )
-
+    st.sidebar.page_link("pages/01_World_Bank_Viz.py", label="World Bank Visualization", icon="🏦")
 def MapDemoNav():
     st.sidebar.page_link("pages/02_Map_Demo.py", label="Map Demonstration", icon="🗺️")
-
 ## ------------------------ Examples for Role of usaid_worker ------------------------
 def ApiTestNav():
     st.sidebar.page_link("pages/12_API_Test.py", label="Test the API", icon="🛜")
-
 def PredictionNav():
-    st.sidebar.page_link(
-        "pages/11_Prediction.py", label="Regression Prediction", icon="📈"
-    )
-
+    st.sidebar.page_link("pages/11_Prediction.py", label="Regression Prediction", icon="📈")
 def ClassificationNav():
-    st.sidebar.page_link(
-        "pages/13_Classification.py", label="Classification Demo", icon="🌺"
-    )
-
+    st.sidebar.page_link("pages/13_Classification.py", label="Classification Demo", icon="🌺")
 def NgoDirectoryNav():
     st.sidebar.page_link("pages/14_NGO_Directory.py", label="NGO Directory", icon="📁")
-
 def AddNgoNav():
     st.sidebar.page_link("pages/15_Add_NGO.py", label="Add New NGO", icon="➕")
-
 #### ------------------------ System Admin Role ------------------------
 def AdminPageNav():
     st.sidebar.page_link("pages/20_Admin_Home.py", label="System Admin", icon="🖥️")
-    st.sidebar.page_link(
-        "pages/21_ML_Model_Mgmt.py", label="ML Model Management", icon="🏢"
-    )
-
+    st.sidebar.page_link("pages/21_ML_Model_Mgmt.py", label="ML Model Management", icon="🏢")
# --------------------------------Links Function -----------------------------------------------
-def SideBarLinks(show_home=False):
+def SideBarLinks(show_home: bool = False):
     """
-    This function handles adding links to the sidebar of the app based upon the logged-in user's role, which was put in the streamlit session_state object when logging in.
+    Adds links to the sidebar based on the logged-in user's role.
+    Uses safe defaults so missing session keys never crash the app.
     """
-    # add a logo to the sidebar always
-    st.sidebar.image("assets/logo.png", width=150)
+    # ---- Safe defaults (important) ----
+    st.session_state.setdefault("authenticated", False)
+    st.session_state.setdefault("role", "guest")
+    st.session_state.setdefault("user_id", None)
+
+    role = st.session_state.get("role", "guest")
+    is_auth = bool(st.session_state.get("authenticated", False))
-    # If there is no logged in user, redirect to the Home (Landing) page
-    if "authenticated" not in st.session_state:
-        st.session_state.authenticated = False
-        st.switch_page("Home.py")
+    #st.sidebar.image("assets/logo.png", width=150)
     if show_home:
-        # Show the Home page link (the landing page)
         HomeNav()
-    # Show the other page navigators depending on the users' role.
-    if st.session_state["authenticated"]:
-
-        # Show World Bank Link and Map Demo Link if the user is a political strategy advisor role.
- if st.session_state["role"] == "pol_strat_advisor": + # Role-based links + if is_auth: + if role == "pol_strat_advisor": PolStratAdvHomeNav() WorldBankVizNav() MapDemoNav() - # If the user role is usaid worker, show the Api Testing page - if st.session_state["role"] == "usaid_worker": + if role == "usaid_worker": PredictionNav() ApiTestNav() ClassificationNav() NgoDirectoryNav() AddNgoNav() - # If the user is an administrator, give them access to the administrator pages - if st.session_state["role"] == "administrator": + if role == "administrator": AdminPageNav() - # Always show the About page at the bottom of the list of links + # Always show About AboutPageNav() - if st.session_state["authenticated"]: - # Always show a logout button if there is a logged in user + # Logout + if is_auth: if st.sidebar.button("Logout"): - del st.session_state["role"] - del st.session_state["authenticated"] + for k in ("role", "authenticated", "user_id"): + if k in st.session_state: + del st.session_state[k] st.switch_page("Home.py") diff --git a/app/src/pages/00_Pol_Strat_Home.py b/app/src/pages/00_Pol_Strat_Home.py deleted file mode 100644 index 3d02f25552..0000000000 --- a/app/src/pages/00_Pol_Strat_Home.py +++ /dev/null @@ -1,25 +0,0 @@ -import logging -logger = logging.getLogger(__name__) - -import streamlit as st -from modules.nav import SideBarLinks - -st.set_page_config(layout = 'wide') - -# Show appropriate sidebar links for the role of the currently logged in user -SideBarLinks() - -st.title(f"Welcome Political Strategist, {st.session_state['first_name']}.") -st.write('') -st.write('') -st.write('### What would you like to do today?') - -if st.button('View World Bank Data Visualization', - type='primary', - use_container_width=True): - st.switch_page('pages/01_World_Bank_Viz.py') - -if st.button('View World Map Demo', - type='primary', - use_container_width=True): - st.switch_page('pages/02_Map_Demo.py') \ No newline at end of file diff --git a/app/src/pages/01_World_Bank_Viz.py b/app/src/pages/01_World_Bank_Viz.py deleted file mode 100644 index a34cbb1529..0000000000 --- a/app/src/pages/01_World_Bank_Viz.py +++ /dev/null @@ -1,41 +0,0 @@ -import logging -logger = logging.getLogger(__name__) -import pandas as pd -import streamlit as st -from streamlit_extras.app_logo import add_logo -import world_bank_data as wb -import matplotlib.pyplot as plt -import numpy as np -import plotly.express as px -from modules.nav import SideBarLinks - -# Call the SideBarLinks from the nav module in the modules directory -SideBarLinks() - -# set the header of the page -st.header('World Bank Data') - -# You can access the session state to make a more customized/personalized app experience -st.write(f"### Hi, {st.session_state['first_name']}.") - -# get the countries from the world bank data -with st.echo(code_location='above'): - countries:pd.DataFrame = wb.get_countries() - - st.dataframe(countries) - -# the with statment shows the code for this block above it -with st.echo(code_location='above'): - arr = np.random.normal(1, 1, size=100) - test_plot, ax = plt.subplots() - ax.hist(arr, bins=20) - - st.pyplot(test_plot) - - -with st.echo(code_location='above'): - slim_countries = countries[countries['incomeLevel'] != 'Aggregates'] - data_crosstab = pd.crosstab(slim_countries['region'], - slim_countries['incomeLevel'], - margins = False) - st.table(data_crosstab) diff --git a/app/src/pages/02_Map_Demo.py b/app/src/pages/02_Map_Demo.py deleted file mode 100644 index 5ca09a9633..0000000000 --- 
a/app/src/pages/02_Map_Demo.py +++ /dev/null @@ -1,104 +0,0 @@ -import logging -logger = logging.getLogger(__name__) -import streamlit as st -from streamlit_extras.app_logo import add_logo -import pandas as pd -import pydeck as pdk -from urllib.error import URLError -from modules.nav import SideBarLinks - -SideBarLinks() - -# add the logo -add_logo("assets/logo.png", height=400) - -# set up the page -st.markdown("# Mapping Demo") -st.sidebar.header("Mapping Demo") -st.write( - """This Mapping Demo is from the Streamlit Documentation. It shows how to use -[`st.pydeck_chart`](https://docs.streamlit.io/library/api-reference/charts/st.pydeck_chart) -to display geospatial data.""" -) - - -@st.cache_data -def from_data_file(filename): - url = ( - "http://raw.githubusercontent.com/streamlit/" - "example-data/master/hello/v1/%s" % filename - ) - return pd.read_json(url) - - -try: - ALL_LAYERS = { - "Bike Rentals": pdk.Layer( - "HexagonLayer", - data=from_data_file("bike_rental_stats.json"), - get_position=["lon", "lat"], - radius=200, - elevation_scale=4, - elevation_range=[0, 1000], - extruded=True, - ), - "Bart Stop Exits": pdk.Layer( - "ScatterplotLayer", - data=from_data_file("bart_stop_stats.json"), - get_position=["lon", "lat"], - get_color=[200, 30, 0, 160], - get_radius="[exits]", - radius_scale=0.05, - ), - "Bart Stop Names": pdk.Layer( - "TextLayer", - data=from_data_file("bart_stop_stats.json"), - get_position=["lon", "lat"], - get_text="name", - get_color=[0, 0, 0, 200], - get_size=15, - get_alignment_baseline="'bottom'", - ), - "Outbound Flow": pdk.Layer( - "ArcLayer", - data=from_data_file("bart_path_stats.json"), - get_source_position=["lon", "lat"], - get_target_position=["lon2", "lat2"], - get_source_color=[200, 30, 0, 160], - get_target_color=[200, 30, 0, 160], - auto_highlight=True, - width_scale=0.0001, - get_width="outbound", - width_min_pixels=3, - width_max_pixels=30, - ), - } - st.sidebar.markdown("### Map Layers") - selected_layers = [ - layer - for layer_name, layer in ALL_LAYERS.items() - if st.sidebar.checkbox(layer_name, True) - ] - if selected_layers: - st.pydeck_chart( - pdk.Deck( - map_style="mapbox://styles/mapbox/light-v9", - initial_view_state={ - "latitude": 37.76, - "longitude": -122.4, - "zoom": 11, - "pitch": 50, - }, - layers=selected_layers, - ) - ) - else: - st.error("Please choose at least one layer above.") -except URLError as e: - st.error( - """ - **This demo requires internet access.** - Connection error: %s - """ - % e.reason - ) diff --git a/app/src/pages/10_USAID_Worker_Home.py b/app/src/pages/10_USAID_Worker_Home.py deleted file mode 100644 index 5b114865f8..0000000000 --- a/app/src/pages/10_USAID_Worker_Home.py +++ /dev/null @@ -1,31 +0,0 @@ -import logging -logger = logging.getLogger(__name__) - -import streamlit as st -from modules.nav import SideBarLinks - -st.set_page_config(layout = 'wide') - -# Show appropriate sidebar links for the role of the currently logged in user -SideBarLinks() - -st.title(f"Welcome USAID Worker, {st.session_state['first_name']}.") -st.write('') -st.write('') -st.write('### What would you like to do today?') - -if st.button('Predict Value Based on Regression Model', - type='primary', - use_container_width=True): - st.switch_page('pages/11_Prediction.py') - -if st.button('View the Simple API Demo', - type='primary', - use_container_width=True): - st.switch_page('pages/12_API_Test.py') - -if st.button("View Classification Demo", - type='primary', - use_container_width=True): - 
st.switch_page('pages/13_Classification.py') - \ No newline at end of file diff --git a/app/src/pages/11_Prediction.py b/app/src/pages/11_Prediction.py deleted file mode 100644 index ab470d65e6..0000000000 --- a/app/src/pages/11_Prediction.py +++ /dev/null @@ -1,35 +0,0 @@ -import logging - -logger = logging.getLogger(__name__) - -import streamlit as st -from modules.nav import SideBarLinks -import requests - -st.set_page_config(layout="wide") - -# Display the appropriate sidebar links for the role of the logged in user -SideBarLinks() - -st.title("Prediction with Regression") - -# create a 2 column layout -col1, col2 = st.columns(2) - -# add one number input for variable 1 into column 1 -with col1: - var_01 = st.number_input("Variable 01:", step=1) - -# add another number input for variable 2 into column 2 -with col2: - var_02 = st.number_input("Variable 02:", step=1) - -logger.info(f"var_01 = {var_01}") -logger.info(f"var_02 = {var_02}") - -# add a button to use the values entered into the number field to send to the -# prediction function via the REST API -if st.button("Calculate Prediction", type="primary", use_container_width=True): - results = requests.get(f"http://web-api:4000/prediction/{var_01}/{var_02}") - json_results = results.json() - st.dataframe(json_results) diff --git a/app/src/pages/12_API_Test.py b/app/src/pages/12_API_Test.py deleted file mode 100644 index cbf17b4cda..0000000000 --- a/app/src/pages/12_API_Test.py +++ /dev/null @@ -1,25 +0,0 @@ -import logging -logger = logging.getLogger(__name__) -import streamlit as st -import requests -from streamlit_extras.app_logo import add_logo -from modules.nav import SideBarLinks - -SideBarLinks() - -st.write("# Accessing a REST API from Within Streamlit") -""" -Simply retrieving data from a REST api running in a separate Docker Container. - -If the container isn't running, this will be very unhappy. But the Streamlit app -should not totally die. -""" - -data = {} -try: - data = requests.get('http://web-api:4000/data').json() -except: - st.write("**Important**: Could not connect to sample api, so using dummy data.") - data = {"a":{"b": "123", "c": "hello"}, "z": {"b": "456", "c": "goodbye"}} - -st.dataframe(data) diff --git a/app/src/pages/13_Classification.py b/app/src/pages/13_Classification.py deleted file mode 100644 index 4f4677a7cd..0000000000 --- a/app/src/pages/13_Classification.py +++ /dev/null @@ -1,64 +0,0 @@ -import logging -logger = logging.getLogger(__name__) -import streamlit as st -import pandas as pd -from sklearn import datasets -from sklearn.ensemble import RandomForestClassifier -from streamlit_extras.app_logo import add_logo -from modules.nav import SideBarLinks - -SideBarLinks() - -st.write(""" -# Simple Iris Flower Prediction App - -This example is borrowed from [The Data Professor](https://github.com/dataprofessor/streamlit_freecodecamp/tree/main/app_7_classification_iris) - -This app predicts the **Iris flower** type! -""") - -st.sidebar.header('User Input Parameters') - -# Below, different user inputs are defined. When you view the UI, -# notice that they are in the sidebar. 
-def user_input_features(): - sepal_length = st.sidebar.slider('Sepal length', 4.3, 7.9, 5.4) - sepal_width = st.sidebar.slider('Sepal width', 2.0, 4.4, 3.4) - petal_length = st.sidebar.slider('Petal length', 1.0, 6.9, 1.3) - petal_width = st.sidebar.slider('Petal width', 0.1, 2.5, 0.2) - data = {'sepal_length': sepal_length, - 'sepal_width': sepal_width, - 'petal_length': petal_length, - 'petal_width': petal_width} - features = pd.DataFrame(data, index=[0]) - return features - -# get a data frame with the input features from the user -df = user_input_features() - -# show the exact values the user entered in a table. -st.subheader('User Input parameters') -st.write(df) - -# load the standard iris dataset and generate a -# random forest classifier -iris = datasets.load_iris() -X = iris.data -Y = iris.target -clf = RandomForestClassifier() - -# fit the model -clf.fit(X, Y) - -# use the values entered by the user for prediction -prediction = clf.predict(df) -prediction_proba = clf.predict_proba(df) - -st.subheader('Class labels and their corresponding index number') -st.write(iris.target_names) - -st.subheader('Prediction') -st.write(iris.target_names[prediction]) - -st.subheader('Prediction Probability') -st.write(prediction_proba) \ No newline at end of file diff --git a/app/src/pages/14_NGO_Directory.py b/app/src/pages/14_NGO_Directory.py deleted file mode 100644 index ac8cd2780b..0000000000 --- a/app/src/pages/14_NGO_Directory.py +++ /dev/null @@ -1,83 +0,0 @@ -import streamlit as st -import requests -from streamlit_extras.app_logo import add_logo -from modules.nav import SideBarLinks - -# Initialize sidebar -SideBarLinks() - -st.title("NGO Directory") - -# API endpoint -API_URL = "http://web-api:4000/ngo/ngos" - -# Create filter columns -col1, col2, col3 = st.columns(3) - -# Get unique values for filters from the API -try: - response = requests.get(API_URL) - if response.status_code == 200: - ngos = response.json() - - # Extract unique values for filters - countries = sorted(list(set(ngo["Country"] for ngo in ngos))) - focus_areas = sorted(list(set(ngo["Focus_Area"] for ngo in ngos))) - founding_years = sorted(list(set(ngo["Founding_Year"] for ngo in ngos))) - - # Create filters - with col1: - selected_country = st.selectbox("Filter by Country", ["All"] + countries) - - with col2: - selected_focus = st.selectbox("Filter by Focus Area", ["All"] + focus_areas) - - with col3: - selected_year = st.selectbox( - "Filter by Founding Year", - ["All"] + [str(year) for year in founding_years], - ) - - # Build query parameters - params = {} - if selected_country != "All": - params["country"] = selected_country - if selected_focus != "All": - params["focus_area"] = selected_focus - if selected_year != "All": - params["founding_year"] = selected_year - - # Get filtered data - filtered_response = requests.get(API_URL, params=params) - if filtered_response.status_code == 200: - filtered_ngos = filtered_response.json() - - # Display results count - st.write(f"Found {len(filtered_ngos)} NGOs") - - # Create expandable rows for each NGO - for ngo in filtered_ngos: - with st.expander(f"{ngo['Name']} ({ngo['Country']})"): - col1, col2 = st.columns(2) - - with col1: - st.write("**Basic Information**") - st.write(f"**Country:** {ngo['Country']}") - st.write(f"**Founded:** {ngo['Founding_Year']}") - st.write(f"**Focus Area:** {ngo['Focus_Area']}") - - with col2: - st.write("**Contact Information**") - st.write(f"**Website:** [{ngo['Website']}]({ngo['Website']})") - - # Add a button to view full profile - if 
st.button(f"View Full Profile", key=f"view_{ngo['NGO_ID']}"): - st.session_state["selected_ngo_id"] = ngo["NGO_ID"] - st.switch_page("pages/16_NGO_Profile.py") - - else: - st.error("Failed to fetch NGO data from the API") - -except requests.exceptions.RequestException as e: - st.error(f"Error connecting to the API: {str(e)}") - st.info("Please ensure the API server is running on http://web-api:4000") diff --git a/app/src/pages/15_Add_NGO.py b/app/src/pages/15_Add_NGO.py deleted file mode 100644 index d9decc6e3d..0000000000 --- a/app/src/pages/15_Add_NGO.py +++ /dev/null @@ -1,63 +0,0 @@ -import streamlit as st -import requests -from streamlit_extras.app_logo import add_logo -from modules.nav import SideBarLinks - -# Initialize sidebar -SideBarLinks() - -st.title("Add New NGO") - -# API endpoint -API_URL = "http://web-api:4000/ngo/ngos" - -# Create a form for NGO details -with st.form("add_ngo_form"): - st.subheader("NGO Information") - - # Required fields - name = st.text_input("Organization Name *") - country = st.text_input("Country *") - founding_year = st.number_input( - "Founding Year *", min_value=1800, max_value=2024, value=2024 - ) - focus_area = st.text_input("Focus Area *") - website = st.text_input("Website URL *") - - # Form submission button - submitted = st.form_submit_button("Add NGO") - - if submitted: - # Validate required fields - if not all([name, country, founding_year, focus_area, website]): - st.error("Please fill in all required fields marked with *") - else: - # Prepare the data for API - ngo_data = { - "Name": name, - "Country": country, - "Founding_Year": int(founding_year), - "Focus_Area": focus_area, - "Website": website, - } - - try: - # Send POST request to API - response = requests.post(API_URL, json=ngo_data) - - if response.status_code == 201: - st.success("NGO added successfully!") - # Clear the form - st.rerun() - else: - st.error( - f"Failed to add NGO: {response.json().get('error', 'Unknown error')}" - ) - - except requests.exceptions.RequestException as e: - st.error(f"Error connecting to the API: {str(e)}") - st.info("Please ensure the API server is running") - -# Add a button to return to the NGO Directory -if st.button("Return to NGO Directory"): - st.switch_page("pages/14_NGO_Directory.py") diff --git a/app/src/pages/16_NGO_Profile.py b/app/src/pages/16_NGO_Profile.py deleted file mode 100644 index ae15f44745..0000000000 --- a/app/src/pages/16_NGO_Profile.py +++ /dev/null @@ -1,87 +0,0 @@ -import streamlit as st -import requests -from streamlit_extras.app_logo import add_logo -from modules.nav import SideBarLinks - -# Initialize sidebar -SideBarLinks() - -st.title("NGO Profile") - -# Get NGO ID from session state -ngo_id = st.session_state.get("selected_ngo_id") - -if ngo_id is None: - st.error("No NGO selected") - st.button( - "Return to NGO Directory", - on_click=lambda: st.switch_page("pages/14_NGO_Directory.py"), - ) -else: - # API endpoint - API_URL = f"http://web-api:4000/ngo/ngos/{ngo_id}" - - try: - # Fetch NGO details - response = requests.get(API_URL) - - if response.status_code == 200: - ngo = response.json() - - # Display basic information - st.header(ngo["Name"]) - - col1, col2 = st.columns(2) - - with col1: - st.subheader("Basic Information") - st.write(f"**Country:** {ngo['Country']}") - st.write(f"**Founded:** {ngo['Founding_Year']}") - st.write(f"**Focus Area:** {ngo['Focus_Area']}") - st.write(f"**Website:** [{ngo['Website']}]({ngo['Website']})") - - # Display projects - if ngo.get("projects"): - st.subheader("Projects") - for 
project in ngo["projects"]: - with st.expander( - f"{project['Project_Name']} ({project['Focus_Area']})" - ): - budget = float(project["Budget"]) if project["Budget"] else 0.0 - st.write(f"**Budget:** ${budget:,.2f}") - st.write(f"**Start Date:** {project['Start_Date']}") - st.write(f"**End Date:** {project['End_Date']}") - else: - st.info("No projects found for this NGO") - - # Display donors - if ngo.get("donors"): - st.subheader("Donors") - for donor in ngo["donors"]: - with st.expander(f"{donor['Donor_Name']} ({donor['Donor_Type']})"): - donation = ( - float(donor["Donation_Amount"]) - if donor["Donation_Amount"] - else 0.0 - ) - st.write(f"**Donation Amount:** ${donation:,.2f}") - else: - st.info("No donors found for this NGO") - - elif response.status_code == 404: - st.error("NGO not found") - else: - st.error( - f"Error fetching NGO data: {response.json().get('error', 'Unknown error')}" - ) - - except requests.exceptions.RequestException as e: - st.error(f"Error connecting to the API: {str(e)}") - st.info("Please ensure the API server is running") - -# Add a button to return to the NGO Directory -if st.button("Return to NGO Directory"): - # Clear the selected NGO ID from session state - if "selected_ngo_id" in st.session_state: - del st.session_state["selected_ngo_id"] - st.switch_page("pages/14_NGO_Directory.py") diff --git a/app/src/pages/20_Admin_Home.py b/app/src/pages/20_Admin_Home.py deleted file mode 100644 index 0dbd0f36b4..0000000000 --- a/app/src/pages/20_Admin_Home.py +++ /dev/null @@ -1,17 +0,0 @@ -import logging -logger = logging.getLogger(__name__) - -import streamlit as st -from modules.nav import SideBarLinks -import requests - -st.set_page_config(layout = 'wide') - -SideBarLinks() - -st.title('System Admin Home Page') - -if st.button('Update ML Models', - type='primary', - use_container_width=True): - st.switch_page('pages/21_ML_Model_Mgmt.py') \ No newline at end of file diff --git a/app/src/pages/21_ML_Model_Mgmt.py b/app/src/pages/21_ML_Model_Mgmt.py deleted file mode 100644 index 6744e9274e..0000000000 --- a/app/src/pages/21_ML_Model_Mgmt.py +++ /dev/null @@ -1,28 +0,0 @@ -import logging -logger = logging.getLogger(__name__) -import streamlit as st -from modules.nav import SideBarLinks -import requests - -st.set_page_config(layout = 'wide') - -SideBarLinks() - -st.title('App Administration Page') - -st.write('\n\n') -st.write('## Model 1 Maintenance') - -st.button("Train Model 01", - type = 'primary', - use_container_width=True) - -st.button('Test Model 01', - type = 'primary', - use_container_width=True) - -if st.button('Model 1 - get predicted value for 10, 25', - type = 'primary', - use_container_width=True): - results = requests.get('http://web-api:4000/prediction/10/25').json() - st.dataframe(results) diff --git a/app/src/pages/30_About.py b/app/src/pages/30_About.py deleted file mode 100644 index cec979639d..0000000000 --- a/app/src/pages/30_About.py +++ /dev/null @@ -1,22 +0,0 @@ -import streamlit as st -from streamlit_extras.app_logo import add_logo -from modules.nav import SideBarLinks - -SideBarLinks() - -st.write("# About this App") - -st.markdown( - """ - This is a demo app for Data and Software in International Government and Politics Dialogue 2025 Project Course. - - The goal of this demo is to provide information on the tech stack - being used as well as demo some of the features of the various platforms. - - Stay tuned for more information and features to come! 
-    """
-)
-
-# Add a button to return to home page
-if st.button("Return to Home", type="primary"):
-    st.switch_page("Home.py")
diff --git a/app/src/pages/About.py b/app/src/pages/About.py
new file mode 100644
index 0000000000..1bcf1b3e62
--- /dev/null
+++ b/app/src/pages/About.py
@@ -0,0 +1,22 @@
+import streamlit as st
+from streamlit_extras.app_logo import add_logo
+from modules.nav import SideBarLinks
+
+SideBarLinks()
+
+st.write("# About this App")
+
+st.markdown(
+    """
+    This is a demo app showcasing the data and software behind GoalFlow, built for the Summer 2 2025 Intro to Databases course project.
+
+    The goal of this demo is to show the capabilities of the tech stack
+    being used, as well as to demo some features of the platform.
+
+    Stay tuned for more information and features!
+    """
+)
+
+# Add a button to return to home page
+if st.button("Return to Home", type="primary"):
+    st.switch_page("Home.py")
diff --git a/app/src/pages/Add_New_Project.py b/app/src/pages/Add_New_Project.py
new file mode 100644
index 0000000000..86527639f9
--- /dev/null
+++ b/app/src/pages/Add_New_Project.py
@@ -0,0 +1,45 @@
+import streamlit as st
+import requests
+
+from modules.nav import SideBarLinks
+SideBarLinks(show_home=True)
+
+c1, c2 = st.columns([4, 1])
+with c1:
+    st.title("Add New Project")
+with c2:
+    if st.button("Homepage", help="Return Home", type="primary", use_container_width=True):
+        st.switch_page('Home.py')
+
+
+# Form inputs
+userID = st.text_input("User ID :red[*]")
+title = st.text_input("Title :red[*]")
+schedule = st.text_input("Deadline (YYYY-MM-DD) :red[*]")
+tagID = st.text_input("Tag ID")
+notes = st.text_area("Notes")
+status = st.selectbox("Status", ['ON ICE', 'PLANNED', 'ACTIVE', 'ARCHIVED'])
+priority = st.slider("Priority", 1, 4, 4)
+
+
+if st.button("Submit"):
+    project_data = {
+        "userID": userID,
+        "tagID": tagID,
+        "title": title,
+        "notes": notes,
+        "status": status,
+        "priority": priority,
+        "schedule": schedule
+    }
+
+    try:
+        # POST to the goals endpoint on the API container
+        response = requests.post("http://web-api:4000/goals/create", json=project_data)
+
+        # the other create routes in this API return 201, so accept both
+        if response.status_code in (200, 201):
+            st.success("Project added successfully!")
+        else:
+            st.error(f"Failed to add project: {response.text}")
+    except Exception as e:
+        st.error(f"Error: {e}")
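Several pages call `http://web-api:4000/goals/...` routes (`/goals/create`, `/goals/user/<id>/active_and_priority`, `/goals/<id>/complete`, `/goals/<id>/delete`), but the goals blueprint itself isn't part of this diff. For reference while reviewing, here is a minimal sketch of what the create route could look like following the same conventions as the tags and users blueprints above; the table and column names are assumptions inferred from the Add_New_Project.py payload.

```python
# Hypothetical goals blueprint sketch (not in this diff); mirrors the
# conventions of tags_routes.py / users_routes.py. Column names are
# inferred from the Add_New_Project.py payload and may not match the schema.
from flask import Blueprint, jsonify, request
from backend.db_connection import db
from mysql.connector import Error

goals = Blueprint("goals", __name__)

@goals.route("/create", methods=["POST"])
def create_goal():
    try:
        data = request.get_json()
        for field in ("userID", "title", "schedule"):
            if field not in data:
                return jsonify({"error": f"Missing required field: {field}"}), 400

        cursor = db.get_db().cursor()
        cursor.execute(
            """
            INSERT INTO goals (userId, title, notes, status, priority, schedule)
            VALUES (%s, %s, %s, %s, %s, %s)
            """,
            (
                data["userID"],
                data["title"],
                data.get("notes"),
                data.get("status", "PLANNED"),
                data.get("priority", 4),
                data["schedule"],
            ),
        )
        db.get_db().commit()
        new_goal_id = cursor.lastrowid
        cursor.close()
        return jsonify({"message": "goal created successfully", "goal_id": new_goal_id}), 201
    except Error as e:
        return jsonify({"error": str(e)}), 500
```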
diff --git a/app/src/pages/Archive.py b/app/src/pages/Archive.py
new file mode 100644
index 0000000000..8c5e99f9d9
--- /dev/null
+++ b/app/src/pages/Archive.py
@@ -0,0 +1,117 @@
+import logging
+import streamlit as st
+import requests
+from datetime import datetime, date, time
+
+logging.basicConfig(
+    format="%(filename)s:%(lineno)s:%(levelname)s -- %(message)s",
+    level=logging.INFO,
+)
+logger = logging.getLogger(__name__)
+
+API_URL = "http://web-api:4000"
+
+# --------------------------
+# helpers
+# --------------------------
+def _parse_schedule(s: str | None):
+    if not s:
+        return None
+    for fmt in (
+        "%a, %d %b %Y %H:%M:%S %Z",  # "Fri, 15 Aug 2025 00:00:00 GMT"
+        "%Y-%m-%d",
+        "%Y-%m-%d %H:%M:%S",
+    ):
+        try:
+            return datetime.strptime(s, fmt).date()
+        except Exception:
+            continue
+    try:
+        return datetime.strptime(s.split(" GMT")[0], "%a, %d %b %Y %H:%M:%S").date()
+    except Exception:
+        return None
+
+
+def _due_label(d):
+    if not d:
+        return "No time of completion"
+    days = (d - date.today()).days
+    prefix = "D-" if days >= 0 else "D+"
+    return f"{prefix}{abs(days)} · {d.strftime('%b %d, %Y')}"
+
+def fetch_archived_goals():
+    try:
+        r = requests.get(f"{API_URL}/goals/archive", timeout=5)
+        if r.status_code == 200:
+            return r.json()
+        st.error(f"Failed to load archived goals: {r.status_code}")
+    except Exception as e:
+        st.error(f"Error contacting API: {e}")
+    return []
+
+def try_put(url: str):
+    try:
+        resp = requests.put(url, timeout=5, allow_redirects=False)
+        return resp.status_code, (resp.text or "")
+    except Exception as e:
+        return None, str(e)
+
+def get_first_bug_id():
+    try:
+        r = requests.get(f"{API_URL}/support/bugs", timeout=5)
+        if r.status_code == 200:
+            data = r.json()
+            if isinstance(data, list) and data:
+                item = data[0]
+                if isinstance(item, dict):
+                    return item.get("id") or item.get("bug_id")
+        return None
+    except Exception:
+        return None
+
+# --------------------------
+# UI
+# --------------------------
+st.set_page_config(layout="wide", page_title="Avery — Home")
+
+
+
+col1, col2 = st.columns([2, 1])
+
+# ========== LEFT: Archived Projects ==========
+with col1:
+    title_col1, title_col2 = st.columns([6,1])
+    with title_col1:
+        st.title("Archive")
+    with title_col2:
+        if st.button("Avery's Homepage",
+                     type='primary',
+                     help="Return to Avery's tasks"):
+            st.switch_page('pages/AveryHomePage.py')
+    st.write("---")
+
+    goals = fetch_archived_goals()
+
+    if not goals:
+        st.info("No archived projects.")
+    else:
+        def _sort_key(g):
+            sched = _parse_schedule(g.get("completedAt"))
+            return (sched is None, sched or date.max, (g.get("title") or "").lower())
+
+        for g in sorted(goals, key=_sort_key):
+            gid = g.get("id") or g.get("goal_id") or g.get("goalsid")
+            title = g.get("title") or "Untitled"
+            notes = (g.get("notes") or "").strip()
+            sched = _parse_schedule(g.get("completedAt"))
+            comp_date = _due_label(sched)
+
+            col1, col2 = st.columns([3,1])
+            with col1:
+                st.write(f"**{title}**")
+                if notes:
+                    st.write(notes)
+            with col2:
+                st.write("Archived at:")
+                st.write(comp_date)
+            st.write("---")
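The `_parse_schedule` helper above (duplicated on the Avery page below) hand-rolls several formats to cope with the RFC 1123 timestamps that Flask's `jsonify` emits for MySQL dates. The standard library can parse that format directly; a possible simplification, not part of this PR:

```python
# Possible simplification of _parse_schedule using the standard library's
# RFC 2822/1123 parser; a sketch only, not part of this PR.
from datetime import datetime
from email.utils import parsedate_to_datetime

def parse_schedule(s: str | None):
    if not s:
        return None
    try:
        # Handles "Fri, 15 Aug 2025 00:00:00 GMT" directly
        return parsedate_to_datetime(s).date()
    except (TypeError, ValueError):
        pass
    for fmt in ("%Y-%m-%d", "%Y-%m-%d %H:%M:%S"):
        try:
            return datetime.strptime(s, fmt).date()
        except ValueError:
            continue
    return None
```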
diff --git a/app/src/pages/AveryHomePage.py b/app/src/pages/AveryHomePage.py
new file mode 100644
index 0000000000..b391c3f197
--- /dev/null
+++ b/app/src/pages/AveryHomePage.py
@@ -0,0 +1,235 @@
+# app/pages/AveryHomePage.py
+
+import streamlit as st
+st.set_page_config(layout="wide", page_title="Avery — Home")
+
+import logging
+import requests
+from datetime import datetime, date
+
+logging.basicConfig(
+    format="%(filename)s:%(lineno)s:%(levelname)s -- %(message)s",
+    level=logging.INFO,
+)
+logger = logging.getLogger(__name__)
+
+API_URL = "http://web-api:4000"
+
+# ---- Session defaults ----
+st.session_state.setdefault("authenticated", False)
+st.session_state.setdefault("role", "guest")
+st.session_state.setdefault("user_id", 1)
+
+# Sidebar: user switcher
+st.sidebar.write("## User")
+new_uid = st.sidebar.number_input("user_id", min_value=1, value=int(st.session_state["user_id"]), step=1)
+if new_uid != st.session_state["user_id"]:
+    st.session_state["user_id"] = int(new_uid)
+    st.rerun()
+user_id = int(st.session_state["user_id"])
+
+# --------------------------
+# Helpers
+# --------------------------
+def _parse_schedule(s: str | None):
+    if not s:
+        return None
+    for fmt in (
+        "%a, %d %b %Y %H:%M:%S %Z",
+        "%Y-%m-%d",
+        "%Y-%m-%d %H:%M:%S",
+    ):
+        try:
+            return datetime.strptime(s, fmt).date()
+        except Exception:
+            continue
+    try:
+        return datetime.strptime(s.split(" GMT")[0], "%a, %d %b %Y %H:%M:%S").date()
+    except Exception:
+        return None
+
+def _due_label(d):
+    if not d:
+        return "No deadline"
+    days = (d - date.today()).days
+    prefix = "D-" if days >= 0 else "D+"
+    return f"{prefix}{abs(days)} · {d.strftime('%b %d, %Y')}"
+
+def _coerce_goals_list(data):
+    if isinstance(data, list):
+        return data
+    if isinstance(data, dict):
+        for key in ("goals", "items", "results", "data"):
+            if isinstance(data.get(key), list):
+                return data[key]
+    return []
+
+def _get_json_list(url, tried):
+    try:
+        r = requests.get(url, timeout=5)
+        tried.append((url, r.status_code, r.text[:300]))
+        if r.status_code == 200:
+            return _coerce_goals_list(r.json())
+        return []
+    except Exception as e:
+        tried.append((url, "EXC", str(e)))
+        return []
+
+def fetch_active_goals(uid: int):
+    tried = []
+    base = f"{API_URL}/goals"
+    goals = _get_json_list(f"{base}/user/{uid}/active_and_priority", tried)
+
+    # Normalize date-ish field into "schedule"
+    for g in goals:
+        if g.get("schedule") is None:
+            for k in ("due", "due_date", "deadline", "date", "scheduled_for"):
+                if k in g and g[k]:
+                    g["schedule"] = g[k]
+                    break
+
+    st.session_state["__goals_debug__"] = tried
+    st.session_state["__goals_sample__"] = goals[:3] if isinstance(goals, list) else []
+    return goals
+
+def scan_users_for_goals(start_uid=1, end_uid=20):
+    report = []
+    for uid in range(start_uid, end_uid + 1):
+        tried = []
+        items = _get_json_list(f"{API_URL}/goals/user/{uid}/active_and_priority", tried)
+        count = len(items) if isinstance(items, list) else 0
+        report.append({"user_id": uid, "count": count})
+    return report
+
+# --------------------------
+# UI
+# --------------------------
+st.title("🎨 Avery — Projects")
+
+col1, col2 = st.columns([2, 1])
+
+# LEFT: Active Projects + Archive
+with col1:
+    title_col1, title_col2, title_col3, title_col4 = st.columns([4, 1, 1, 1])
+    with title_col1:
+        st.write("### Active Projects")
+    with title_col2:
+        if st.button('Archive', type='primary', use_container_width=True, help="View archived tasks"):
+            st.switch_page('pages/Archive.py')
+    with title_col3:
+        if st.button("Homepage", type='primary', use_container_width=True, help="Return Home"):
+            st.switch_page('Home.py')
+    with title_col4:
+        if st.button("Add Goal", type='primary', use_container_width=True, help="Add Goal"):
+            st.switch_page('pages/Add_New_Project.py')
+
+    goals = fetch_active_goals(user_id)
+
+    st.write("---")
+    h1, h2, h3 = st.columns([2, 1, 1])
+    h1.write("**Project**")
+    h2.write("**Due**")
+    h3.write("**Actions**")
+    st.write("---")
+
+    if not goals:
+        st.info("No active projects for this user on /goals/user/{uid}/active_and_priority.")
+        st.caption("Use the sidebar to try a different user_id, or create a goal that is active & priority for this user.")
+    else:
+        def _sort_key(g):
+            sched = _parse_schedule(g.get("schedule"))
+            return (sched is None, sched or date.max, (g.get("title") or "").lower())
+
+        for g in sorted(goals, key=_sort_key):
+            gid = g.get("id") or g.get("goal_id") or g.get("goalsid")
+            title = g.get("title") or "Untitled"
+            notes = (g.get("notes") or "").strip()
+            sched = _parse_schedule(g.get("schedule"))
+            due_str = _due_label(sched)
+
+            with st.container():
+                c1, c2, c3 = st.columns([2, 1, 1])
+                with c1:
+                    st.write(f"**{title}**")
+                    if notes:
+                        st.write(notes)
+                with c2:
+                    st.write(due_str)
+                with c3:
+                    if st.button("Archive", key=f"archive_{gid}", use_container_width=True):
+                        if gid is None:
+                            st.error("Archive failed: missing goal id")
+                        else:
+                            resp = requests.put(f'{API_URL}/goals/{gid}/complete', timeout=5)
+                            if resp.status_code == 200:
+                                st.success("Archived.")
+                                st.rerun()
+                            else:
+                                st.error(f"Archive failed ({resp.status_code})")
+
+                    if st.button("Delete", key=f"delete_{gid}", use_container_width=True):
+                        if gid is None:
+                            st.error("Delete failed: missing goal id")
+                        else:
+                            resp = requests.delete(f'{API_URL}/goals/{gid}/delete', timeout=5)
+                            if resp.status_code == 200:
+                                st.success("Deleted.")
+                                st.rerun()
+                            else:
+                                st.error(f"Delete failed ({resp.status_code})")
+
+            st.write("---")
+
+# RIGHT: Daily Log + Scanner
+with col2:
+    st.write("### Daily Log")
+
+    if "habit_logs" not in st.session_state:
+        st.session_state["habit_logs"] = []
+
+    with st.form("habit_form", clear_on_submit=True):
+        uid = st.text_input("User ID", value=str(user_id))
+        title = st.text_input("Title")
+        notes = st.text_area("Notes")
+        submitted = st.form_submit_button("Log")
+
+    if submitted:
+        payload = {
+            "uid": uid.strip(),
+            "title": title.strip(),
+            "notes": notes.strip() or None,
+        }
+        try:
+            resp = requests.post(f"{API_URL}/habits/create", json=payload, timeout=5)
+            if 200 <= resp.status_code < 300:
+                st.success("Logged.")
+            else:
+                st.session_state["habit_logs"].append(payload)
+                st.warning("Logged locally (server endpoint not ready).")
+        except Exception:
+            st.session_state["habit_logs"].append(payload)
+            st.warning("Logged locally (server unreachable).")
+        st.write(payload)
+
+    if st.session_state["habit_logs"]:
+        st.write("Recent Logs")
+        for h in st.session_state["habit_logs"][-5:][::-1]:
+            st.markdown(f"- {h['title']}")
+
+    st.write("---")
+    st.write("### Goal Scanner")
+    start_uid = st.number_input("Scan from user_id", min_value=1, value=1, step=1)
+    end_uid = st.number_input("Scan to user_id", min_value=start_uid, value=max(start_uid, 10), step=1)
+    if st.button("Scan"):
+        rows = scan_users_for_goals(start_uid, end_uid)
+        st.write(rows)
+
+# Debug expander
+with st.expander("🔎 Debug — Goals API"):
+    tried = st.session_state.get("__goals_debug__", [])
+    sample = st.session_state.get("__goals_sample__", [])
+    st.write("Tried requests (url, status, preview):")
+    for url, status, preview in tried:
+        st.code(f"{url} -> {status}\n{preview}", language="text")
+    st.write("Sample parsed items (first 3):")
+    st.json(sample)
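The Daily Log form above degrades gracefully: it tries the `/habits/create` endpoint and, if the server isn't ready, records the entry in `st.session_state` instead. That pattern is worth reusing across pages; a small generic sketch follows (the helper name `post_or_queue` is illustrative, not part of this PR).

```python
# Illustrative helper distilling the post-or-queue-locally pattern used by
# the Daily Log form; `post_or_queue` is a hypothetical name, not in the PR.
import requests
import streamlit as st

def post_or_queue(url: str, payload: dict, queue_key: str) -> bool:
    """POST payload; on any failure, queue it in session state and return False."""
    st.session_state.setdefault(queue_key, [])
    try:
        resp = requests.post(url, json=payload, timeout=5)
        if 200 <= resp.status_code < 300:
            return True
    except requests.RequestException:
        pass  # fall through and queue locally
    st.session_state[queue_key].append(payload)
    return False
```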
diff --git a/app/src/pages/Delete_Project.py b/app/src/pages/Delete_Project.py
new file mode 100644
index 0000000000..b5ce1c2dad
--- /dev/null
+++ b/app/src/pages/Delete_Project.py
@@ -0,0 +1,170 @@
+import logging
+logger = logging.getLogger(__name__)
+
+import streamlit as st
+import requests
+from modules.nav import SideBarLinks
+
+st.set_page_config(layout='wide')
+
+# SIDEBAR
+SideBarLinks()
+
+# BACK
+col_back, col_title = st.columns([1, 4])
+with col_back:
+    if st.button("← Back to Dashboard"):
+        st.switch_page('Home.py')
+
+with col_title:
+    st.title("🗑️ Delete Project")
+    st.write("*Permanently remove a project from your goals*")
+
+st.write("")
+
+# Warning message
+st.warning("⚠️ **Warning**: This action cannot be undone. The project will be permanently deleted from the database.")
+
+st.write("")
+
+# Main form
+with st.form("delete_project_form"):
+    st.write("### Select Project to Delete")
+
+    # Sample project dropdown (would be populated from the database)
+    project_options = [
+        "Select a project...",
+        "Increase Revenue by 5%",
+        "Complete Research Paper",
+        "App Feature Development",
+        "Statistical Analysis Study",
+        "Community Forum Enhancement",
+        "User Documentation Update"
+    ]
+
+    selected_project = st.selectbox(
+        "Choose the project you want to delete:",
+        project_options,
+        help="Select from your active projects"
+    )
+
+    st.write("")
+
+    # Project details (would also be populated from the database)
+    if selected_project != "Select a project...":
+        st.write("### Project Details")
+
+        # Show project info in a nice container
+        with st.container():
+            detail_col1, detail_col2 = st.columns(2)
+
+            with detail_col1:
+                st.write("**Project Name:**")
+                st.write(selected_project)
+                st.write("")
+                st.write("**Current Status:**")
+                st.write("In Progress")
+
+            with detail_col2:
+                st.write("**Created Date:**")
+                st.write("January 15, 2025")
+                st.write("")
+                st.write("**Progress:**")
+                st.progress(0.6)
+                st.write("60% Complete")
+
+        st.write("")
+
+        # Confirmation checkbox
+        confirm_delete = st.checkbox(
+            f"I understand that '{selected_project}' will be permanently deleted",
+            help="Check this box to confirm you want to delete this project"
+        )
+
+        st.write("")
+
+        # Deletion reason
+        deletion_reason = st.text_area(
+            "Reason for deletion (optional):",
+            placeholder="Project cancelled, duplicate entry, no longer relevant...",
+            help="This helps us improve the app"
+        )
+
+    else:
+        confirm_delete = False
+        deletion_reason = ""
+
+    st.write("")
+
+    # Form submission buttons
+    col1, col2, col3 = st.columns([1, 1, 1])
+
+    with col1:
+        cancel_button = st.form_submit_button(
+            "Cancel",
+            use_container_width=True
+        )
+
+    with col2:
+        # spacer column
+        pass
+
+    with col3:
+        delete_button = st.form_submit_button(
+            "🗑️ Delete Project",
+            type="primary",
+            use_container_width=True,
+            disabled=not confirm_delete or selected_project == "Select a project..."
+        )
+
+# Handle the form submission
+if cancel_button:
+    st.info("Operation cancelled. Returning to dashboard...")
+    st.switch_page('Home.py')
+
+if delete_button and confirm_delete and selected_project != "Select a project...":
+    # In a real app, you'd make an API call here to delete from the database.
+    # Example API call:
+    # try:
+    #     response = requests.delete(f"http://web-api:4000/goals/{project_id}")
+    #     if response.status_code == 200:
+    #         st.success("Project deleted successfully!")
+    #     else:
+    #         st.error("Failed to delete project. Please try again.")
+    # except Exception as e:
+    #     st.error(f"Error: {str(e)}")
+
+    # For now, just show a success message
+    st.success(f"✅ Project '{selected_project}' has been successfully deleted!")
+    st.balloons()
+
+    # Log the deletion reason if provided
+    if deletion_reason:
+        logger.info(f"Project deleted: {selected_project}. Reason: {deletion_reason}")
+
+    st.write("")
+    st.info("Redirecting to dashboard in 3 seconds...")
+
+    # Auto-redirect after success
+    # st.rerun()
+
+# Extra info section
+st.write("")
+st.write("---")
+
+with st.expander("ℹ️ What happens when I delete a project?"):
+    st.write("""
+    **When you delete a project:**
+    - The project is permanently removed from your active goals
+    - All associated tasks and subtasks are also deleted
+    - Progress data and history are permanently lost
+    - The project cannot be recovered after deletion
+
+    **Alternatives to deletion:**
+    - Mark the project as 'Completed' instead
+    - Archive the project for future reference
+    - Put the project 'On Hold' temporarily
+    """)
+
+st.write("")
+st.write("Need help? Contact support or return to your dashboard.")
\ No newline at end of file
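On the next page, the Update Priority button now sends the selected value in the request body (the original call passed no payload, so the server had no way to know the new priority). The goals blueprint is not in this diff, so here is a hedged sketch of what the receiving route could look like under that assumption; the table and column names are guesses.

```python
# Hypothetical counterpart to the front end's priority update; the goals
# blueprint is not part of this diff, and column names are assumptions.
from flask import Blueprint, jsonify, request
from backend.db_connection import db
from mysql.connector import Error

goals = Blueprint("goals", __name__)

@goals.route("/<int:goal_id>/priority", methods=["PUT"])
def update_priority(goal_id):
    try:
        data = request.get_json() or {}
        new_priority = data.get("priority")
        if new_priority not in ("critical", "high", "medium", "low"):
            return jsonify({"error": "priority must be critical/high/medium/low"}), 400

        cursor = db.get_db().cursor()
        cursor.execute(
            "UPDATE goals SET priority = %s WHERE id = %s",
            (new_priority, goal_id),
        )
        db.get_db().commit()
        affected = cursor.rowcount
        cursor.close()

        if affected == 0:
            return jsonify({"error": "goal not found"}), 404
        return jsonify({"message": "priority updated"}), 200
    except Error as e:
        return jsonify({"error": str(e)}), 500
```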
diff --git a/app/src/pages/Dr.AlanHomePage.py b/app/src/pages/Dr.AlanHomePage.py
new file mode 100644
index 0000000000..7852d402f5
--- /dev/null
+++ b/app/src/pages/Dr.AlanHomePage.py
@@ -0,0 +1,202 @@
+# ===== INITIAL IMPORTS ===== #
+# app/pages/Dr.AlanHomePage.py
+import logging
+
+logging.basicConfig(
+    format='%(filename)s:%(lineno)s:%(levelname)s -- %(message)s',
+    level=logging.INFO)
+logger = logging.getLogger(__name__)
+
+from modules.nav import SideBarLinks
+import streamlit as st
+import requests
+import pandas as pd
+import plotly.express as px
+
+
+
+
+
+# ===== UI LAYOUT ===== #
+# Left sidebar that links back to the Home Page / About Page.
+st.set_page_config(layout = 'wide')
+st.session_state['authenticated'] = False
+SideBarLinks(show_home=True)
+# if 'authenticated' not in st.session_state:
+#     st.session_state['authenticated'] = False
+# if "role" not in st.session_state:
+#     st.session_state["role"] = None  # or a sensible default like "guest"
+
+# Header
+st.title("📚 Good Morning, Dr. Alan!")
+st.write("*Project Dashboard for Math Research*")
+
+
+
+
+
+# ===== MAIN LAYOUT ===== #
+# Create main layout: left column (research projects) and right column (quick actions + charts)
+col1, col2 = st.columns([2, 1])
+
+with col1:
+    st.write("### 🔬 ACTIVE RESEARCH PROJECTS")
+
+    # --- Explicit mapping and display of Goals (Projects) ---
+    # Map only the attributes we want to display from the API response.
+
+    # Set the user_id based upon session state.
+    user_id = st.session_state.get("user_id")
+    if not user_id:
+        st.error("No user ID found. Please log in or select a user profile.")
+        st.stop()
+    # user_id = 2  # In case you refresh the page and it "logs you out" or something.
+
+    # PROJECTS (AKA: GOALS)
+    # response = requests.get(f'http://web-api:4000/goals/user/{user_id}/active_and_priority', timeout=5)
+    # st.write("Status Code:", response.status_code)
+    # st.write("Raw JSON:", response.text)
+
+    # # Now try to parse, letting any JSON errors bubble up
+    # projects = response.json()
+
+
+    # try:
+    #     projects = requests.get(f'http://web-api:4000/goals/user/{user_id}/active_and_priority', timeout=5).json()
+
+    #     # # Convert tuples to dictionaries
+    #     # columns = ["id", "title", "notes", "priority", "completed", "schedule"]
+    #     # projects = [dict(zip(columns, item)) for item in raw_projects]
+
+    # except Exception as e:
+    #     st.error(f"Could not fetch projects: {e}")
+    #     projects = []
+
+    try:
+        projects = requests.get(f'http://web-api:4000/goals/user/{user_id}/active_and_priority', timeout=5).json()
+    except Exception as e:
+        st.error(f"Could not fetch projects: {e}")
+        projects = []
+
+    projects = [
+        [
+            item.get("id"),         # 0 - goal_id
+            item.get("title"),      # 1 - title
+            item.get("notes"),      # 2 - notes/description
+            item.get("priority"),   # 3 - priority
+            item.get("completed"),  # 4 - completed
+        ]
+        for item in projects
+    ]
+
+    # HEADER
+    project_col1, project_col2, project_col3 = st.columns([2, 1, 1])
+    with project_col1:
+        st.write("**Project**")
+    with project_col2:
+        st.write("**Priority**")
+    with project_col3:
+        # st.markdown("**Change Priority**")
+        st.markdown("**Completion &** \n**Change Priority**")
+    st.write("---")
+
+    for project in projects:
+        project_id, title, notes, priority, completed = project
+
+        with st.container():
+            pc1, pc2, pc3 = st.columns([2, 1, 1])
+
+            # A. Project title + notes
+            with pc1:
+                st.write(f":red[**{title}**]")
+                st.write(notes)
+
+            # B. Priority with color coding
+            with pc2:
+                p = priority
+                if p == "critical":
+                    st.markdown(f":red[**🔴 Critical!**]")
+                elif p == "high":
+                    st.markdown(f":orange[**🟠 High**]")
+                elif p == "medium":
+                    st.markdown("🟡 Medium", unsafe_allow_html=True)
+                elif p == "low":
+                    st.markdown(f":green[**🟢 Low**]")
+                else:
+                    st.write(p)
+
+            # C. Interactive priority dropdown + mark complete button
+            with pc3:
+                # 1. Map labels <--> values
+                label_to_val = {
+                    "🔴 Critical": "critical",
+                    "🟠 High": "high",
+                    "🟡 Medium": "medium",
+                    "🟢 Low": "low",
+                }
+                val_to_label = {v: k for k, v in label_to_val.items()}
+
+                # 2. Preselect current priority
+                opts = list(label_to_val.keys())
+                default = val_to_label.get(priority, opts[-1])
+                idx = opts.index(default)
+
+                # 3. Render the dropdown
+                new_label = st.selectbox(
+                    "Priority",
+                    options=opts,
+                    index=idx,
+                    key=f"prio_select_{project_id}",
+                    label_visibility="collapsed"
+                )
+                new_priority = label_to_val[new_label]
+
+                # 4. Push change when clicked
+                if new_priority != priority:
+                    if st.button("Update Priority", key=f"prio_btn_{project_id}"):
+                        try:
+                            # Send the selected value with the request; this
+                            # assumes the route reads {"priority": ...} from the
+                            # JSON body (without it the server has no way to
+                            # know the new priority).
+                            r = requests.put(
+                                f"http://web-api:4000/goals/{project_id}/priority",
+                                json={"priority": new_priority},
+                                timeout=5
+                            )
+                            if r.status_code == 200:
+                                st.success("Priority updated!")
+                                st.rerun()
+                            else:
+                                st.error(f"Failed ({r.status_code})")
+                        except Exception as e:
+                            st.error(f"Error updating priority of project: {e}")
+
+                # 5. 
And still allow marking complete + st.write("") # spacer + if completed == 0: + if st.button("Mark Complete", key=f"complete_{project_id}"): + try: + response = requests.put(f'http://web-api:4000/goals/{project_id}/complete', timeout=5) + if response.status_code == 200: + st.success("Project marked as completed!") + st.rerun() + else: + st.error(f"Error: {response.status_code}") + except Exception as e: + st.error(f"Error updating project: {str(e)}") + + else: + st.write("✅ Completed") + + st.write("---") + + + +# Action buttons at bottom +st.write("---") +bottom_col1, bottom_col2, bottom_col3 = st.columns(3) + +with bottom_col1: + if st.button("🚨 Create New Project", type="primary", use_container_width=True): + st.switch_page('pages/Add_New_Project.py') + +with bottom_col2: + if st.button("🗑 Delete Project", type="primary", use_container_width=True): + st.switch_page('pages/Delete_Project.py') + +with bottom_col3: + if st.button("🏠 Return To Dashboard", type="primary", use_container_width=True): + st.switch_page('HomePage.py') \ No newline at end of file diff --git a/app/src/pages/JackHomePage.py b/app/src/pages/JackHomePage.py new file mode 100644 index 0000000000..e3b0314334 --- /dev/null +++ b/app/src/pages/JackHomePage.py @@ -0,0 +1,377 @@ +import logging +logger = logging.getLogger(__name__) +logging.basicConfig(format='%(filename)s:%(lineno)s:%(levelname)s -- %(message)s', level=logging.INFO) + +from modules.nav import SideBarLinks +import streamlit as st +import requests +import pandas as pd +import plotly.express as px +import re + +# ---------------------- Utils ---------------------- +HEX_PATTERN = re.compile(r"^#(?:[0-9a-fA-F]{3}){1,2}$") + +def safe_hex_color(c: str, default="#999999") -> str: + if isinstance(c, str) and HEX_PATTERN.match(c.strip()): + return c.strip() + return default + +def normalize_tag_row(row): + """Make backend row (dict or tuple/list) into {'id','name','color'}.""" + if isinstance(row, dict): + return {"id": row.get("id") or row.get("tag_id"), + "name": row.get("name"), + "color": row.get("color")} + if isinstance(row, (list, tuple)): + id_, name, color = (list(row) + [None, None, None])[:3] + return {"id": id_, "name": name, "color": color} + return {"id": None, "name": "Unnamed", "color": "#999999"} + +def is_json_2xx(resp): + return (200 <= resp.status_code < 300) and resp.headers.get("content-type","").lower().startswith("application/json") + +# ---------------------- Page Init ---------------------- +st.set_page_config(layout='wide') +st.session_state['authenticated'] = False +SideBarLinks(show_home=True) + +# keep just IDs of tags created during this session, but persist them via URL +def _get_query_tag_ids(): + try: + csv = st.query_params.get("tag_ids", "") + except Exception: + params = st.experimental_get_query_params() + csv = params.get("tag_ids", [""])[0] if "tag_ids" in params else "" + ids = [] + for tok in (csv.split(",") if csv else []): + tok = tok.strip() + if tok.isdigit(): + ids.append(int(tok)) + return ids + +def _set_query_tag_ids(ids): + csv = ",".join(str(i) for i in ids) + try: + st.query_params["tag_ids"] = csv + except Exception: + st.experimental_set_query_params(tag_ids=csv) + +if "created_tag_ids" not in st.session_state: + st.session_state["created_tag_ids"] = _get_query_tag_ids() + +def _sync_from_query_if_needed(): + q_ids = _get_query_tag_ids() + if q_ids != st.session_state.get("created_tag_ids", []): + st.session_state["created_tag_ids"] = q_ids + +def _sync_to_query(): + 
_set_query_tag_ids(st.session_state["created_tag_ids"])
+
+# Header
+st.title("💼 What's up, Jack?")
+
+# ---------------------- Main Layout ----------------------
+col1, col2 = st.columns([3, 1])
+
+with col1:
+    # SET THE USER_ID FROM SESSION STATE.
+    user_id = st.session_state.get("user_id")
+    if not user_id:
+        st.error("No user ID found. Please log in or select a user profile.")
+        st.stop()
+    # user_id = 4  # In case you refresh the page and it "logs you out".
+
+    # GOALS
+    try:
+        goals = requests.get(f'http://web-api:4000/goals/user/{user_id}/active_and_priority', timeout=5).json()
+    except Exception as e:
+        st.error(f"Could not fetch goals: {e}")
+        goals = []
+
+    goals = [
+        [item.get("id"), item.get("title"), item.get("notes"), item.get("schedule")]
+        for item in goals
+    ]
+
+    # SUBGOALS
+    subgoals = requests.get('http://web-api:4000/goals/subgoals', timeout=5).json()
+    subgoals = [
+        [item.get("goalsId"), item.get("title")]
+        for item in subgoals
+    ]
+
+    # HEADER
+    goal_col1, goal_col2, goal_col3 = st.columns([3, 1, 1.5])
+    with goal_col1:
+        st.subheader("**Goal**", divider=True)
+    with goal_col2:
+        st.subheader("**Tags**", divider=True)
+
+    for goal_id, title, notes, schedule in goals:
+        # Fetch tags for a goal (goal-specific route, unchanged)
+        try:
+            resp = requests.get(f"http://web-api:4000/tags/goals/{goal_id}/tags", timeout=5)
+            tags_raw = resp.json() if resp.headers.get("content-type", "").lower().startswith("application/json") else []
+        except Exception as e:
+            tags_raw = []
+            st.warning(f"Failed to load tags for goal {goal_id}: {e}")
+
+        # Normalize for display
+        tags_for_display = []
+        for t in tags_raw:
+            nt = normalize_tag_row(t)
+            tags_for_display.append({
+                "name": nt.get("name") or "Unnamed",
+                "color": safe_hex_color(nt.get("color"), "#999999")
+            })
+
+        with st.container():
+            g1, g2, g3 = st.columns([3, 1.5, 1])
+
+            with g1:
+                st.write(f":red[**{title}**]")
+                if notes:
+                    st.write(notes)
+                for sub in subgoals:
+                    if sub[0] == goal_id:
+                        st.write(f"- {sub[1]}")
+
+            with g2:
+                if tags_for_display:
+                    # Pill-style tag markup; the exact inline styling is an
+                    # assumption (the original HTML was lost), but the
+                    # name/color interpolation matches the data above.
+                    tag_htmls = [
+                        f"<span style='background-color:{t['color']}; color:white; "
+                        f"padding:2px 8px; border-radius:10px; margin-right:4px;'>{t['name']}</span>"
+                        for t in tags_for_display
+                    ]
+                    st.markdown("".join(tag_htmls), unsafe_allow_html=True)
+                else:
+                    st.write("—")  # placeholder if no tags
+
+            with g3:
+                if st.button("Mark Complete", key=f"complete_{goal_id}"):
+                    try:
+                        response = requests.put(f'http://web-api:4000/goals/{goal_id}/complete', timeout=5)
+                        if response.status_code == 200:
+                            st.success("Goal marked as completed!")
+                            st.rerun()
+                        else:
+                            st.error(f"Error: {response.status_code}")
+                    except Exception as e:
+                        st.error(f"Error updating goal: {str(e)}")
+
+        st.write("---")
+
+with col2:
+    st.header("📊 Goal Status Overview")
+
+    # Fetch goals for charts
+    goals = requests.get(f'http://web-api:4000/goals/user/{user_id}/active_and_priority', timeout=5).json()
+    df = pd.DataFrame(goals)
+
+    if 'status' not in df.columns:
+        df['status'] = 'ACTIVE'
+
+    # Bar chart: goals by status
+    status_counts = df['status'].value_counts().reset_index()
+    status_counts.columns = ['Status', 'Count']
+
+    color_map = {
+        'ACTIVE': 'orange',
+        'PLANNED': 'blue',
+        'ON ICE': 'gray',
+        'ARCHIVED': 'green'
+    }
+
+    st.subheader("Goals by Status")
+    fig = px.bar(
+        status_counts,
+        x='Status',
+        y='Count',
+        color='Status',
+        color_discrete_map=color_map
+    )
+    st.plotly_chart(fig, use_container_width=True)
+
+    # Scatter: goals vs deadline
+    st.subheader("Goals vs Deadline")
+
+    df_scatter = pd.DataFrame(goals)
+    df_scatter['schedule'] = pd.to_datetime(df_scatter.get('schedule', pd.NaT))
+    df_scatter['priority'] = 
df_scatter.get('priority', 'high') + df_scatter['status'] = df_scatter.get('status', 'PLANNED') + df_scatter['title'] = df_scatter.get('title', 'Untitled') + + fig2 = px.scatter( + df_scatter, + x='schedule', + y='priority', + hover_data=['title', 'notes', 'priority'], + labels={'priority': 'Priority', 'schedule': 'Deadline'}, + title='High Priority Goals', + height=500 + ) + + fig2.update_yaxes(showgrid=True, gridcolor='lightgray') + st.plotly_chart(fig2, use_container_width=True) + + +# ---------------------- Tags (server-backed; strict 2xx+JSON; refresh-persistent) ---------------------- +API_BASE = "http://web-api:4000" + +def _first_json_2xx(responses): + """Return first response that is 2xx and application/json.""" + for r in responses: + if r is not None and is_json_2xx(r): + return r + return None + +def _try_paths_json_2xx(method, paths, **kwargs): + """Try paths in order; return first 2xx+JSON response (or first 2xx for DELETE).""" + last_exc = None + results = [] + for p in paths: + try: + if method == "GET": + r = requests.get(API_BASE + p, timeout=kwargs.get("timeout", 5)) + elif method == "POST": + r = requests.post(API_BASE + p, json=kwargs.get("json", {}), timeout=kwargs.get("timeout", 5)) + elif method == "DELETE": + r = requests.delete(API_BASE + p, timeout=kwargs.get("timeout", 5)) + else: + raise ValueError("Unsupported method") + results.append(r) + except Exception as e: + last_exc = e + results.append(None) + + if method in ("GET", "POST"): + ok = _first_json_2xx(results) + if ok is not None: + return ok + else: # DELETE: accept any 2xx (no need for JSON) + for r in results: + if r is not None and 200 <= r.status_code < 300: + return r + + # If none matched, raise the last exception or an informative error + if last_exc: + raise last_exc + raise RuntimeError(f"No {method} path returned 2xx JSON (paths tried: {paths})") + +def api_create_tag(name: str, color: str): + payload = {"name": (name or "").strip(), "color": (color or "").strip()} + # Your blueprint has route "/create_tag". If app registered it with url_prefix="/tags", full path becomes "/tags/create_tag". + return _try_paths_json_2xx("POST", ["/create_tag", "/tags/create_tag"], json=payload) + +def api_delete_tag(tag_id: int): + pid = int(tag_id) + # Your blueprint has "/delete_tag/". With url_prefix="/tags" it becomes "/tags/delete_tag/". + return _try_paths_json_2xx("DELETE", [f"/delete_tag/{pid}", f"/tags/delete_tag/{pid}"]) + +def api_get_tag_by_id(tag_id: int): + pid = int(tag_id) + # Your blueprint route is "/tags/". If url_prefix="/tags", it becomes "/tags/tags/". 
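+    # For example (a sketch): api_get_tag_by_id(7) first tries GET /tags/7 and,
+    # if that does not come back as 2xx JSON, falls back to GET /tags/tags/7 --
+    # the path that matches when the blueprint is mounted with url_prefix="/tags".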
+    return _try_paths_json_2xx("GET", [f"/tags/{pid}", f"/tags/tags/{pid}"])
+
+st.write("### 🏷️ Tags")
+
+# Create
+with st.expander("Create a tag", expanded=True):
+    c1, c2, c3 = st.columns([2, 1, 1])
+    with c1:
+        new_tag_name = st.text_input("Name (optional)", key="tag_new_name")
+    with c2:
+        new_tag_color = st.color_picker("Color (required)", value="#ff9900", key="tag_new_color")
+    with c3:
+        if st.button("Create", key="btn_create_tag", use_container_width=True, type="primary"):
+            if not new_tag_color or not HEX_PATTERN.match(new_tag_color.strip()):
+                st.error("Valid hex color is required.")
+            else:
+                try:
+                    r = api_create_tag(new_tag_name or "", new_tag_color)
+                    # must be 2xx JSON here
+                    data = r.json()
+                    tag_id = data.get("tag_id") or data.get("id")
+                    if tag_id is None:
+                        st.error("Create succeeded but server did not return tag_id.")
+                    else:
+                        try:
+                            rid = int(tag_id)
+                            _sync_from_query_if_needed()
+                            if rid not in st.session_state["created_tag_ids"]:
+                                st.session_state["created_tag_ids"].append(rid)
+                            _sync_to_query()
+                            st.success("Tag created.")
+                            st.rerun()
+                        except Exception:
+                            st.error(f"Invalid tag_id from server: {tag_id}")
+                except Exception as e:
+                    st.error(f"Create failed: {e}")
+
+# Delete by ID (manual)
+with st.expander("Delete a tag by ID", expanded=False):
+    d1, d2 = st.columns([2, 1])
+    with d1:
+        del_id = st.text_input("Tag ID", key="quick_del_id", placeholder="e.g., 42")
+    with d2:
+        if st.button("Delete", key="quick_del_btn", use_container_width=True):
+            did = (del_id or "").strip()
+            if not did.isdigit():
+                st.error("Tag ID must be a number.")
+            else:
+                try:
+                    rid = int(did)
+                    api_delete_tag(rid)  # raises if no 2xx
+                    _sync_from_query_if_needed()
+                    st.session_state["created_tag_ids"] = [x for x in st.session_state["created_tag_ids"] if x != rid]
+                    _sync_to_query()
+                    st.success("Tag deleted.")
+                    st.rerun()
+                except Exception as e:
+                    st.error(f"Delete failed: {e}")
+
+# Render (fetch each from SERVER so name/color persist across refresh)
+_sync_from_query_if_needed()
+server_tags = []
+for tid in st.session_state["created_tag_ids"]:
+    try:
+        r = api_get_tag_by_id(int(tid))  # must be 2xx JSON
+        server_tags.append(r.json())
+    except Exception as e:
+        st.warning(f"Failed to fetch tag {tid}: {e}")
+
+st.caption(f"Created (server): {len(server_tags)} tag(s)")
+
+for idx, t in enumerate(server_tags):
+    # robust extraction
+    tid = t.get("id") or t.get("tag_id")
+    name = t.get("name") or ""
+    color = safe_hex_color(t.get("color"), "#999999")
+    try:
+        tid_int = int(tid)
+    except Exception:
+        tid_int = None
+    uniq = f"{tid}_{idx}"
+
+    with st.container():
+        r1, r2 = st.columns([5, 1])
+        with r1:
+            st.write(f"**{name}** · `{tid}`")
+            # Color swatch markup (the original inline HTML was lost; this
+            # span is a reconstruction that simply renders `color` as a block):
+            st.markdown(
+                f"<span style='display:inline-block; width:60px; height:18px; "
+                f"background:{color}; border-radius:4px;'></span>", unsafe_allow_html=True
+            )
+        with r2:
+            if st.button("Delete", key=f"btn_delete_tag_{uniq}", use_container_width=True):
+                if tid_int is None:
+                    st.error("Cannot delete: invalid tag id.")
+                else:
+                    try:
+                        api_delete_tag(tid_int)  # raises if no 2xx
+                        _sync_from_query_if_needed()
+                        st.session_state["created_tag_ids"] = [x for x in st.session_state["created_tag_ids"] if x != tid_int]
+                        _sync_to_query()
+                        st.success("Tag deleted.")
+                        st.rerun()
+                    except Exception as e:
+                        st.error(f"Delete failed: {e}")
+    st.write("---")
diff --git a/app/src/pages/JoseHomePage.py b/app/src/pages/JoseHomePage.py
new file mode 100644
index 0000000000..f983e115ee
--- /dev/null
+++ b/app/src/pages/JoseHomePage.py
@@ -0,0 +1,149 @@
+import logging
+logger = logging.getLogger(__name__)
+logging.basicConfig(format='%(filename)s:%(lineno)s:%(levelname)s -- %(message)s', level=logging.INFO)
+from modules.nav import SideBarLinks
+import streamlit as st
+import requests
+import pandas as pd
+import plotly.express as px
+
+
+st.set_page_config(layout='wide')
+st.session_state['authenticated'] = False
+SideBarLinks(show_home=True)
+
+# Header
+st.title("🛠️ What's up, Jose?")
+
+# Create main layout: left column (system issues) and right column (quick actions + charts)
+col1, col2 = st.columns([2, 1])
+
+
+
+with col1:
+    st.write("### Uncompleted Bugs")
+
+    bugs = requests.get('http://web-api:4000/support/bugs', timeout=5).json()
+    bugs = [list(item.values()) for item in bugs]
+    #0 - completed
+    #1 - title
+    #2 - id
+    #3 - priority
+    #4 - desc
+
+    st.write("---")
+    bug_col1, bug_col2, bug_col3 = st.columns([2, 1, 1])
+    with bug_col1: st.write("**Bug**")
+    with bug_col2: st.write("**Priority**")
+    with bug_col3: st.write("**Completion**")
+    st.write("---")
+
+    for bug in bugs:
+        with st.container():
+
+            bug_col1, bug_col2, bug_col3 = st.columns([2, 1, 1])
+            with bug_col1:
+                st.write(f":red[**{bug[1]}**]")  # title
+                st.write(bug[4])  # desc
+            with bug_col2:
+                p = bug[3]
+                if p == "critical":
+                    st.markdown(f":red[**🔴 Critical!**]")
+                elif p == "high":
+                    st.markdown(f":orange[**🟠 High**]")
+                elif p == "medium":
+                    st.markdown("🟡 Medium", unsafe_allow_html=True)
+                elif p == "low":
+                    st.markdown(f":green[**🟢 Low**]")
+                else:
+                    st.write(p)
+            with bug_col3:
+                if bug[0] == 0:
+                    if st.button("Mark Complete", key=f"complete_{bug[2]}"):
+                        try:
+                            response = requests.put(f'http://web-api:4000/support/bugs/{bug[2]}/complete', timeout=5)
+                            if response.status_code == 200:
+                                st.success("Bug marked as completed!")
+                                st.rerun()  # Refresh the page to show updated status
+                            else:
+                                st.error(f"Error: {response.status_code}")
+                        except Exception as e:
+                            st.error(f"Error updating bug: {str(e)}")
+            st.write("---")
+
+
+
+with col2:
+
+    # System Charts Section
+    st.write("### 📊 App Statistics")
+    userstats = requests.get('http://web-api:4000/users/appstats', timeout=5).json()
+    userstats = [list(item.values()) for item in userstats]
+
+    def make_userstats(data):
+        df = pd.DataFrame(data, columns=['registration_date', 'user_id'])
+
+        # Convert string dates to datetime
+        df['registration_date'] = pd.to_datetime(df['registration_date'])
+        df['date'] = df['registration_date'].dt.date
+
+        # Cumulative user count, ordered by registration date
+        df = df.sort_values('date').reset_index(drop=True)
+        df['user_count'] = range(1, len(df) + 1)
+
+        return df
+    df = make_userstats(userstats)
+
+    fig_users = px.line(df, x='date', y='user_count', title="User Growth Trends")
+    fig_users.update_layout(height=200, showlegend=True, title_font_size=12, margin=dict(l=0, r=0, t=30, b=0))
+    st.plotly_chart(fig_users, use_container_width=True)
+
+    # 
# Bug status pie chart + # bug_data = pd.DataFrame({ + # 'Status': ['Fixed', 'In Progress', 'Open', 'Testing'], + # 'Count': [15, 8, 5, 3] + # }) + + # fig_bugs = px.pie(bug_data, values='Count', names='Status', + # title="Bug Report Status", + # color_discrete_map={'Fixed': '#2ca02c', + # 'In Progress': '#ff7f0e', + # 'Open': '#d62728', + # 'Testing': '#9467bd'}) + # fig_bugs.update_layout(height=200, title_font_size=12, + # margin=dict(l=0, r=0, t=30, b=0)) + # st.plotly_chart(fig_bugs, use_container_width=True) + +# Bottom metrics section +# st.write("---") +# st.write("### 📈 SYSTEM METRICS") + +# metric_col1, metric_col2, metric_col3, metric_col4 = st.columns(4) + +# with metric_col1: +# st.metric( +# label="Total Users", +# value="2,100", +# delta="180 new this month" +# ) + +# with metric_col2: +# st.metric( +# label="Active Bug Reports", +# value="13", +# delta="-5 resolved today" +# ) + +# with metric_col3: +# st.metric( +# label="System Uptime", +# value="99.8%", +# delta="0.2% improvement" +# ) + +# with metric_col4: +# st.metric( +# label="User Satisfaction", +# value="4.6/5", +# delta="0.3 increase" +# ) diff --git a/app/src/pages/On_Ice.py b/app/src/pages/On_Ice.py new file mode 100644 index 0000000000..70f717c7e7 --- /dev/null +++ b/app/src/pages/On_Ice.py @@ -0,0 +1,37 @@ +import streamlit as st +import mysql.connector +import pandas as pd + +st.title("Backlog") + +@st.cache_resource +def init_connection(): + return mysql.connector.connect( + host="localhost", + port=3306, + user="root", + password="1203", + database="global-GoalFlow" + ) + +def run_query(query, params=None): + conn = init_connection() + cursor = conn.cursor() + cursor.execute(query, params or ()) + result = cursor.fetchall() + cursor.close() + return result + +on_ice = run_query("SELECT title, notes FROM goals WHERE status = 'ON ICE'") +print(on_ice) + +for goal in on_ice: + col1, col2 = st.columns([4, 1]) + + with col1: + st.write(f"**{goal[1]}**") + + with col2: + if st.button("Activate", key=f"activate_{goal[0]}"): + # activate_goal(goal[0]) + st.rerun() \ No newline at end of file diff --git a/database-files/01_gflow_db.sql b/database-files/01_gflow_db.sql new file mode 100644 index 0000000000..2d19dd3d40 --- /dev/null +++ b/database-files/01_gflow_db.sql @@ -0,0 +1,365 @@ +-- Drops the old, and recreates the database. Then use it. +DROP DATABASE IF EXISTS `global-GoalFlow`; +CREATE DATABASE `global-GoalFlow`; +USE `global-GoalFlow`; + + + + + +-- USERS TABLE +DROP TABLE IF EXISTS users; +CREATE TABLE IF NOT EXISTS users ( +firstName VARCHAR(50) NOT NULL, +middleName VARCHAR(50), +lastName VARCHAR(50) NOT NULL, +phoneNumber VARCHAR(15), +email VARCHAR(75) NOT NULL, +role ENUM('user', 'manager', 'admin') NOT NULL, +planType ENUM('free', 'standard', 'enterprise') NOT NULL DEFAULT 'free', +manages INT, +id INT AUTO_INCREMENT NOT NULL, -- DIF. NAME + +PRIMARY KEY (id), + +UNIQUE INDEX uq_idx_phoneNumber (phoneNumber), +UNIQUE INDEX uq_idx_email (email), +INDEX idx_manages (manages), +INDEX idx_role (role), + +FOREIGN KEY (manages) REFERENCES users(id) + ON DELETE SET NULL + ON UPDATE CASCADE +); + + + + + +-- TAGS TABLE +DROP TABLE IF EXISTS tags; +CREATE TABLE IF NOT EXISTS tags ( +name VARCHAR(50), +color VARCHAR(7) NOT NULL DEFAULT '#ffffff', +id INT AUTO_INCREMENT NOT NULL, -- DIF. 
NAME + +PRIMARY KEY (id), + +INDEX idx_name (name) +); + + + + + +-- POSTS TABLE +DROP TABLE IF EXISTS posts; +CREATE TABLE IF NOT EXISTS posts ( +authorId INT NOT NULL, -- NEEDED EXTRA ATTRIBUTE +title VARCHAR(75) NOT NULL, +metaTitle VARCHAR(100), -- EXTRA ATTRIBUTE +createdAt DATETIME NOT NULL, +updatedAt DATETIME, -- EXTRA ATTRIBUTE +publishedAt DATETIME, -- EXTRA ATTRIBUTE +content TEXT, +id INT AUTO_INCREMENT NOT NULL, -- DIF. NAME + +PRIMARY KEY (id), + +INDEX idx_authorId (authorId), + +FOREIGN KEY (authorId) REFERENCES users(id) + ON UPDATE RESTRICT + ON DELETE CASCADE +); + + + + + +-- POSTS_TAGS - (BRIDGE TABLE) +DROP TABLE IF EXISTS posts_tags; +CREATE TABLE posts_tags ( +post_id INT NOT NULL, +tag_id INT NOT NULL, + +PRIMARY KEY (post_id, tag_id), + +INDEX idx_posts_tags_post (post_id), +INDEX idx_posts_tags_tag (tag_id), + +FOREIGN KEY (post_id) REFERENCES posts(id) + ON DELETE CASCADE + ON UPDATE CASCADE, + +FOREIGN KEY (tag_id) REFERENCES tags(id) + ON DELETE CASCADE + ON UPDATE CASCADE +); + + + + + +-- POST_REPLY TABLE +DROP TABLE IF EXISTS post_reply; +CREATE TABLE IF NOT EXISTS post_reply ( +userId INT NOT NULL, -- NEEDED EXTRA ATTRIBUTE +postId INT NOT NULL, -- NEEDED EXTRA ATTRIBUTE +title VARCHAR(100) NOT NULL, +createdAt DATETIME NOT NULL, +publishedAt DATETIME, -- EXTRA ATTRIBUTE +content TEXT, +id INT AUTO_INCREMENT NOT NULL, -- DIF. NAME + +PRIMARY KEY (id), + +INDEX index_userId (userId), +INDEX index_postId (postId), + +FOREIGN KEY (userId) REFERENCES users(id) + ON UPDATE RESTRICT + ON DELETE CASCADE, + +FOREIGN KEY (postId) REFERENCES posts(id) + ON UPDATE RESTRICT + ON DELETE CASCADE +); + + + + +-- POST_REPLY_TAGS - (BRIDGE TABLE) +DROP TABLE IF EXISTS post_reply_tags; +CREATE TABLE post_reply_tags ( +post_reply_id INT NOT NULL, +tag_id INT NOT NULL, + +PRIMARY KEY (post_reply_id, tag_id), + +INDEX idx_post_reply_tags_pr (post_reply_id), +INDEX idx_post_reply_tags_tag (tag_id), + +FOREIGN KEY (post_reply_id) REFERENCES post_reply(id) + ON DELETE CASCADE + ON UPDATE CASCADE, + +FOREIGN KEY (tag_id) REFERENCES tags(id) + ON DELETE CASCADE + ON UPDATE CASCADE +); + + + + + +-- USER_DATA TABLE +DROP TABLE IF EXISTS user_data; +CREATE TABLE IF NOT EXISTS user_data ( +userId INT NOT NULL, -- NEEDED EXTRA ATTRIBUTE +location VARCHAR(100), -- city name, state.? +deviceType ENUM('mobile', 'tablet', 'desktop') NOT NULL, +age TINYINT UNSIGNED, +registeredAt VARCHAR(100) NOT NULL, +lastLogin DATETIME, -- EXTRA ATTRIBUTE +isActive TINYINT(1) NOT NULL DEFAULT 1, -- EXTRA ATTRIBUTE +id INT AUTO_INCREMENT NOT NULL, -- DIF. NAME + +PRIMARY KEY (id), + +INDEX idx_userId (userId), +INDEX idx_deviceType (deviceType), +INDEX idx_lastLogin (lastLogin), + +FOREIGN KEY (userId) REFERENCES users(id) + ON UPDATE RESTRICT + ON DELETE CASCADE +); + + + + + +-- BUG_REPORTS TABLE +DROP TABLE IF EXISTS bug_reports; +CREATE TABLE IF NOT EXISTS bug_reports ( +userId INT NOT NULL, -- NEEDED EXTRA ATTRIBUTE +title VARCHAR(75) NOT NULL, +metaTitle VARCHAR(100), -- EXTRA ATTRIBUTE +description TEXT, +dateReported DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP, +completed TINYINT(1) NOT NULL DEFAULT 0, -- 0 = Not Completed, 1 = Completed +priority ENUM('critical', 'high', 'medium', 'low') NOT NULL DEFAULT 'low', -- where 1 = critical, 2 = high, 3 = medium, 4 = low +id INT AUTO_INCREMENT NOT NULL, -- DIF. 
NAME + +PRIMARY KEY (id), + +INDEX idx_copmleted (completed), +INDEX idx_priority (priority), +INDEX idx_dateReported (dateReported), + +FOREIGN KEY (userId) REFERENCES users(id) + ON UPDATE RESTRICT + ON DELETE CASCADE +); + + + + + +-- CONSISTENT_TASKS TABLE +DROP TABLE IF EXISTS consistent_tasks; +CREATE TABLE IF NOT EXISTS consistent_tasks ( +userId INT NOT NULL, -- NEEDED EXTRA ATTRIBUTE +title VARCHAR(75) NOT NULL, +metaTitle VARCHAR(100), -- EXTRA ATTRIBUTE +category VARCHAR(100), +createdAt DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP, +notes TEXT, +id INT AUTO_INCREMENT NOT NULL, -- DIF. NAME + +PRIMARY KEY (id), + +INDEX idx_category (category), +INDEX idx_createdAt (createdAt), + +FOREIGN KEY (userId) REFERENCES users(id) + ON UPDATE RESTRICT + ON DELETE CASCADE +); + + + + + +-- DAILY_TASKS TABLE +DROP TABLE IF EXISTS daily_tasks; +CREATE TABLE IF NOT EXISTS daily_tasks ( +userId INT NOT NULL, -- NEEDED EXTRA ATTRIBUTE +title VARCHAR(75) NOT NULL, +metaTitle VARCHAR(100), -- EXTRA ATTRIBUTE +status ENUM('ON ICE', 'PLANNED', 'ACTIVE', 'ARCHIVED') NOT NULL DEFAULT 'PLANNED', -- ON ICE, PLANNED, ACTIVE, ARCHIVED +completed TINYINT(1) NOT NULL DEFAULT 0, +schedule DATE, +notes TEXT, +id INT AUTO_INCREMENT NOT NULL, -- DIF. NAME + +PRIMARY KEY (id), + +INDEX idx_userId (userId), +INDEX idx_schedule (schedule), +INDEX idx_status (status), + +FOREIGN KEY (userId) REFERENCES users(id) + ON UPDATE RESTRICT + ON DELETE CASCADE +); + + + + + +-- DAILY_TASKS_TAGS - (BRIDGE TABLE) +DROP TABLE IF EXISTS daily_tasks_tags; +CREATE TABLE daily_tasks_tags ( +daily_task_id INT NOT NULL, +tag_id INT NOT NULL, + +PRIMARY KEY (daily_task_id, tag_id), + +INDEX idx_daily_tasks_tags_dt (daily_task_id), +INDEX idx_daily_tasks_tags_tag (tag_id), + +FOREIGN KEY (daily_task_id) REFERENCES daily_tasks(id) + ON DELETE CASCADE + ON UPDATE CASCADE, + +FOREIGN KEY (tag_id) REFERENCES tags(id) + ON DELETE CASCADE + ON UPDATE CASCADE +); + + + + + +-- GOALS TABLE +DROP TABLE IF EXISTS goals; +CREATE TABLE IF NOT EXISTS goals ( +userId INT NOT NULL, -- NEEDED EXTRA ATTRIBUTE +title VARCHAR(75) NOT NULL, +notes TEXT, +onIce TINYINT(1) NOT NULL DEFAULT 0, +status ENUM('ON ICE', 'PLANNED', 'ACTIVE', 'ARCHIVED') NOT NULL DEFAULT 'PLANNED', -- ON ICE, PLANNED, ACTIVE, ARCHIVED +priority ENUM('critical', 'high', 'medium', 'low') NOT NULL DEFAULT 'low', -- where 1 = critical, 2 = high, 3 = medium, 4 = low +createdAt DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP, +completedAt DATETIME, +completed TINYINT(1) NOT NULL DEFAULT 0, -- EXTRA ATTRIBUTE, NOT SURE +schedule DATE, -- ADDED ? +id INT AUTO_INCREMENT NOT NULL, -- DIF. 
NAME + +PRIMARY KEY (id), + +INDEX idx_userId (userId), +INDEX idx_status (status), +INDEX idx_priority (priority), +INDEX idx_createdAt (createdAt), +INDEX idx_completedAt (completedAt), + +FOREIGN KEY (userId) REFERENCES users(id) + ON UPDATE RESTRICT + ON DELETE CASCADE +); + + + + + +-- GOALS_TAGS - (BRIDGE TABLE) +DROP TABLE IF EXISTS goals_tags; +CREATE TABLE goals_tags ( +goal_id INT NOT NULL, +tag_id INT NOT NULL, + +PRIMARY KEY (goal_id, tag_id), + +INDEX idx_goals_tags_goal (goal_id), +INDEX idx_goals_tags_tag (tag_id), + +FOREIGN KEY (goal_id) REFERENCES goals(id) + ON DELETE CASCADE + ON UPDATE CASCADE, + +FOREIGN KEY (tag_id) REFERENCES tags(id) + ON DELETE CASCADE + ON UPDATE CASCADE +); + + + + + +-- SUBGOALS TABLE +DROP TABLE IF EXISTS subgoals; +CREATE TABLE IF NOT EXISTS subgoals ( +goalsId INT NOT NULL, -- NEEDED EXTRA ATTRIBUTE +title VARCHAR(75) NOT NULL, +notes TEXT, +status ENUM('ON ICE', 'PLANNED', 'ACTIVE', 'ARCHIVED') NOT NULL DEFAULT 'PLANNED', -- ON ICE, PLANNED, ACTIVE, ARCHIVED +createdAt DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP, -- EXTRA ATTRIBUTE +completedAt DATETIME, -- EXTRA ATTRIBUTE +completed TINYINT(1) NOT NULL DEFAULT 0, +schedule DATE, -- ADDED ? +id INT AUTO_INCREMENT NOT NULL, -- DIF. NAME + +PRIMARY KEY (id), + +INDEX idx_goalsId (goalsId), +INDEX idx_status (status), +INDEX idx_createdAt (createdAt), +INDEX idx_completedAt (completedAt), + +FOREIGN KEY (goalsId) REFERENCES goals(id) + ON UPDATE RESTRICT + ON DELETE CASCADE +); \ No newline at end of file diff --git a/database-files/02_gflow_mockdata.sql b/database-files/02_gflow_mockdata.sql new file mode 100644 index 0000000000..34ebf9f05a --- /dev/null +++ b/database-files/02_gflow_mockdata.sql @@ -0,0 +1,1018 @@ +-- Enter the SQL database. +USE `global-GoalFlow`; +SET FOREIGN_KEY_CHECKS = 0; + + + + + +-- ========== USERS (40 rows) ========== +INSERT INTO users (id, firstName, middleName, lastName, phoneNumber, email, role, planType, manages) +VALUES +-- 1..4: Personas (must be first four) +(1, 'Avery', NULL, 'Reyes', '555-1001', 'avery.reyes@example.com', 'user', 'standard', NULL), -- Freelance Designer +(2, 'Alan', 'J.', 'Jaden', '555-1002', 'alan.jaden@university.edu', 'user', 'standard', NULL), -- Dr. 
Alan +(3, 'Jose', NULL, 'Rivera', '555-1003', 'jose@goalflow.co', 'admin', 'enterprise', NULL), -- Jose +(4, 'Jack', NULL, 'Morris', '555-1004', 'jack.morris@meta.example', 'manager', 'enterprise', NULL), -- Jack +-- 5..40: Additional realistic users to fill out dataset +(5, 'Taylor', NULL, 'Nguyen', '555-1005', 'taylor.nguyen@example.com', 'user', 'free', 2), +(6, 'Sam', NULL, 'Patel', '555-1006', 'sam.patel@example.com', 'user', 'free', 2), +(7, 'Maya', NULL, 'Ortiz', '555-1007', 'maya.ortiz@example.com', 'user', 'standard', 2), +(8, 'Priya', NULL, 'Singh', '555-1008', 'priya.singh@example.com', 'user', 'standard', 3), +(9, 'Omar', NULL, 'Khan', '555-1009', 'omar.khan@example.com', 'user', 'free', 3), +(10, 'Lena', NULL, 'Gonzalez', '555-1010', 'lena.gonzalez@example.com', 'user', 'standard', 3), +(11, 'Noah', NULL, 'White', '555-1011', 'noah.white@example.com', 'user', 'free', 2), +(12, 'Ivy', NULL, 'Cole', '555-1012', 'ivy.cole@example.com', 'user', 'standard', 5), +(13, 'Ben', NULL, 'Miller', '555-1013', 'ben.miller@example.com', 'user', 'free', 5), +(14, 'Rosa', NULL, 'Torres', '555-1014', 'rosa.torres@example.com', 'user', 'standard', 2), +(15, 'Ethan', NULL, 'Gomez', '555-1015', 'ethan.gomez@example.com', 'user', 'free', 3), +(16, 'Zoe', NULL, 'Hart', '555-1016', 'zoe.hart@example.com', 'user', 'standard', 3), +(17, 'Miguel', NULL, 'Reyes', '555-1017', 'miguel.reyes@example.com', 'user', 'free', 4), +(18, 'Lila', NULL, 'Bennett', '555-1018', 'lila.bennett@example.com', 'user', 'standard', 4), +(19, 'Connor', NULL, 'Walsh', '555-1019', 'connor.walsh@example.com', 'user', 'free', 3), +(20, 'Asha', NULL, 'Desai', '555-1020', 'asha.desai@example.com', 'user', 'standard', 2), +(21, 'Ravi', NULL, 'Kapoor', '555-1021', 'ravi.kapoor@example.com', 'user', 'free', 3), +(22, 'Nina', NULL, 'Park', '555-1022', 'nina.park@example.com', 'user', 'standard', 2), +(23, 'Carl', NULL, 'Bell', '555-1023', 'carl.bell@example.com', 'user', 'free', 5), +(24, 'Marta', NULL, 'Santos', '555-1024', 'marta.santos@example.com', 'user', 'standard', 5), +(25, 'Hank', NULL, 'Yates', '555-1025', 'hank.yates@example.com', 'user', 'free', 2), +(26, 'Gina', NULL, 'Ivers', '555-1026', 'gina.ivers@example.com', 'user', 'standard', 2), +(27, 'Leo', NULL, 'Park', '555-1027', 'leo.park@example.com', 'user', 'free', 4), +(28, 'Wendy', NULL, 'Lopez', '555-1028', 'wendy.lopez@example.com', 'user', 'standard', 4), +(29, 'Tomas', NULL, 'Silva', '555-1029', 'tomas.silva@example.com', 'user', 'free', 2), +(30, 'Kara', NULL, 'Evans', '555-1030', 'kara.evans@example.com', 'user', 'standard', 2), +(31, 'Drew', NULL, 'Fleming', '555-1031', 'drew.fleming@example.com', 'user', 'free', 3), +(32, 'Hana', NULL, 'Kato', '555-1032', 'hana.kato@example.com', 'user', 'standard', 3), +(33, 'Oli', NULL, 'Simmons', '555-1033', 'oli.simmons@example.com', 'user', 'free', 2), +(34, 'Bea', NULL, 'Marshall', '555-1034', 'bea.marshall@example.com', 'user', 'standard', 3), +(35, 'Arun', NULL, 'Shah', '555-1035', 'arun.shah@example.com', 'user', 'free', 2), +(36, 'Jill', NULL, 'Parker', '555-1036', 'jill.parker@example.com', 'user', 'standard', 5), +(37, 'Kyle', NULL, 'Reid', '555-1037', 'kyle.reid@example.com', 'user', 'free', 3), +(38, 'Sofia', NULL, 'Mendez', '555-1038', 'sofia.mendez@example.com', 'user', 'standard', 4), +(39, 'Paul', NULL, 'Nash', '555-1039', 'paul.nash@example.com', 'user', 'free', 2), +(40, 'Rina', NULL, 'Okafor', '555-1040', 'rina.okafor@example.com', 'user', 'standard', 2); + + + + + +-- ========== TAGS (30 rows) ========== 
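+-- Colors below are 7-character hex strings (schema: VARCHAR(7), DEFAULT '#ffffff');
+-- the frontend's safe_hex_color() helper falls back to '#999999' for anything malformed.
+-- Example lookup (a sketch, not part of the seed data):
+--   SELECT id, name FROM tags WHERE color = '#ff0000';  -- returns the 'Urgent' tag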
+INSERT INTO tags (id, name, color) +VALUES +(1, 'Urgent', '#ff0000'), +(2, 'Feature', '#00aaff'), +(3, 'Bug', '#ff8800'), +(4, 'Research', '#ffd700'), +(5, 'Personal', '#ff00ff'), +(6, 'Maintenance', '#008800'), +(7, 'LowPriority', '#cccccc'), +(8, 'HighPriority', '#cc0000'), +(9, 'UX', '#ff66cc'), +(10, 'Backend', '#3333ff'), +(11, 'Frontend', '#33ccff'), +(12, 'DevOps', '#00cc66'), +(13, 'Analytics', '#6600ff'), +(14, 'Design', '#ff9966'), +(15, 'QA', '#ff0066'), +(16, 'Security', '#990000'), +(17, 'Customer', '#0066ff'), +(18, 'Integration', '#009999'), +(19, 'Alpha', '#66ff66'), +(20, 'Experiment', '#9999ff'), +(21, 'Onboarding', '#ffcc00'), +(22, 'Docs', '#777777'), +(23, 'Mobile', '#cc99ff'), +(24, 'Web', '#339900'), +(25, 'Performance', '#ff4444'), +(26, 'Compliance', '#4444ff'), +(27, 'Refactor', '#aaaaaa'), +(28, 'Sprint', '#00aa88'), +(29, 'Holiday', '#ffb6c1'), +(30, 'Ideas', '#a0522d'); + + + + + +-- ========== POSTS (40 rows) ========== +INSERT INTO posts (id, authorId, title, metaTitle, createdAt, updatedAt, publishedAt, content) +VALUES +-- 1..8: Persona Posts +(1, 1, 'My Portfolio Process', NULL, '2025-05-03 09:00:00', NULL, '2025-05-03 09:00:00', 'Phase-driven workflow: research → sketches → deliverables.'), +(2, 1, 'Daily Creative Habit Tips', NULL, '2025-06-01 08:30:00', NULL, '2025-06-01 08:30:00', 'How to keep a daily sketch habit as a freelancer.'), +(3, 2, 'Balancing Research & Teaching', NULL, '2024-09-01 10:00:00', NULL, '2024-09-01 10:00:00', 'Strategies to keep research progress while teaching.'), +(4, 2, 'Robust Estimators — Paper Plan', NULL, '2025-01-15 11:00:00', NULL, '2025-01-15 11:00:00', 'Outline and milestones for the statistical paper.'), +(5, 3, 'Ops: How We Triage Bugs', NULL, '2024-06-10 09:00:00', NULL, '2024-06-10 09:00:00', 'Triage rules and prioritization for sysadmins.'), +(6, 3, 'Community AMA: Office Hours', NULL, '2025-07-01 12:00:00', NULL, '2025-07-01 12:00:00', 'Monthly office hours: ask the founder anything.'), +(7, 4, 'Finance: Weekly P&L Snapshot', NULL, '2025-02-01 08:00:00', NULL, '2025-02-01 08:00:00', 'How I review revenue and expenses each morning.'), +(8, 4, 'Plan to Increase Profit 5%', NULL, '2025-02-02 09:00:00', NULL, '2025-02-02 09:00:00', 'Tactics and daily adjustments to reach profit targets.'), +-- 8..40: Additional realistic users posts to fill out dataset +(9, 5, 'Onboarding Checklist Improvements', NULL, '2025-03-01 09:30:00', NULL, '2025-03-01 09:30:00', 'Checklist to reduce time-to-value for new users.'), +(10, 6, 'Release Notes — April', NULL, '2025-04-02 10:00:00', NULL, '2025-04-02 10:00:00', 'What shipped in April.'), +(11, 7, 'Design Tokens Update', NULL, '2025-05-05 11:00:00', NULL, '2025-05-05 11:00:00', 'Updated spacing and color tokens.'), +(12, 8, 'Performance Benchmark Results', NULL, '2025-05-10 15:00:00', NULL, '2025-05-10 15:00:00', 'Benchmarks after DB and cache changes.'), +(13, 9, 'Mobile Layout Fixes', NULL, '2025-05-20 09:00:00', NULL, '2025-05-20 09:00:00', 'Fixes for iOS and Android UI issues.'), +(14, 10, 'How to Write Better Docs', NULL, '2025-06-01 09:00:00', NULL, '2025-06-01 09:00:00', 'Structure and tone guidance for docs.'), +(15, 11, 'QA: Running Regressions Locally', NULL, '2025-06-05 10:00:00', NULL, '2025-06-05 10:00:00', 'How to run regression suite on dev machine.'), +(16, 12, 'User Story Examples', NULL, '2025-06-10 09:00:00', NULL, '2025-06-10 09:00:00', 'Good templates for writing stories.'), +(17, 13, 'Analytics: Retention Cohort', NULL, '2025-07-01 08:00:00', NULL, '2025-07-01 
08:00:00', 'Breaking down cohorts for retention analysis.'), +(18, 14, 'Security Checklist', NULL, '2025-07-05 09:00:00', NULL, '2025-07-05 09:00:00', 'Checklist for release security review.'), +(19, 15, 'Ideas Board: Feature Suggestions', NULL, '2025-07-08 10:00:00', NULL, '2025-07-08 10:00:00', 'Collecting feature ideas from customers.'), +(20, 16, 'Refactor: Notifications Service', NULL, '2025-07-10 11:00:00', NULL, '2025-07-10 11:00:00', 'Why the service was refactored.'), +(21, 17, 'Partner Integration Notes', NULL, '2025-07-14 09:30:00', NULL, '2025-07-14 09:30:00', 'Docs for partner connectors.'), +(22, 18, 'Holiday Release Plan', NULL, '2025-07-20 12:00:00', NULL, '2025-07-20 12:00:00', 'Planned features for holiday campaign.'), +(23, 19, 'Community Highlights', NULL, '2025-07-25 15:00:00', NULL, '2025-07-25 15:00:00', 'Top community contributions this month.'), +(24, 20, 'Daily Habit: 10-minute Write', NULL, '2025-07-28 07:00:00', NULL, '2025-07-28 07:00:00', 'How short daily habits build consistency.'), +(25, 21, 'DB Backup Best Practices', NULL, '2025-07-30 09:00:00', NULL, '2025-07-30 09:00:00', 'Backup strategies and retention'), +(26, 22, 'Integrations: Zapier Guide', NULL, '2025-08-01 10:00:00', NULL, '2025-08-01 10:00:00', 'Connect GoalFlow to Zapier.'), +(27, 23, 'UX Microcopy Examples', NULL, '2025-08-02 09:30:00', NULL, '2025-08-02 09:30:00', 'CTAs and label examples.'), +(28, 24, 'Sprint Plan Template', NULL, '2025-08-03 08:45:00', NULL, '2025-08-03 08:45:00', 'Template for sprint planning.'), +(29, 25, 'How to Set Priorities', NULL, '2025-08-04 10:20:00', NULL, '2025-08-04 10:20:00', 'Priority framework for teams.'), +(30, 26, 'Refactor: Jobs Queue', NULL, '2025-08-05 11:00:00', NULL, '2025-08-05 11:00:00', 'Improve reliability of background jobs.'), +(31, 27, 'Experiment Backlog', NULL, '2025-08-06 09:00:00', NULL, '2025-08-06 09:00:00', 'Ideas and experiments to test.'), +(32, 28, 'Design Roadmap Q3', NULL, '2025-08-07 10:00:00', NULL, '2025-08-07 10:00:00', 'Planned design initiatives for Q3.'), +(33, 29, 'Monthly Metrics - July', NULL, '2025-08-08 08:00:00', NULL, '2025-08-08 08:00:00', 'Key metrics for July.'), +(34, 30, 'Projection Charts', NULL, '2025-08-09 09:00:00', NULL, '2025-08-09 09:00:00', 'Charts to help long-term planning.'), +(35, 31, 'API Pagination Best Practices', NULL, '2025-08-09 11:00:00', NULL, '2025-08-09 11:00:00', 'Cursor vs page-number discussion.'), +(36, 32, 'Community Forum: Getting Started', NULL, '2025-08-09 12:00:00', NULL, '2025-08-09 12:00:00', 'How to engage in the forum constructively.'), +(37, 33, 'Retention Experiment Plan', NULL, '2025-08-09 13:00:00', NULL, '2025-08-09 13:00:00', 'Design for increasing retention.'), +(38, 34, 'Docs: Release Notes Style', NULL, '2025-08-09 14:00:00', NULL, '2025-08-09 14:00:00', 'Tone and structure guidance.'), +(39, 35, 'Bug Bash: How to Run One', NULL, '2025-08-09 15:00:00', NULL, '2025-08-09 15:00:00', 'Run an effective bug-finding session.'), +(40, 36, 'Community Spotlight: Volunteer', NULL, '2025-08-09 16:00:00', NULL, '2025-08-09 16:00:00', 'Highlight a community volunteer.'); + + + + + +-- ========== POSTS_TAGS (40 rows) ========== +INSERT INTO posts_tags (post_id, tag_id) +VALUES +-- 1..8: Persona Posts +(1, 14), +(2, 5), +(3, 4), +(4, 4), +(5, 12), +(6, 21), +(7, 25), +(8, 25), +-- 8..40: Additional realistic users posts to fill out dataset +(9, 21), +(10, 2), +(11, 14), +(12, 25), +(13, 23), +(14, 22), +(15, 15), +(16, 28), +(17, 13), +(18, 16), +(19, 30), +(20, 27), +(21, 18), +(22, 
29), +(23, 21), +(24, 5), +(25, 6), +(26, 18), +(27, 9), +(28, 28), +(29, 8), +(30, 27), +(31, 20), +(32, 14), +(33, 13), +(34, 13), +(35, 11), +(36, 21), +(37, 20), +(38, 22), +(39, 15), +(40, 21); + + + + + +-- ========== POST_REPLY (150 rows) ========== +INSERT INTO post_reply (id, userId, postId, title, createdAt, publishedAt, content) +VALUES +(1, 5, 1, 'Nice structure — thanks!', '2025-05-03 10:02:00', '2025-05-03 10:02:00', 'This structure makes it easy to plan milestones. Do you use a template for case studies?'), +(2, 6, 1, 'Do you timebox phases?', '2025-05-03 10:18:00', NULL, 'Do you set fixed timeboxes for research vs. build, or let them flex by project?'), +(3, 1, 2, 'Love the habit idea', '2025-06-01 09:05:00', '2025-06-01 09:05:00', 'I’ve been doing a 20-minute sketch a day — your prompt list idea is helpful.'), +(4, 7, 3, 'Balancing tips are great', '2024-09-01 11:30:00', NULL, 'Scheduling fixed research blocks around class time is exactly what I needed.'), +(5, 8, 4, 'Suggestion for simulations', '2025-01-16 09:35:00', NULL, 'Consider adding a reproducibility subsection (random seeds, environment) to the methods.'), +(6, 3, 5, 'Nice triage flow', '2024-06-11 09:20:00', '2024-06-11 09:20:00', 'I like the three-bucket approach. Would it help to mark paid-customer bugs differently?'), +(7, 9, 6, 'Will the AMA be recorded?', '2025-07-01 13:05:00', NULL, 'I’m in a different timezone — will you post a recording for those who can’t attend?'), +(8, 10, 7, 'Recurring vs one-offs', '2025-02-01 09:20:00', NULL, 'When you snapshot P&L, do you separate recurring revenue from one-offs?'), +(9, 11, 8, 'Benchmark scripts share?', '2025-05-10 15:32:00', NULL, 'Can you share the scripts or command-line steps you used for the benchmarks?'), +(10, 12, 9, 'Thanks — mobile fix confirmed', '2025-05-20 10:05:00', NULL, 'Confirming the Android overlap was resolved — thanks for the quick turnaround.'), +(11, 13, 10, 'Docs snippet request', '2025-06-01 10:05:00', NULL, 'A short code example for the API export would help adoption — could you add one?'), +(12, 14, 11, 'QA tip on flakiness', '2025-06-05 11:05:00', NULL, 'Start by quarantining flaky tests and adding deterministic seeds where possible.'), +(13, 15, 12, 'Acceptance criteria suggestion', '2025-06-10 09:35:00', NULL, 'Nice examples — could you add an acceptance-criteria template?'), +(14, 16, 13, 'Try weekly buckets', '2025-07-01 09:35:00', NULL, 'Splitting retention into weekly cohorts often surfaces short-term changes faster.'), +(15, 17, 14, 'Dependency checklist', '2025-07-05 10:05:00', NULL, 'Add a dependency-review step to the security checklist (licenses + CVEs).'), +(16, 18, 15, 'Collect votes on features', '2025-07-08 11:10:00', NULL, 'Maybe add an upvote count so the team sees priority from users.'), +(17, 19, 16, 'Latency improved — nice', '2025-07-10 12:10:00', NULL, 'After the refactor we saw a 15% median latency improvement — well done!'), +(18, 20, 17, 'Connector SDK idea', '2025-07-14 10:20:00', NULL, 'A small SDK for partners would make integrations much easier to adopt.'), +(19, 21, 18, 'Coordinate marketing early', '2025-07-20 13:10:00', NULL, 'If this goes live during the holidays, loop in marketing three weeks ahead with assets.'), +(20, 22, 19, 'Great spotlight — thanks', '2025-07-25 16:10:00', NULL, 'Love seeing community contributions highlighted — keeps people engaged.'), +(21, 23, 20, 'Short habit advice', '2025-07-28 08:05:00', NULL, 'Short, daily practices scale — 10 minutes a day is a manageable start for 
most.'), +(22, 24, 21, 'Backup restore test', '2025-07-30 09:35:00', NULL, 'Don’t forget to test restores end-to-end on at least one backup monthly.'), +(23, 25, 22, 'Webhook example added', '2025-08-01 10:40:00', NULL, 'Thanks — the webhook transform example clarified a lot for our integration team.'), +(24, 26, 23, 'Shorter CTAs help', '2025-08-02 10:10:00', NULL, 'On mobile, concise CTAs have higher tap rates — consider 2–3 words max.'), +(25, 27, 24, 'Retro: add capacity row', '2025-08-03 09:05:00', NULL, 'Add engineer capacity to the sprint template so planning is realistic.'), +(26, 28, 25, 'Prioritize by impact', '2025-08-04 11:10:00', NULL, 'Triage bugs by customer impact (revenue + active users) to focus effort.'), +(27, 29, 26, 'Batching wins', '2025-08-05 11:45:00', NULL, 'Batching small jobs reduced overall queue pressure by 30% in our last rollout.'), +(28, 30, 27, 'Segment experiment cohorts', '2025-08-06 09:35:00', NULL, 'Recommend at least 3 cohorts to avoid noise.'), +(29, 31, 28, 'Accessibility note', '2025-08-07 10:45:00', NULL, 'When you publish the roadmap, add an accessibility row for each item to track compliance.'), +(30, 32, 29, 'Weekly metrics suggestion', '2025-08-08 08:15:00', NULL, 'Consider a short executive summary of the top 3 metrics so stakeholders can scan quickly.'), +(31, 33, 30, 'Export CSV please', '2025-08-09 09:05:00', NULL, 'CSV export would make sharing these charts much easier.'), +(32, 34, 31, 'Pagination question', '2025-08-09 11:10:00', NULL, 'Do you support cursor pagination for large datasets?'), +(33, 35, 32, 'Welcome tip', '2025-08-09 12:10:00', NULL, 'A pinned “how to get started” thread helps new users.'), +(34, 36, 33, 'Experiment tweak', '2025-08-09 13:20:00', NULL, 'Use longer test windows for retention signals to reduce noise.'), +(35, 37, 34, 'Docs style note', '2025-08-09 14:20:00', NULL, 'Keep release notes to 3–5 bullets for quick scanning.'), +(36, 38, 35, 'Bug bash logistics', '2025-08-09 15:20:00', NULL, 'Set teams and target areas, and publish a short how-to.'), +(37, 39, 36, 'Community shoutout', '2025-08-09 16:20:00', NULL, 'Thanks to volunteers for moderation — great work!'), +(38, 40, 37, 'Retention baseline', '2025-08-09 17:00:00', NULL, 'Use 30-day retention as a baseline and measure lift vs. 
it.'), +(39, 1, 3, 'Prof insight', '2024-09-02 09:00:00', NULL, 'Useful tips on balancing class prep and research.'), +(40, 2, 4, 'Paper plan review', '2025-01-16 10:20:00', NULL, 'I propose adding a bootstrap section to simulate robustness.'), +(41, 3, 5, 'Ops follow-up', '2024-06-12 09:30:00', NULL, 'Would you accept contributions on triage rules?'), +(42, 4, 7, 'Finance follow-up', '2025-02-01 10:00:00', NULL, 'I can share a P&L template for teams.'), +(43, 5, 8, 'Release question', '2025-04-03 10:30:00', NULL, 'Any breaking changes to note?'), +(44, 6, 9, 'Mobile confirm', '2025-05-21 11:20:00', NULL, 'iOS fix confirmed in next build.'), +(45, 7, 10, 'Docs sample added', '2025-06-02 10:05:00', NULL, 'Added sample snippet for the export endpoint.'), +(46, 8, 11, 'QA automation tip', '2025-06-06 12:00:00', NULL, 'Flaky tests: add consistent seeding.'), +(47, 9, 12, 'Perf configure', '2025-06-11 15:20:00', NULL, 'Benchmark harness simplified and scripts posted.'), +(48, 10, 13, 'Design tweak', '2025-07-02 09:00:00', NULL, 'Increased spacing on cards, looks cleaner.'), +(49, 11, 14, 'Security addendum', '2025-07-06 09:30:00', NULL, 'Add dependency checks to CI.'), +(50, 12, 15, 'Idea upvote', '2025-07-09 11:00:00', NULL, 'I upvote the experimental approach.'), +(51, 13, 16, 'Refactor check', '2025-07-11 13:00:00', NULL, 'Unit tests passed locally.'), +(52, 14, 17, 'Connector note', '2025-07-15 10:00:00', NULL, 'Connector docs include auth examples now.'), +(53, 15, 18, 'Holiday assets', '2025-07-21 12:15:00', NULL, 'Marketing assets are ready.'), +(54, 16, 19, 'Community thumbs', '2025-07-26 16:00:00', NULL, 'Nice community contributions — congrats!'), +(55, 17, 20, 'Habit reminder', '2025-07-29 08:00:00', NULL, 'Short habits compound over time.'), +(56, 18, 21, 'DB tip', '2025-07-31 09:10:00', NULL, 'Rotate backups monthly and test restores.'), +(57, 19, 22, 'Zapier sample', '2025-08-01 11:45:00', NULL, 'Webhook transform clarified. Thanks!'), +(58, 20, 23, 'Microcopy done', '2025-08-02 10:30:00', NULL, 'Finalized labels for the new UI.'), +(59, 21, 24, 'Sprint tweak', '2025-08-03 09:25:00', NULL, 'Added acceptance matrix.'), +(60, 22, 25, 'Priority dispatch', '2025-08-04 12:55:00', NULL, 'Assigned resources to top items.'), +(61, 23, 26, 'Queue resolved', '2025-08-05 14:20:00', NULL, 'Backlog reduced by 30%.'), +(62, 24, 27, 'Experiment notes', '2025-08-06 10:05:00', NULL, 'Updated experiment doc.'), +(63, 25, 28, 'Design sync 2', '2025-08-07 10:25:00', NULL, 'Sync completed; components updated.'), +(64, 26, 29, 'Metrics confirm', '2025-08-08 08:30:00', NULL, 'Checked alert thresholds.'), +(65, 27, 30, 'API sample 2', '2025-08-09 09:20:00', NULL, 'Cursor snippet added.'), +(66, 28, 31, 'Forum help', '2025-08-09 12:50:00', NULL, 'Guided new user to docs.'), +(67, 29, 32, 'Retention follow', '2025-08-09 14:10:00', NULL, 'Maybe segment by signup source.'), +(68, 30, 33, 'Metrics small', '2025-08-09 15:20:00', NULL, 'Include AOV in monthly report.'), +(69, 31, 34, 'Proj request', '2025-08-09 16:30:00', NULL, 'Add CSV export for projections.'), +(70, 32, 35, 'Pagination answer 2', '2025-08-09 17:40:00', NULL, 'Cursor recommended for large datasets.'), +(71, 33, 36, 'Forum welcome', '2025-08-09 18:00:00', NULL, 'Welcome to the community! 
Start with our guide.'), +(72, 34, 37, 'Experiment feedback', '2025-08-09 19:00:00', NULL, 'Consider alternative segmentations.'), +(73, 35, 38, 'Docs update', '2025-08-09 20:00:00', NULL, 'Updated release notes template.'), +(74, 36, 39, 'Bug bash', '2025-08-09 20:30:00', NULL, 'I will join the bug bash on Friday.'), +(75, 37, 40, 'Community thanks', '2025-08-09 21:00:00', NULL, 'Thanks for spotlighting volunteers.'), +(76, 1, 5, 'Ops QA note', '2024-06-12 11:00:00', NULL, 'Triage board looks good.'), +(77, 2, 1, 'Prof: design idea', '2025-05-04 08:00:00', NULL, 'Consider a rubric to evaluate case studies.'), +(78, 3, 6, 'AMA logistics', '2025-06-30 10:00:00', NULL, 'Will the AMA be recorded and posted?'), +(79, 4, 7, 'Finance note', '2025-02-02 08:15:00', NULL, 'Add a recurring revenue row to spreadsheet.'), +(80, 5, 8, 'Release praise', '2025-04-03 11:00:00', NULL, 'Nice incremental improvements.'), +(81, 6, 9, 'Mobile QA', '2025-05-21 11:00:00', NULL, 'Regression tests updated.'), +(82, 7, 10, 'Doc tweak', '2025-06-03 09:30:00', NULL, 'Clarified the CLI example.'), +(83, 8, 11, 'Design thumbs', '2025-06-07 10:10:00', NULL, 'Agree with updated tokens.'), +(84, 9, 12, 'Perf follow-up', '2025-06-11 16:00:00', NULL, 'Added profiling outputs.'), +(85, 10, 13, 'Mobile improvement', '2025-07-03 09:45:00', NULL, 'Resolved with CSS fix.'), +(86, 11, 14, 'Security note 2', '2025-07-07 11:00:00', NULL, 'Link added to checklist.'), +(87, 12, 15, 'Idea comment', '2025-07-09 12:00:00', NULL, 'Consider A/B testing variability.'), +(88, 13, 16, 'Refactor follow-up', '2025-07-11 14:00:00', NULL, 'Tests added for new queue.'), +(89, 14, 17, 'Connector example', '2025-07-15 11:15:00', NULL, 'OAuth example included.'), +(90, 15, 18, 'Holiday check', '2025-07-21 13:30:00', NULL, 'Assets QA passed.'), +(91, 16, 19, 'Community great', '2025-07-26 16:30:00', NULL, 'Congrats to contributors!'), +(92, 17, 20, 'Habit endorsement', '2025-07-29 08:30:00', NULL, 'Small daily habits add up.'), +(93, 18, 21, 'Backup confirm', '2025-07-31 10:00:00', NULL, 'Verified backups in staging.'), +(94, 19, 22, 'Zapier follow', '2025-08-01 11:45:00', NULL, 'Webhook transform clarified.'), +(95, 20, 23, 'Microcopy approve', '2025-08-02 10:15:00', NULL, 'Shortened label works well.'), +(96, 21, 24, 'Sprint approval', '2025-08-03 09:15:00', NULL, 'Template looks good.'), +(97, 22, 25, 'Priority ack', '2025-08-04 12:45:00', NULL, 'Escalated high-impact bugs.'), +(98, 23, 26, 'Queue note 2', '2025-08-05 14:00:00', NULL, 'Batching implemented.'), +(99, 24, 27, 'Experiment vote', '2025-08-06 09:50:00', NULL, 'I support the new CTA experiment.'), +(100, 25, 28, 'Design alignment', '2025-08-07 10:40:00', NULL, 'Tokens synchronized.'), +(101, 26, 29, 'Metrics ack', '2025-08-08 08:50:00', NULL, 'Added alert thresholds.'), +(102, 27, 30, 'API sample 3', '2025-08-09 09:30:00', NULL, 'Added more code examples.'), +(103, 28, 31, 'Forum help', '2025-08-09 12:50:00', NULL, 'Guided new user to docs.'), +(104, 29, 32, 'Retention adjust', '2025-08-09 14:20:00', NULL, 'Added secondary metric for retention.'), +(105, 30, 33, 'Monthly metric add', '2025-08-09 15:35:00', NULL, 'Year-over-year included.'), +(106, 31, 34, 'Proj export', '2025-08-09 16:45:00', NULL, 'CSV export endpoint added.'), +(107, 32, 35, 'Cursor example', '2025-08-09 17:50:00', NULL, 'Cursor pattern documented.'), +(108, 33, 36, 'Forum welcome 2', '2025-08-09 18:50:00', NULL, 'Pinned welcome message for newcomers.'), +(109, 34, 37, 'Experiment doc', '2025-08-09 19:50:00', NULL, 
'Extended experiment plan.'), +(110, 35, 38, 'Docs finalize', '2025-08-09 20:50:00', NULL, 'Style guide included.'), +(111, 36, 39, 'Bug bash idea 2', '2025-08-09 20:40:00', NULL, 'Set a leaderboard to gamify participation.'), +(112, 37, 40, 'Community thanks 2', '2025-08-09 21:10:00', NULL, 'Great recognition of volunteers.'), +(113, 38, 1, 'Designer Q', '2025-05-04 09:30:00', NULL, 'Do you use a design brief template?'), +(114, 39, 2, 'Habit Q', '2025-06-02 09:15:00', NULL, 'How do you keep motivated long-term?'), +(115, 40, 3, 'Research Q', '2024-09-02 11:00:00', NULL, 'Do you share progress publicly or keep internal notes?'), +(116, 5, 4, 'Paper logistics', '2025-01-17 10:00:00', NULL, 'Add a timeline for experiments and checkpoints.'), +(117, 6, 5, 'Ops note 2', '2024-06-13 09:30:00', NULL, 'Consider a triage rotation to reduce bus-factor.'), +(118, 7, 6, 'AMA follow-up', '2025-07-02 12:00:00', NULL, 'Where will recordings be hosted?'), +(119, 8, 7, 'P&L question', '2025-02-03 09:00:00', NULL, 'Do you include deferred revenue in snapshots?'), +(120, 9, 8, 'Release follow-up', '2025-04-03 11:30:00', NULL, 'Noted, thanks!'), +(121, 10, 9, 'Mobile confirm', '2025-05-21 11:20:00', NULL, 'Confirmed fix on Android 13'), +(122, 11, 10, 'Docs approve', '2025-06-02 10:20:00', NULL, 'Docs ready for staging.'), +(123, 12, 11, 'QA note 2', '2025-06-06 12:10:00', NULL, 'Added regression tests.'), +(124, 13, 12, 'Perf follow 2', '2025-06-11 16:30:00', NULL, 'Profiling data uploaded.'), +(125, 14, 13, 'Design create', '2025-07-03 09:50:00', NULL, 'Created new token set.'), +(126, 15, 14, 'Security follow 2', '2025-07-07 11:30:00', NULL, 'CI security scan added.'), +(127, 16, 15, 'Idea follow', '2025-07-09 12:20:00', NULL, 'A/B test scheduled.'), +(128, 17, 16, 'Refactor notes 2', '2025-07-11 14:20:00', NULL, 'Queue service regression fixed.'), +(129, 18, 17, 'Connector patch', '2025-07-15 11:45:00', NULL, 'OAuth fix applied.'), +(130, 19, 18, 'Holiday assets 2', '2025-07-21 13:50:00', NULL, 'Final images exported.'), +(131, 20, 19, 'Community update', '2025-07-26 16:50:00', NULL, 'Adding more highlights next month.'), +(132, 21, 20, 'Habit follow', '2025-07-29 08:40:00', NULL, 'Streak maintained for 40 days.'), +(133, 22, 21, 'Backup note 2', '2025-07-31 10:10:00', NULL, 'Verified backups in staging.'), +(134, 23, 22, 'Zapier follow', '2025-08-01 12:00:00', NULL, 'Webhook transform clarified.'), +(135, 24, 23, 'Microcopy done', '2025-08-02 10:30:00', NULL, 'Finalized labels.'), +(136, 25, 24, 'Sprint tweak', '2025-08-03 09:25:00', NULL, 'Added acceptance matrix.'), +(137, 26, 25, 'Priority dispatch', '2025-08-04 12:55:00', NULL, 'Assigned resources to top items.'), +(138, 27, 26, 'Queue resolved', '2025-08-05 14:20:00', NULL, 'Backlog reduced by 30%.'), +(139, 28, 27, 'Experiment notes', '2025-08-06 10:05:00', NULL, 'Updated experiment doc.'), +(140, 29, 28, 'Design sync 2', '2025-08-07 10:25:00', NULL, 'Sync completed.'), +(141, 30, 29, 'Metrics confirm', '2025-08-08 08:30:00', NULL, 'Checked alert thresholds.'), +(142, 31, 30, 'API sample 3', '2025-08-09 09:30:00', NULL, 'Added more code examples.'), +(143, 32, 31, 'Forum onboarding', '2025-08-09 13:10:00', NULL, 'Welcome steps added to forum.'), +(144, 33, 32, 'Retention adjust', '2025-08-09 14:20:00', NULL, 'Added secondary metric for retention.'), +(145, 34, 33, 'Monthly metric add', '2025-08-09 15:35:00', NULL, 'Year-over-year included.'), +(146, 35, 34, 'Proj export', '2025-08-09 16:45:00', NULL, 'CSV export endpoint added.'), +(147, 36, 35, 
'Cursor example', '2025-08-09 17:50:00', NULL, 'Cursor pattern documented.'), +(148, 37, 36, 'Forum welcome 3', '2025-08-09 18:50:00', NULL, 'Pinned welcome message.'), +(149, 38, 37, 'Experiment doc 2', '2025-08-09 19:50:00', NULL, 'Extended experiment plan.'), +(150, 39, 38, 'Docs finalize 2', '2025-08-09 20:50:00', NULL, 'Style guide included.'); + + + + + +-- ========== POST_REPLY_TAGS (150 rows) ========== +INSERT INTO post_reply_tags (post_reply_id, tag_id) +VALUES +(1, 14), +(2, 14), +(3, 5), +(4, 4), +(5, 4), +(6, 12), +(7, 21), +(8, 25), +(9, 25), +(10, 23), +(11, 22), +(12, 15), +(13, 28), +(14, 13), +(15, 16), +(16, 30), +(17, 27), +(18, 18), +(19, 29), +(20, 21), +(21, 5), +(22, 6), +(23, 18), +(24, 9), +(25, 28), +(26, 8), +(27, 27), +(28, 20), +(29, 14), +(30, 13), +(31, 13), +(32, 11), +(33, 21), +(34, 20), +(35, 22), +(36, 15), +(37, 21), +(38, 20), +(39, 4), +(40, 4), +(41, 12), +(42, 25), +(43, 2), +(44, 23), +(45, 22), +(46, 15), +(47, 25), +(48, 14), +(49, 16), +(50, 30), +(51, 27), +(52, 18), +(53, 29), +(54, 21), +(55, 5), +(56, 6), +(57, 18), +(58, 9), +(59, 28), +(60, 8), +(61, 27), +(62, 20), +(63, 14), +(64, 13), +(65, 11), +(66, 21), +(67, 20), +(68, 13), +(69, 13), +(70, 11), +(71, 21), +(72, 20), +(73, 22), +(74, 15), +(75, 21), +(76, 12), +(77, 14), +(78, 21), +(79, 25), +(80, 2), +(81, 23), +(82, 22), +(83, 14), +(84, 25), +(85, 23), +(86, 16), +(87, 30), +(88, 27), +(89, 18), +(90, 29), +(91, 21), +(92, 5), +(93, 6), +(94, 18), +(95, 9), +(96, 28), +(97, 8), +(98, 27), +(99, 20), +(100, 14), +(101, 13), +(102, 11), +(103, 21), +(104, 20), +(105, 13), +(106, 13), +(107, 11), +(108, 21), +(109, 20), +(110, 22), +(111, 15), +(112, 21), +(113, 14), +(114, 5), +(115, 4), +(116, 4), +(117, 12), +(118, 21), +(119, 25), +(120, 2), +(121, 23), +(122, 22), +(123, 15), +(124, 25), +(125, 14), +(126, 16), +(127, 30), +(128, 27), +(129, 18), +(130, 29), +(131, 21), +(132, 5), +(133, 6), +(134, 18), +(135, 9), +(136, 28), +(137, 8), +(138, 27), +(139, 20), +(140, 14), +(141, 13), +(142, 11), +(143, 21), +(144, 20), +(145, 13), +(146, 13), +(147, 11), +(148, 21), +(149, 20), +(150, 22); + + + + + +-- ========== USER_DATA (75 rows) ========== +INSERT INTO user_data (id, userId, location, deviceType, age, registeredAt, lastLogin, isActive) +VALUES +(1, 1, 'New York, NY', 'tablet', 23, '2025-03-01 09:00:00', '2025-08-09 09:00:00', 1), +(2, 2, 'Boston, MA', 'desktop', 52, '2020-09-01 08:00:00', '2025-08-08 18:00:00', 1), +(3, 3, 'Boston, MA', 'desktop', 22, '2019-06-01 10:00:00', '2025-08-09 20:30:00', 1), +(4, 4, 'Los Angeles, CA', 'desktop', 28, '2023-02-20 11:00:00', '2025-08-09 16:00:00', 1), +(5, 5, 'Seattle, WA', 'desktop', 29, '2024-04-10 09:00:00', '2025-08-07 14:00:00', 1), +(6, 6, 'Austin, TX', 'mobile', 31, '2024-05-05 10:00:00', '2025-08-05 09:00:00', 1), +(7, 7, 'Portland, OR', 'desktop', 27, '2024-06-01 11:00:00', '2025-08-06 11:00:00', 1), +(8, 8, 'Denver, CO', 'desktop', 34, '2024-07-10 08:30:00', '2025-08-04 10:00:00', 1), +(9, 9, 'Miami, FL', 'mobile', 25, '2024-08-15 09:00:00', '2025-08-03 12:00:00', 1), +(10, 10, 'Raleigh, NC', 'desktop', 29, '2024-09-01 09:30:00', '2025-08-02 10:00:00', 1), +(11, 11, 'Minneapolis, MN', 'desktop', 35, '2024-09-20 08:05:00', '2025-08-08 21:00:00', 1), +(12, 12, 'San Francisco,CA', 'desktop', 27, '2024-10-01 10:05:00', '2025-08-06 14:00:00', 1), +(13, 13, 'Los Angeles, CA', 'desktop', 36, '2024-11-01 08:30:00', '2025-08-05 10:45:00', 1), +(14, 14, 'Salt Lake City,UT', 'desktop', 30, '2025-01-05 13:00:00', '2025-07-30 
10:00:00', 1), +(15, 15, 'Orlando, FL', 'mobile', 28, '2025-01-10 09:50:00', '2025-08-02 11:20:00', 1), +(16, 16, 'Philadelphia,PA', 'desktop', 34, '2025-01-15 17:15:00', '2025-08-06 14:05:00', 1), +(17, 17, 'Boise, ID', 'mobile', 27, '2025-01-20 12:00:00', '2025-07-25 09:00:00', 1), +(18, 18, 'Palo Alto, CA', 'desktop', 41, '2020-05-10 08:00:00', '2025-08-07 19:40:00', 1), +(19, 19, 'Cincinnati, OH', 'mobile', 26, '2025-02-10 10:20:00', '2025-07-29 14:00:00', 1), +(20, 20, 'Columbus, OH', 'desktop', 32, '2025-02-20 09:00:00', '2025-08-04 16:10:00', 1), +(21, 21, 'Las Vegas, NV', 'mobile', 29, '2025-02-25 11:30:00', '2025-07-26 11:30:00', 1), +(22, 22, 'Toronto, ON', 'desktop', 33, '2025-03-01 08:45:00', '2025-08-01 10:55:00', 1), +(23, 23, 'Montreal,QC', 'mobile', 24, '2025-03-05 14:00:00', '2025-07-27 09:15:00', 1), +(24, 24, 'Vancouver,BC', 'desktop', 35, '2025-03-10 09:30:00', '2025-08-03 12:00:00', 1), +(25, 25, 'Birmingham,AL', 'mobile', 22, '2025-03-15 11:00:00', '2025-07-25 08:00:00', 1), +(26, 26, 'Oakland, CA', 'desktop', 30, '2025-03-20 13:40:00', '2025-08-05 09:15:00', 1), +(27, 27, 'Syracuse, NY', 'mobile', 29, '2025-03-25 10:20:00', '2025-07-31 07:40:00', 1), +(28, 28, 'Rochester, NY', 'desktop', 31, '2025-03-30 09:00:00', '2025-08-02 08:20:00', 1), +(29, 29, 'Hartford, CT', 'mobile', 36, '2025-04-04 16:00:00', '2025-08-06 09:50:00', 1), +(30, 30, 'Nashville, TN', 'desktop', 28, '2025-04-09 08:10:00', '2025-08-04 07:00:00', 1), +(31, 31, 'Cleveland, OH', 'mobile', 33, '2025-04-14 10:40:00', '2025-08-03 09:40:00', 1), +(32, 32, 'Richmond, VA', 'desktop', 38, '2025-04-19 09:05:00', '2025-08-02 21:00:00', 1), +(33, 33, 'Tucson, AZ', 'mobile', 27, '2025-04-24 11:10:00', '2025-07-30 12:30:00', 1), +(34, 34, 'Santa Fe, NM', 'desktop', 35, '2025-04-29 09:45:00', '2025-08-01 13:30:00', 1), +(35, 35, 'Reno, NV', 'mobile', 26, '2025-05-04 10:00:00', '2025-07-29 09:00:00', 1), +(36, 36, 'Burlington, VT', 'desktop', 31, '2025-05-09 09:10:00', '2025-08-08 16:00:00', 1), +(37, 37, 'Ithaca, NY', 'desktop', 52, '2020-09-01 08:00:00', '2025-08-07 19:00:00', 1), +(38, 38, 'Boston, MA', 'desktop', 22, '2019-06-01 10:00:00', '2025-08-09 20:30:00', 1), +(39, 39, 'Los Angeles, CA', 'desktop', 28, '2023-02-20 11:00:00', '2025-08-09 16:00:00', 1), +(40, 40, 'London, UK', 'desktop', 33, '2024-02-01 09:00:00', '2025-08-08 12:00:00', 1), +(41, 1, 'New York, NY', 'desktop', 23, '2025-04-01 09:00:00', '2025-07-01 09:00:00', 1), +(42, 2, 'Boston, MA', 'desktop', 52, '2019-10-01 08:00:00', '2025-06-01 18:00:00', 1), +(43, 3, 'Boston, MA', 'desktop', 22, '2020-01-01 10:00:00', '2025-05-01 19:00:00', 1), +(44, 4, 'Los Angeles, CA', 'desktop', 28, '2023-03-01 11:00:00', '2025-04-01 16:00:00', 1), +(45, 5, 'Seattle, WA', 'desktop', 29, '2024-05-01 09:00:00', '2025-03-01 14:00:00', 1), +(46, 6, 'Austin, TX', 'mobile', 31, '2024-06-01 10:00:00', '2025-02-01 09:00:00', 1), +(47, 7, 'Portland, OR', 'desktop', 27, '2024-07-01 11:00:00', '2025-01-01 11:00:00', 1), +(48, 8, 'Denver, CO', 'desktop', 34, '2024-08-01 08:30:00', '2024-12-01 10:00:00', 1), +(49, 9, 'Miami, FL', 'mobile', 25, '2024-09-01 09:00:00', '2024-11-01 12:00:00', 1), +(50, 10, 'Raleigh, NC', 'desktop', 29, '2024-10-01 09:30:00', '2024-10-01 10:00:00', 1), +(51, 11, 'Minneapolis, MN', 'desktop', 35, '2024-10-15 08:05:00', '2024-09-01 21:00:00', 1), +(52, 12, 'San Francisco, CA', 'desktop', 27, '2024-11-15 10:05:00', '2024-08-01 14:00:00', 1), +(53, 13, 'Los Angeles, CA', 'desktop', 36, '2024-12-01 08:30:00', '2024-07-01 10:45:00', 1), +(54, 14, 
'Salt Lake City,UT', 'desktop', 30, '2025-01-10 13:00:00', '2024-06-01 10:00:00', 1), +(55, 15, 'Orlando, FL', 'mobile', 28, '2025-01-15 09:50:00', '2024-05-01 11:20:00', 1), +(56, 16, 'Philadelphia,PA', 'desktop', 34, '2025-01-20 17:15:00', '2024-04-01 14:05:00', 1), +(57, 17, 'Boise, ID', 'mobile', 27, '2025-01-25 12:00:00', '2024-03-01 09:00:00', 1), +(58, 18, 'Palo Alto, CA', 'desktop', 41, '2019-05-10 08:00:00', '2024-02-01 19:40:00', 1), +(59, 19, 'Cincinnati, OH', 'mobile', 26, '2025-02-15 10:20:00', '2024-01-01 14:00:00', 1), +(60, 20, 'Columbus, OH', 'desktop', 32, '2025-02-25 09:00:00', '2023-12-01 16:10:00', 1), +(61, 21, 'Las Vegas, NV', 'mobile', 29, '2025-03-05 11:30:00', '2023-11-01 11:30:00', 1), +(62, 22, 'Toronto, ON', 'desktop', 33, '2025-03-10 08:45:00', '2023-10-01 10:55:00', 1), +(63, 23, 'Montreal,QC', 'mobile', 24, '2025-03-15 14:00:00', '2023-09-01 09:15:00', 1), +(64, 24, 'Vancouver,BC', 'desktop', 35, '2025-03-20 09:30:00', '2023-08-01 12:00:00', 1), +(65, 25, 'Birmingham,AL', 'mobile', 22, '2025-03-25 11:00:00', '2023-07-01 08:00:00', 1), +(66, 26, 'Oakland, CA', 'desktop', 30, '2025-03-30 13:40:00', '2023-06-01 09:15:00', 1), +(67, 27, 'Syracuse, NY', 'mobile', 29, '2025-04-04 10:20:00', '2023-05-01 07:40:00', 1), +(68, 28, 'Rochester, NY', 'desktop', 31, '2025-04-09 09:00:00', '2023-04-01 08:20:00', 1), +(69, 29, 'Hartford, CT', 'mobile', 36, '2025-04-14 16:00:00', '2023-03-01 09:50:00', 1), +(70, 30, 'Nashville, TN', 'desktop', 28, '2025-04-19 08:10:00', '2023-02-01 07:00:00', 1), +(71, 31, 'Cleveland, OH', 'mobile', 33, '2025-04-24 10:40:00', '2023-01-01 09:40:00', 1), +(72, 32, 'Richmond, VA', 'desktop', 38, '2025-04-29 09:05:00', '2022-12-01 21:00:00', 1), +(73, 33, 'Tucson, AZ', 'mobile', 27, '2025-05-04 11:10:00', '2022-11-01 12:30:00', 1), +(74, 34, 'Santa Fe, NM', 'desktop', 35, '2025-05-09 09:45:00', '2022-10-01 13:30:00', 1), +(75, 35, 'Reno, NV', 'mobile', 26, '2025-05-14 10:00:00', '2022-09-01 09:00:00', 1); + + + + + +-- ========== BUG_REPORTS (35 rows) ========== +INSERT INTO bug_reports (id, userId, title, metaTitle, description, dateReported, completed, priority) +VALUES +(1, 1, 'Portfolio image upload fails', NULL, 'PNG >5MB fails in uploader', '2025-06-10 11:20:00', 0, 'medium'), +(2, 2, 'Simulation reproducibility', NULL, 'Random seed mismatch across runs', '2025-01-20 09:30:00', 0, 'high'), +(3, 3, 'Spam in community forum', NULL, 'Automated spam posts bypass moderation', '2025-07-02 09:00:00', 0, 'high'), +(4, 4, 'CSV export timeouts', NULL, 'Large CSV exports time out at 30s', '2025-03-05 16:40:00', 0, 'medium'), +(5, 5, 'Mobile layout regression', NULL, 'Buttons overlap on Android 13', '2025-05-20 09:00:00', 0, 'high'), +(6, 6, 'DB deadlock on reports', NULL, 'Deadlocks in complex report queries', '2025-05-08 13:20:00', 0, 'critical'), +(7, 7, 'Notification duplication', NULL, 'Users receive duplicate notifications', '2025-04-28 09:00:00', 0, 'medium'), +(8, 8, 'Webhook retries too aggressive', NULL, 'Retries without exponential backoff', '2025-04-08 12:00:00', 0, 'high'), +(9, 9, 'Search indexing gap', NULL, 'New docs not appearing in search', '2025-04-20 16:00:00', 0, 'medium'), +(10, 10, 'iOS app cold start crash', NULL, 'Crash on cold start v1.2.0', '2025-05-02 07:50:00', 0, 'critical'), +(11, 11, 'Rate limit miscount', NULL, 'Rate limiter miscounts concurrent requests', '2025-04-11 09:30:00', 0, 'high'), +(12, 12, 'Missing translations', NULL, 'Some locales missing labels', '2025-03-12 08:10:00', 0, 'low'), +(13, 13, 'Memory leak in 
worker', NULL, 'Long-running worker grows memory', '2025-03-25 14:00:00', 0, 'critical'), +(14, 14, 'Image resize fails', NULL, 'Large images fail to resize', '2025-07-18 12:30:00', 0, 'medium'), +(15, 15, 'Permission elevation', NULL, 'Users see actions they should not', '2025-07-22 09:15:00', 0, 'critical'), +(16, 16, 'Queue backlog growth', NULL, 'Job queue length increasing', '2025-07-30 11:00:00', 0, 'high'), +(17, 17, 'Timezone display bug', NULL, 'Events showing UTC instead of local', '2025-05-26 08:05:00', 0, 'low'), +(18, 18, 'Schema migration failure', NULL, 'Migration fails on large tables', '2025-06-12 03:10:00', 0, 'critical'), +(19, 19, 'Broken download link', NULL, 'Downloads returning 403 for PDFs', '2025-08-02 09:30:00', 0, 'medium'), +(20, 20, 'Unexpected 500s after deploy', NULL, 'Release causing 500s on API', '2025-06-01 02:30:00', 1, 'critical'), +(21, 21, 'Attachments lost on save', NULL, 'Attachments not persistently saved', '2025-07-11 09:30:00', 0, 'medium'), +(22, 22, 'Webhook 500 errors', NULL, 'Third-party webhooks intermittently 500', '2025-07-06 12:00:00', 0, 'medium'), +(23, 23, 'Export pagination bug', NULL, 'Pagination broken for large exports', '2025-06-28 16:40:00', 0, 'medium'), +(24, 24, 'UI flicker on resize', NULL, 'Flicker when resizing browser window', '2025-08-01 08:45:00', 0, 'low'), +(25, 25, 'Login fails intermittently', NULL, 'Some users get login errors', '2025-07-20 13:00:00', 0, 'high'), +(26, 26, 'Performance regression', NULL, 'Search became slower after index change', '2025-07-26 10:00:00', 0, 'high'), +(27, 27, 'File uploads 413', NULL, 'Large uploads get 413 on certain proxies', '2025-05-18 10:30:00', 0, 'medium'), +(28, 28, 'Staging data mismatch', NULL, 'Staging uses old dataset', '2025-08-03 10:00:00', 0, 'low'), +(29, 29, 'Email template regression', NULL, 'Formatting broken in emails', '2025-08-04 13:30:00', 0, 'low'), +(30, 30, 'Policy link broken', NULL, 'Privacy policy link returns 404', '2025-06-06 09:45:00', 0, 'low'), +(31, 31, 'Duplicate uploads', NULL, 'Users can upload same file twice', '2025-06-16 09:30:00', 0, 'low'), +(32, 32, 'Search performance', NULL, 'Search slower after change', '2025-07-26 10:00:00', 0, 'high'), +(33, 33, 'Queue worker OOM', NULL, 'Worker runs out of memory on long jobs', '2025-07-30 11:30:00', 0, 'critical'), +(34, 34, 'Broken reset link', NULL, 'Password reset link invalid for some users', '2025-07-07 16:20:00', 0, 'low'), +(35, 35, 'Attachment encoding issue', NULL, 'Attachments corrupted on download', '2025-07-11 09:30:00', 0, 'medium'); + + + + + +-- ========== CONSISTENT_TASKS (35 rows) ========== +INSERT INTO consistent_tasks (id, userId, title, metaTitle, category, notes, createdAt) +VALUES +(1, 1, 'Daily Sketch', '20-min sketch', 'Personal', 'Daily creative sketch habit', '2025-05-01 08:00:00'), +(2, 2, 'Weekly Research Sync', NULL, 'Research', 'Sync with lab students', '2020-09-01 15:00:00'), +(3, 3, 'Daily Ops Triage', NULL, 'Ops', 'Triage new bug reports each morning', '2019-06-01 09:00:00'), +(4, 4, 'Daily P&L Snapshot', NULL, 'Finance', 'Quick morning revenue check', '2023-02-20 08:30:00'), +(5, 5, 'Weekly Team Retro', NULL, 'Meetings', 'Collect action items', '2024-03-01 09:00:00'), +(6, 6, 'DB Backups', NULL, 'Maintenance', 'Nightly DB backups', '2024-04-01 02:00:00'), +(7, 7, 'Code Review Hour', NULL, 'Engineering', 'Daily review window', '2024-04-15 14:00:00'), +(8, 8, 'Design Critique', NULL, 'Design', 'Weekly critique for mockups', '2024-05-01 11:00:00'), +(9, 9, 'Rotate Logs', 
NULL, 'DevOps', 'Rotate logs and archive', '2024-05-10 03:00:00'), +(10, 10,'Accessibility Audit', NULL, 'Design', 'Quarterly accessibility review', '2024-06-01 09:00:00'), +(11, 11,'Monthly Export', NULL, 'Analytics', 'Generate exports monthly', '2024-06-15 08:00:00'), +(12, 12,'Blog Post', NULL, 'Content', 'Write a helpful post per month', '2024-07-01 09:30:00'), +(13, 13,'Sprint Grooming', NULL, 'Product', 'Refine backlog for sprint', '2024-07-10 14:00:00'), +(14, 14,'Test Suite Maintenance', NULL, 'QA', 'Keep tests green', '2024-07-20 08:30:00'), +(15, 15,'Perf Benchmark', NULL, 'Performance', 'Monthly benchmarks', '2024-08-01 09:00:00'), +(16, 16,'Customer Interviews', NULL, 'Customer', 'Interview two customers per month', '2024-08-10 10:00:00'), +(17, 17,'Docs Review', NULL, 'Docs', 'Review docs before releases', '2024-09-01 09:00:00'), +(18, 18,'Release Checklist', NULL, 'Ops', 'Checklist for major releases', '2024-09-10 12:00:00'), +(19, 19,'Mentorship Hour', NULL, 'People', 'Weekly pairing with juniors', '2024-09-15 15:00:00'), +(20, 20,'Analytics Cleanup', NULL, 'Analytics', 'Archive old datasets quarterly', '2024-09-20 11:00:00'), +(21, 21,'Weekly Newsletter', NULL, 'Content', 'Compile highlights for team', '2024-09-25 09:00:00'), +(22, 22,'Run DB Migrations', NULL, 'Engineering', 'Schedule migration windows', '2024-10-01 02:00:00'), +(23, 23,'UX Study', NULL, 'Research', 'Monthly usability sessions', '2024-10-10 10:00:00'), +(24, 24,'Refactor Backlog', NULL, 'Engineering', 'Plan refactors into sprints', '2024-10-20 09:00:00'), +(25, 25,'Localization Check', NULL, 'Ops', 'Verify translations monthly', '2024-11-01 11:00:00'), +(26, 26,'Compliance Review', NULL, 'Legal', 'Quarterly compliance checks', '2024-11-15 13:00:00'), +(27, 27,'Feature Flagging', NULL, 'Product', 'Manage feature switches', '2024-12-01 09:30:00'), +(28, 28,'Idea Grooming', NULL, 'Product', 'Weekly idea triage', '2025-01-02 14:30:00'), +(29, 29,'Holiday Planning', NULL, 'People', 'Plan holiday schedule', '2025-01-05 16:00:00'), +(30, 30,'Retention Experiments', NULL, 'Growth', 'Run A/B retention experiments', '2025-01-10 10:00:00'), +(31, 31,'Ops Runbook Updates', NULL, 'DevOps', 'Keep runbooks current', '2025-01-15 09:00:00'), +(32, 32,'Community Events', NULL, 'Community', 'Plan meetups', '2025-01-20 18:00:00'), +(33, 33,'On-call Rotation', NULL, 'Support', 'Manage on-call schedule', '2025-01-25 08:00:00'), +(34, 34,'Design System Sync', NULL, 'Design', 'Sync tokens across apps', '2025-01-30 11:00:00'), +(35, 35,'Bug Triage Session', NULL, 'QA', 'Weekly bug triage', '2025-02-05 09:30:00'); + + + + + + +-- ========== DAILY_TASKS (75 rows) ========== +INSERT INTO daily_tasks (id, userId, title, metaTitle, status, completed, schedule, notes) +VALUES +(1, 1, 'Portfolio research', NULL, 'ON ICE', 1, '2025-05-05', 'Collect references and inspirations.'), +(2, 1, 'Sketch: 20-min prompt', NULL, 'ON ICE', 1, '2025-08-09', 'Daily creative habit.'), +(3, 1, 'Backlog: quick ideas', NULL, 'ON ICE', 0, '2025-08-10', 'Small ideas to store for later.'), +(4, 2, 'Research block (deep work)', NULL, 'ON ICE', 0, '2025-08-09', '2-hour deep work slot.'), +(5, 2, 'Course prep: lecture 3', NULL, 'ON ICE', 0, '2025-08-10', 'Slides and assignment due.'), +(6, 2, 'Simulation run', NULL, 'ON ICE', 0, '2025-08-11', 'Run Monte Carlo sims.'), +(7, 3, 'Triage new bug reports', NULL, 'ON ICE', 0, '2025-08-09', 'Prioritize enterprise customers.'), +(8, 3, 'Host office hours', NULL, 'ON ICE', 0, '2025-08-12', 'Monthly community event.'), 
+(9, 3, 'Verify nightly backups', NULL, 'ON ICE', 1, '2025-08-08', 'Confirm backup integrity.'), +(10, 4, 'Morning revenue snapshot', NULL, 'ON ICE', 1, '2025-08-09', 'Quick P&L check.'), +(11, 4, 'Assign subgoals to teams', NULL, 'ON ICE', 0, '2025-08-10', 'Notify project leads of deadlines.'), +(12, 4, 'Weekly cost review', NULL, 'ON ICE', 0, '2025-08-11', 'Check operating expenses.'), +(13, 5, 'Welcome email follow-up', NULL, 'ON ICE', 0, '2025-07-23', 'Follow up to earlier welcome email.'), +(14, 6, 'Run QA regression', NULL, 'ON ICE', 1, '2025-08-06', 'Full test run.'), +(15, 7, 'Design polish', NULL, 'ON ICE', 0, '2025-07-25', 'Fix spacing & tokens.'), +(16, 8, 'Perf benchmark', NULL, 'ON ICE', 0, '2025-07-27', 'Run benchmark suite.'), +(17, 9, 'Mobile smoke test', NULL, 'ON ICE', 1, '2025-07-31', 'Quick mobile checks.'), +(18, 10, 'Docs update', NULL, 'ON ICE', 0, '2025-07-28', 'Add missing examples.'), +(19, 11, 'Rotate logs', NULL, 'ON ICE', 1, '2025-08-02', 'Daily log rotation.'), +(20, 12, 'Plan next sprint', NULL, 'ON ICE', 0, '2025-07-30', 'Define sprint goal.'), +(21, 13, 'Order swag', NULL, 'ON ICE', 0, '2025-11-01', 'Finalize designs for swag.'), +(22, 14, 'Clean up tickets', NULL, 'ON ICE', 1, '2025-07-24', 'Close obsolete tickets.'), +(23, 15, 'Compliance check', NULL, 'ON ICE', 0, '2025-07-20', 'Gather logs for audit.'), +(24, 16, 'Analyze funnel', NULL, 'ON ICE', 0, '2025-07-27', 'Inspect conversion drops.'), +(25, 17, 'Follow-up customer', NULL, 'ON ICE', 0, '2025-07-29', 'Send next steps.'), +(26, 18, 'Read new paper', NULL, 'ON ICE', 0, '2025-08-07', 'Read one research paper.'), +(27, 19, 'Spec review', NULL, 'ON ICE', 0, '2025-07-30', 'Review spec draft.'), +(28, 20, 'Newsletter draft', NULL, 'ON ICE', 0, '2025-07-25', 'Draft weekly newsletter.'), +(29, 21, 'Index DB for search', NULL, 'ON ICE', 1, '2025-07-29', 'Ensure indexing complete.'), +(30, 22, 'Integration test', NULL, 'ON ICE', 1, '2025-07-29', 'All tests passing.'), +(31, 23, 'UX microcopy audit', NULL, 'ON ICE', 0, '2025-07-28', 'Audit CTAs.'), +(32, 24, 'Design sync', NULL, 'ON ICE', 1, '2025-07-25', 'Sync tokens.'), +(33, 25, 'Run compliance', NULL, 'ON ICE', 0, '2025-07-20', 'Run checks for compliance events.'), +(34, 26, 'Rotate API keys', NULL, 'ON ICE', 0, '2025-07-22', 'Rotate expired keys.'), +(35, 27, 'Refactor helper', NULL, 'ON ICE', 1, '2025-07-23', 'Simplify functions.'), +(36, 28, 'Prototype feature', NULL, 'ON ICE', 0, '2025-07-29', 'Prototype user flow.'), +(37, 29, 'High priority bug', NULL, 'ON ICE', 0, '2025-08-06', 'Escalated to oncall.'), +(38, 30, 'DB index task', NULL, 'ON ICE', 1, '2025-07-21', 'Add index on events.'), +(39, 31, 'Evening stretch', NULL, 'ON ICE', 1, '2025-08-05', '5 minutes.'), +(40, 32, 'Personal journal', NULL, 'ON ICE', 1, '2025-08-04', 'Daily reflections.'), +(41, 33, 'Read literature', NULL, 'ON ICE', 0, '2025-08-07', 'Read 1 paper.'), +(42, 34, 'Monitor dashboards', NULL, 'ON ICE', 1, '2025-08-06', 'Check for alerts.'), +(43, 35, 'Write specs', NULL, 'ACTIVE', 1, '2025-08-05', 'Draft v2 ready.'), +(44, 36, 'New user onboarding', NULL, 'ON ICE', 1, '2025-07-22', 'Email sent.'), +(45, 37, 'Lab meeting prep', NULL, 'ON ICE', 0, '2025-08-09', 'Prepare slides.'), +(46, 38, 'Monitor ops', NULL, 'ON ICE', 1, '2025-08-09', 'Check alerts and incidents.'), +(47, 39, 'Daily revenue check', NULL, 'ON ICE', 1, '2025-08-09', 'P&L quick check.'), +(48, 40, 'Forum moderation', NULL, 'ON ICE', 0, '2025-08-09', 'Review flagged posts.'), +(49, 5, 'Fix critical bug', NULL, 'ON 
ICE', 0, '2025-08-06', 'Assigned by QA'), +(50, 6, 'Smoke tests', NULL, 'ON ICE', 1, '2025-08-06', 'Pre-release check.'), +(51, 7, 'Spec notes', NULL, 'ON ICE', 0, '2025-07-30', 'Review draft.'), +(52, 8, 'Collect papers', NULL, 'ON ICE', 0, '2025-08-07', 'Collect relevant papers.'), +(53, 9, 'Rotate logs', NULL, 'ON ICE', 1, '2025-08-02', 'Rotate logs daily.'), +(54, 10, 'Add welcome step', NULL, 'ACTIVE', 1, '2025-07-22', 'Onboarding email sent.'), +(55, 11, 'Hotfix deploy', NULL, 'ON ICE', 0, '2025-08-06', 'Rollback ready.'), +(56, 12, 'Sprint planning', NULL, 'ON ICE', 0, '2025-07-25', 'Define sprint goal.'), +(57, 13, 'Order swag (design)', NULL, 'ON ICE', 0, '2025-11-01', 'Design finalized.'), +(58, 14, 'Cleanup backlog', NULL, 'ON ICE', 1, '2025-07-24', 'Close old issues.'), +(59, 15, 'Perf test run', NULL, 'ON ICE', 0, '2025-08-03', 'Run benchmarks.'), +(60, 16, 'Customer follow-ups', NULL, 'ON ICE', 0, '2025-07-29', 'Send next steps.'), +(61, 17, 'Update docs', NULL, 'ON ICE', 0, '2025-07-28', 'Add missing examples.'), +(62, 18, 'Run experiment', NULL, 'ON ICE', 1, '2025-07-30', 'Collect metrics daily.'), +(63, 19, 'Check mobile build', NULL, 'ON ICE', 1, '2025-07-31', 'Build passed.'), +(64, 20, 'Analyze funnel 2', NULL, 'ON ICE', 0, '2025-07-27', 'Conversion drop analysis.'), +(65, 21, 'Refactor small module', NULL, 'ON ICE', 0, '2025-07-26', 'Backend tidy-up.'), +(66, 22, 'Integration smoke', NULL, 'ON ICE', 1, '2025-07-29', 'All tests passing.'), +(67, 23, 'UX microcopy work', NULL, 'ON ICE', 0, '2025-07-28', 'Minor copy updates.'), +(68, 24, 'Design sync 2', NULL, 'ON ICE', 1, '2025-07-25', 'Sync with design system.'), +(69, 25, 'Compliance prep', NULL, 'ON ICE', 0, '2025-07-20', 'Gather logs.'), +(70, 26, 'Rotate keys 2', NULL, 'ON ICE', 0, '2025-07-22', 'Rotate API keys.'), +(71, 27, 'Refactor tests', NULL, 'ON ICE', 1, '2025-07-30', 'Simplify flaky tests.'), +(72, 28, 'Prototype signup', NULL, 'ON ICE', 0, '2025-07-31', 'Signup uplift prototype.'), +(73, 29, 'Stabilize RC', NULL, 'ON ICE', 0, '2025-07-21', 'Prepare release candidate.'), +(74, 30, 'Write first post', NULL, 'ON ICE', 0, '2025-07-23', 'Draft blog post.'), +(75, 31, 'Component cleanup', NULL, 'ON ICE', 0, '2025-07-25', 'Tidy components.'); + + + + + +-- ========== DAILY_TASKS_TAGS (75 rows) ========== +INSERT INTO daily_tasks_tags (daily_task_id, tag_id) +VALUES +(1, 14), +(2, 14), +(3, 5), +(4, 4), +(5, 21), +(6, 4), +(7, 12), +(8, 21), +(9, 6), +(10, 25), +(11, 25), +(12, 8), +(13, 21), +(14, 15), +(15, 14), +(16, 25), +(17, 23), +(18, 22), +(19, 6), +(20, 28), +(21, 29), +(22, 7), +(23, 26), +(24, 13), +(25, 17), +(26, 4), +(27, 2), +(28, 21), +(29, 10), +(30, 18), +(31, 9), +(32, 14), +(33, 26), +(34, 16), +(35, 27), +(36, 19), +(37, 8), +(38, 10), +(39, 5), +(40, 3), +(41, 4), +(42, 6), +(43, 2), +(44, 21), +(45, 4), +(46, 12), +(47, 25), +(48, 21), +(49, 1), +(50, 15), +(51, 2), +(52, 4), +(53, 12), +(54, 21), +(55, 1), +(56, 28), +(57, 29), +(58, 7), +(59, 25), +(60, 17), +(61, 22), +(62, 20), +(63, 24), +(64, 13), +(65, 11), +(66, 18), +(67, 9), +(68, 14), +(69, 26), +(70, 16), +(71, 27), +(72, 19), +(73, 8), +(74, 5), +(75, 11); + + + + + +-- ========== GOALS (35 rows) ========== +INSERT INTO goals (id, userId, title, notes, onIce, status, priority, createdAt, completedAt, completed, schedule) +VALUES +(1, 1, 'Portfolio redesign', 'Break into research > mockups > build > launch', 0, 'ACTIVE', 'high', '2025-05-01 09:00:00', NULL, 0, '2025-08-15'), +(2, 2, 'Statistical model paper', 'Robust estimators — 
simulations + paper', 0, 'ACTIVE', 'critical', '2024-10-01 09:00:00', NULL, 0, '2026-01-30'), +(3, 3, 'Platform uptime 99.95%', 'Improve CI/CD, monitoring, alerting', 0, 'ACTIVE', 'critical', '2019-06-01 08:00:00', NULL, 0, '2025-09-01'), +(4, 4, 'Increase company profit 5%', 'Coordinate experiments, retention, and cost cuts', 0, 'ACTIVE', 'critical', '2025-02-01 09:00:00', NULL, 0, '2025-12-31'), +(5, 5, 'Improve onboarding conversion', NULL, 0, 'ACTIVE', 'high', '2025-03-01 09:00:00', NULL, 0, '2025-06-01'), +(6, 6, 'Reduce API latency', NULL, 0, 'ACTIVE', 'critical', '2025-03-10 09:00:00', NULL, 0, '2025-05-01'), +(7, 7, 'Design system v2', NULL, 0, 'ACTIVE', 'high', '2025-03-20 11:00:00', NULL, 0, '2025-07-01'), +(8, 8, 'Analytics: funnel breakdown', NULL, 0, 'ACTIVE', 'medium', '2025-04-01 10:45:00', NULL, 0, '2025-06-30'), +(9, 9, 'Mobile QA ramp', NULL, 0, 'ACTIVE', 'high', '2025-04-10 09:00:00', NULL, 0, '2025-09-01'), +(10, 10, 'Frontend performance', NULL, 0, 'ACTIVE', 'high', '2025-04-20 09:00:00', NULL, 0, '2025-06-01'), +(11, 11, 'Security hardening', NULL, 0, 'ACTIVE', 'critical', '2025-04-30 09:30:00', NULL, 0, '2025-05-15'), +(12, 12, 'Docs completeness', NULL, 0, 'ACTIVE', 'medium', '2025-05-05 10:00:00', NULL, 0, '2025-06-20'), +(13, 13, 'Benchmark harness', NULL, 0, 'ACTIVE', 'high', '2025-05-12 09:00:00', NULL, 0, '2025-07-01'), +(14, 14, 'Bug backlog cleanup', NULL, 0, 'ACTIVE', 'medium', '2025-05-20 09:15:00', NULL, 0, '2025-06-05'), +(15, 15, 'Experiment: dashboard CTA', NULL, 0, 'ACTIVE', 'medium', '2025-05-25 10:00:00', NULL, 0, '2025-07-15'), +(16, 16, 'Integrations marketplace', NULL, 1, 'ON ICE', 'high', '2025-05-30 09:00:00', NULL, 0, NULL), +(17, 17, 'Refactor auth flow', NULL, 0, 'ACTIVE', 'high', '2025-06-01 10:30:00', NULL, 0, '2025-07-30'), +(18, 18, 'Customer success pilot', NULL, 0, 'ACTIVE', 'high', '2025-06-05 11:00:00', NULL, 0, '2025-08-01'), +(19, 19, 'Holiday release planning', NULL, 0, 'PLANNED', 'low', '2025-06-10 12:00:00', NULL, 0, '2025-11-15'), +(20, 20, 'Incident response improvements', NULL, 0, 'ACTIVE', 'critical', '2025-06-15 10:00:00', NULL, 0, '2025-07-01'), +(21, 21, 'Research ideas database', NULL, 0, 'ACTIVE', 'high', '2025-06-20 09:00:00', NULL, 0, NULL), +(22, 22, 'Automate deployments', NULL, 0, 'ACTIVE', 'high', '2025-06-25 09:00:00', NULL, 0, '2025-08-01'), +(23, 23, 'QA ramp for mobile', NULL, 0, 'ACTIVE', 'high', '2025-07-01 10:00:00', NULL, 0, '2025-09-01'), +(24, 24, 'Web performance focus', NULL, 0, 'ACTIVE', 'critical', '2025-07-05 09:30:00', NULL, 0, '2025-09-15'), +(25, 25, 'Compliance readiness', NULL, 0, 'PLANNED', 'critical', '2025-07-10 11:00:00', NULL, 0, '2025-12-01'), +(26, 26, 'Refactor logging', NULL, 0, 'ACTIVE', 'medium', '2025-07-12 09:00:00', NULL, 0, '2025-08-31'), +(27, 27, 'Idea harvest', NULL, 0, 'ACTIVE', 'low', '2025-07-15 14:00:00', NULL, 0, NULL), +(28, 28, 'Prototype feature A', NULL, 0, 'ACTIVE', 'low', '2025-07-16 09:00:00', NULL, 0, '2025-07-30'), +(29, 29, 'Stabilize release', NULL, 0, 'ACTIVE', 'high', '2025-07-20 09:00:00', NULL, 0, '2025-08-20'), +(30, 30, 'Personal blog launch', NULL, 0, 'ACTIVE', 'low', '2025-07-22 10:00:00', NULL, 0, '2025-08-10'), +(31, 31, 'Front-end mastery', NULL, 0, 'ACTIVE', 'medium', '2025-07-24 09:00:00', NULL, 0, '2025-10-01'), +(32, 32, 'Experiment tracking', NULL, 0, 'ACTIVE', 'high', '2025-07-26 09:00:00', NULL, 0, '2025-09-01'), +(33, 33, 'Secret scanning', NULL, 0, 'ACTIVE', 'critical', '2025-07-28 14:00:00', NULL, 0, '2025-08-15'), +(34, 34, 'UX microcopy 
baseline', NULL, 0, 'ACTIVE', 'low', '2025-07-30 09:00:00', NULL, 0, '2025-08-15'), +(35, 35, 'Partner onboarding docs', NULL, 0, 'ACTIVE', 'high', '2025-08-01 10:00:00', NULL, 0, '2025-09-15'); + + + + + +-- ========== GOALS_TAGS (35 rows) ========== +INSERT INTO goals_tags (goal_id, tag_id) +VALUES +(1, 14), +(2, 4), +(3, 6), +(4, 25), +(5, 21), +(6, 25), +(7, 14), +(8, 13), +(9, 23), +(10, 11), +(11, 16), +(12, 22), +(13, 25), +(14, 3), +(15, 20), +(16, 18), +(17, 7), +(18, 17), +(19, 29), +(20, 1), +(21, 4), +(22, 28), +(23, 8), +(24, 24), +(25, 26), +(26, 27), +(27, 30), +(28, 19), +(29, 2), +(30, 5), +(31, 11), +(32, 20), +(33, 16), +(34, 9), +(35, 18); + + + + + +-- ========== SUBGOALS (75 rows) ========== +INSERT INTO subgoals (id, goalsId, title, notes, status, createdAt, completedAt, completed, schedule) +VALUES +(1, 1, 'Collect 8 case studies', 'Select projects and write case studies', 'ACTIVE', '2025-05-03 10:00:00', NULL, 0, '2025-06-10'), +(2, 2, 'Write literature review', 'Collect related papers', 'ACTIVE', '2024-10-10 10:00:00', NULL, 0, '2025-02-01'), +(3, 4, 'A/B retention experiments', 'Run 3 experiments to increase retention', 'ACTIVE', '2025-02-10 09:00:00', NULL, 0, '2025-06-30'), +(4, 1, 'Design mockups mobile+web', NULL, 'ACTIVE', '2025-06-12 09:00:00', NULL, 0, '2025-07-05'), +(5, 1, 'Launch portfolio site', NULL, 'PLANNED', '2025-07-10 09:00:00', NULL, 0, '2025-08-15'), +(6, 2, 'Run simulations', NULL, 'ACTIVE', '2025-01-05 09:30:00', NULL, 0, '2025-04-01'), +(7, 2, 'Draft methods section', NULL, 'ACTIVE', '2025-03-01 09:00:00', NULL, 0, '2025-08-01'), +(8, 3, 'Add p95 latency alerting', NULL, 'ACTIVE', '2019-07-01 09:00:00', NULL, 0, '2025-03-15'), +(9, 3, 'Improve CI pipeline', NULL, 'ACTIVE', '2020-01-01 12:00:00', NULL, 0, '2025-06-01'), +(10, 3, 'Create runbook for incidents', NULL, 'ACTIVE', '2024-06-01 10:00:00', NULL, 0, '2025-07-01'), +(11, 4, 'Cut ops cost 2%', NULL, 'ACTIVE', '2025-03-01 09:00:00', NULL, 0, '2025-09-30'), +(12, 4, 'Reduce churn 1%', NULL, 'ACTIVE', '2025-02-15 09:00:00', NULL, 0, '2025-06-30'), +(13, 5, 'Add onboarding checklist', NULL, 'ACTIVE', '2025-03-02 09:00:00', NULL, 0, '2025-05-01'), +(14, 5, 'In-app walkthrough', NULL, 'ACTIVE', '2025-03-05 09:30:00', NULL, 0, '2025-04-15'), +(15, 6, 'Introduce Redis cache', NULL, 'ACTIVE', '2025-03-14 09:00:00', NULL, 0, '2025-04-01'), +(16, 6, 'Add query plan checks', NULL, 'ON ICE', '2025-03-20 08:00:00', NULL, 0, NULL), +(17, 7, 'Create component library', NULL, 'ACTIVE', '2025-03-22 11:00:00', NULL, 0, '2025-06-01'), +(18, 8, 'Add funnel viz', NULL, 'ACTIVE', '2025-03-12 09:00:00', NULL, 0, '2025-05-01'), +(19, 9, 'Increase mobile tests', NULL, 'ACTIVE', '2025-04-02 10:00:00', NULL, 0, '2025-06-01'), +(20, 10, 'Lazy-load charts', NULL, 'ACTIVE', '2025-04-03 09:20:00', NULL, 0, '2025-04-20'), +(21, 11, 'Secrets scan in CI', NULL, 'ACTIVE', '2025-04-11 10:00:00', NULL, 0, '2025-04-30'), +(22, 12, 'Bulk export sample', NULL, 'ACTIVE', '2025-04-21 12:00:00', NULL, 0, '2025-05-15'), +(23, 13, 'Benchmark harness v1', NULL, 'ACTIVE', '2025-04-30 09:00:00', NULL, 0, '2025-06-15'), +(24, 14, 'Close stale bugs', NULL, 'ACTIVE', '2025-05-06 09:00:00', '2025-06-06 12:00:00', 1, '2025-06-06'), +(25, 15, 'Create CTA variants', NULL, 'ACTIVE', '2025-05-13 10:00:00', NULL, 0, '2025-06-20'), +(26, 16, 'Alpha connector: Stripe', NULL, 'ON ICE', '2025-05-22 11:30:00', NULL, 0, NULL), +(27, 17, 'Migrate token store', NULL, 'ACTIVE', '2025-05-26 09:00:00', NULL, 0, '2025-06-10'), +(28, 18, 'Recruit pilot 
customers', NULL, 'ACTIVE', '2025-06-02 10:00:00', NULL, 0, '2025-07-01'), +(29, 19, 'Holiday campaign assets', NULL, 'PLANNED', '2025-06-11 12:00:00', NULL, 0, '2025-10-01'), +(30, 20, 'Run incident drills', NULL, 'ACTIVE', '2025-06-16 10:00:00', NULL, 0, '2025-07-01'), +(31, 21, 'ML literature review', NULL, 'ON ICE', '2025-06-22 09:00:00', NULL, 0, NULL), +(32, 22, 'Blue-green deploy test', NULL, 'ACTIVE', '2025-06-27 09:30:00', NULL, 0, '2025-07-15'), +(33, 23, 'Mobile benchmark tests', NULL, 'ACTIVE', '2025-07-03 10:00:00', NULL, 0, '2025-08-01'), +(34, 24, 'Split JS bundles', NULL, 'ACTIVE', '2025-07-07 09:00:00', NULL, 0, '2025-08-01'), +(35, 25, 'Map SOC2 controls', NULL, 'PLANNED', '2025-07-12 11:00:00', NULL, 0, '2025-09-01'), +(36, 26, 'Centralize logging', NULL, 'ACTIVE', '2025-07-14 09:00:00', NULL, 0, '2025-08-15'), +(37, 27, 'Collect refactor candidates', NULL, 'ACTIVE', '2025-07-18 14:30:00', NULL, 0, '2025-07-31'), +(38, 28, 'Harvest ideas from support', NULL, 'ACTIVE', '2025-07-19 10:00:00', NULL, 0, NULL), +(39, 29, 'Outline holiday offers', NULL, 'PLANNED', '2025-07-22 16:00:00', NULL, 0, '2025-10-01'), +(40, 30, 'Write blog draft #1', NULL, 'ACTIVE', '2025-07-24 10:30:00', NULL, 0, '2025-08-01'), +(41, 31, 'Component docs', NULL, 'ACTIVE', '2025-07-26 09:30:00', NULL, 0, '2025-08-15'), +(42, 32, 'Metric naming', NULL, 'ACTIVE', '2025-07-28 09:30:00', NULL, 0, '2025-08-30'), +(43, 33, 'Add pre-commit hook', NULL, 'ACTIVE', '2025-07-30 09:10:00', NULL, 0, '2025-08-05'), +(44, 34, 'CTA audit v2', NULL, 'ACTIVE', '2025-08-01 09:00:00', NULL, 0, '2025-08-12'), +(45, 35, 'Partner doc outline', NULL, 'ACTIVE', '2025-08-02 10:15:00', NULL, 0, '2025-08-20'), +(46, 36, 'Finalize tags API', NULL, 'ACTIVE', '2025-01-12 09:00:00', NULL, 0, '2025-02-15'), +(47, 37, 'Cache warming strategy', NULL, 'ACTIVE', '2025-01-18 09:30:00', NULL, 0, '2025-02-25'), +(48, 38, 'Experiment instrumentation', NULL, 'ACTIVE', '2025-02-06 10:00:00', NULL, 0, '2025-03-20'), +(49, 39, 'Flaky tests triage', NULL, 'ACTIVE', '2025-02-20 09:00:00', NULL, 0, '2025-03-30'), +(50, 40, 'Walkthrough script', NULL, 'ACTIVE', '2025-03-08 09:00:00', NULL, 0, '2025-04-10'), +(51, 1, 'Portfolio final review', NULL, 'ACTIVE', '2025-05-01 09:00:00', NULL, 0, '2025-08-10'), +(52, 2, 'Finalize simulation results', NULL, 'ACTIVE', '2025-05-10 09:00:00', NULL, 0, '2025-08-15'), +(53, 3, 'Run emergency drill', NULL, 'ACTIVE', '2025-05-15 09:00:00', NULL, 0, '2025-08-20'), +(54, 4, 'Monthly revenue report', NULL, 'ACTIVE', '2025-05-20 09:00:00', NULL, 0, '2025-08-31'), +(55, 5, 'New user checklist', NULL, 'ACTIVE', '2025-05-25 09:00:00', NULL, 0, '2025-09-01'), +(56, 6, 'DB optimization', NULL, 'ACTIVE', '2025-05-30 09:00:00', NULL, 0, '2025-09-10'), +(57, 7, 'Design tokens audit', NULL, 'ACTIVE', '2025-06-04 09:00:00', NULL, 0, '2025-09-15'), +(58, 8, 'Data hygiene', NULL, 'ACTIVE', '2025-06-09 09:00:00', NULL, 0, '2025-09-20'), +(59, 9, 'Mobile regression', NULL, 'ACTIVE', '2025-06-14 09:00:00', NULL, 0, '2025-09-25'), +(60, 10, 'Docs search index', NULL, 'ACTIVE', '2025-06-19 09:00:00', NULL, 0, '2025-09-30'), +(61, 11, 'QA tech debt', NULL, 'ACTIVE', '2025-06-24 09:00:00', NULL, 0, '2025-10-05'), +(62, 12, 'Sprint retrospective', NULL, 'ACTIVE', '2025-06-29 09:00:00', NULL, 0, '2025-10-10'), +(63, 13, 'Perf baseline run', NULL, 'ACTIVE', '2025-07-04 09:00:00', NULL, 0, '2025-10-15'), +(64, 14, 'Bug closure campaign', NULL, 'ACTIVE', '2025-07-09 09:00:00', NULL, 0, '2025-10-20'), +(65, 15, 'CTA experiment', NULL, 'ACTIVE', 
'2025-07-14 09:00:00', NULL, 0, '2025-10-25'), +(66, 16, 'Marketplace doc draft', NULL, 'ACTIVE', '2025-07-19 09:00:00', NULL, 0, '2025-11-01'), +(67, 17, 'Auth refactor follow-up', NULL, 'ACTIVE', '2025-07-24 09:00:00', NULL, 0, '2025-11-05'), +(68, 18, 'Customer check-ins', NULL, 'ACTIVE', '2025-07-29 09:00:00', NULL, 0, '2025-11-10'), +(69, 19, 'Holiday brief', NULL, 'ACTIVE', '2025-08-03 09:00:00', NULL, 0, '2025-11-15'), +(70, 20, 'Runbook test', NULL, 'ACTIVE', '2025-08-08 09:00:00', NULL, 0, '2025-11-20'), +(71, 21, 'Add idea to DB', NULL, 'ACTIVE', '2025-08-13 09:00:00', NULL, 0, '2025-11-25'), +(72, 22, 'Docs examples update', NULL, 'ACTIVE', '2025-08-18 09:00:00', NULL, 0, '2025-11-30'), +(73, 23, 'UX follow up', NULL, 'ACTIVE', '2025-08-23 09:00:00', NULL, 0, '2025-12-05'), +(74, 24, 'Design tokens final', NULL, 'ACTIVE', '2025-08-28 09:00:00', NULL, 0, '2025-12-10'), +(75, 25, 'Compliance doc prep', NULL, 'ACTIVE', '2025-09-02 09:00:00', NULL, 0, '2025-12-20'); + + + + + +SET FOREIGN_KEY_CHECKS = 1; \ No newline at end of file diff --git a/database-files/03_gflow_smalldata.sql b/database-files/03_gflow_smalldata.sql new file mode 100644 index 0000000000..bbed710c08 --- /dev/null +++ b/database-files/03_gflow_smalldata.sql @@ -0,0 +1,99 @@ +USE `global-GoalFlow`; +SET FOREIGN_KEY_CHECKS = 0; + + +-- 1) USERS +INSERT INTO users (id, firstName, middleName, lastName, phoneNumber, email, role, planType, manages) +VALUES + (1, 'Alice', 'B.', 'Cooper', '555-0101', 'alice@example.com', 'admin', 'premium', NULL), + (2, 'Bob', NULL, 'Dylan', '555-0202', 'bob@example.com', 'manager', 'standard', 1), + (3, 'Cara', 'M.', 'Evanston', '555-0303', 'cara@example.com', 'user', 'free', 2), + (4, 'Dan', NULL, 'Fisher', '555-0404', 'dan@example.com', 'user', 'free', 2), + (5, 'Eve', 'L.', 'Garcia', '555-0505', 'eve@example.com', 'user', 'standard', 1); + + +-- 2) TAGS +INSERT INTO tags (id, name, color) +VALUES + (1, 'Urgent', '#ff0000'), + (2, 'Feature', '#00ff00'), + (3, 'Bug', '#0000ff'), + (4, 'Research', '#ffff00'), + (5, 'Personal', '#ff00ff'); + + +-- 3) POSTS +INSERT INTO posts (id, authorId, title, metaTitle, createdAt, updatedAt, publishedAt, slug, content, tag) +VALUES + (1, 1, 'Welcome to GoalFlow', 'Intro to GoalFlow', '2025-07-01 09:00:00', '2025-07-01 10:00:00', '2025-07-01 10:00:00', 'welcome-to-goalflow', 'This is our first post!', 2), + (2, 2, 'New Feature: Tags', 'Using tags', '2025-07-05 14:30:00', NULL, NULL, 'feature-tags', 'Tags help you organize.', 2), + (3, 3, 'Bug Report Workflow', NULL, '2025-07-10 11:15:00', '2025-07-12 08:45:00','2025-07-12 08:45:00','bug-report-workflow', 'How to report bugs.', 3), + (4, 4, 'Daily Tasks Tips', NULL, '2025-07-15 16:00:00', NULL, NULL, 'daily-tasks-tips', 'Stay on track every day.', 4); + + +-- 4) POST_REPLY +INSERT INTO post_reply (id, userId, postId, title, createdAt, publishedAt, content, tag) +VALUES + (1, 3, 1, 'Thanks Alice!', '2025-07-01 11:00:00', '2025-07-01 11:00:00', 'Great intro!', 5), + (2, 4, 1, 'Excited!', '2025-07-01 11:10:00', NULL, 'Can’t wait to use it.', 1), + (3, 5, 2, 'Question', '2025-07-06 09:20:00', '2025-07-06 09:20:00', 'How do tags work?', 4), + (4, 1, 3, 'Re: Bug Workflow','2025-07-12 09:00:00','2025-07-12 09:00:00','Nice guide.', 3), + (5, 3, 4, 'Tip', '2025-07-15 17:00:00', NULL, 'I schedule mine nightly.', 4), + (6, 2, 4, 'Thanks Cara', '2025-07-15 17:30:00', '2025-07-15 17:30:00','Good tip!', 5); + + +-- 5) CONSISTENT_TASKS +INSERT INTO consistent_tasks (id, userId, title, metaTitle, slug, category, 
notes) +VALUES + (1, 1, 'Daily stand-up', 'Stand-up meeting','daily-standup', 'Meetings', '15-minute team check-in.'), + (2, 3, 'Backup database', NULL, 'backup-database', 'Maintenance','Backup at 2 AM daily.'), + (3, 5, 'Review roadmap', NULL, 'review-roadmap', 'Planning', 'Monthly goals review.'); + + +-- 6) DAILY_TASKS +INSERT INTO daily_tasks (id, userId, tagId, title, metaTitle, slug, status, completed, schedule, notes) +VALUES + (1, 2, 1, 'Fix critical bug', NULL, 'fix-critical-bug', 0, 0, '2025-08-06', 'Assigned by QA'), + (2, 3, 5, 'Meditate', NULL, 'meditate', 0, 1, '2025-08-05', '10 minutes'), + (3, 4, 2, 'Write specs', NULL, 'write-specs', 1, 1, '2025-08-05', 'Draft v1'), + (4, 5, 4, 'Literature review', NULL, 'literature-review', 0, 0, '2025-08-07', 'Collect papers'), + (5, 1, 3, 'Verify bug fix', NULL, 'verify-bug-fix', 0, 0, '2025-08-06', 'Test in staging'); + + +-- 7) GOALS +INSERT INTO goals (id, userId, tagId, title, notes, onIce, status, priority, createdAt, completedAt, completed, schedule) +VALUES + (1, 1, 2, 'Implement tagging', 'Allow custom tags', 0, 'ACTIVE', 1, '2025-06-20 12:00:00', NULL, 0, '2025-08-01'), + (2, 2, 3, 'Stabilize releases', 'Reduce hot-fixes', 0, 'ACTIVE', 2, '2025-05-15 08:30:00', NULL, 0, '2025-09-01'), + (3, 3, 4, 'Publish whitepaper', NULL, 1, 'ON ICE', 3, '2025-07-01 10:00:00', NULL, 0, NULL), + (4, 5, 5, 'Personal blog', 'Write first post', 0, 'ACTIVE', 4, '2025-07-10 15:45:00', NULL, 0, '2025-08-10'); + + +-- 8) SUBGOALS +INSERT INTO subgoals (id, goalsId, title, notes, status, createdAt, completedAt, completed, schedule) +VALUES + (1, 1, 'DB schema for tags', 'Add color field', 'ACTIVE', '2025-06-21 09:00:00', NULL, 0, '2025-06-25'), + (2, 1, 'UI for tag selector', NULL, 'ON ICE', '2025-06-26 14:00:00', NULL, 0, NULL), + (3, 2, 'Automate tests', NULL, 'ACTIVE', '2025-05-16 11:00:00', '2025-07-01 17:00:00', 1, '2025-06-30'), + (4, 4, 'Draft first post', NULL, 'ACTIVE', '2025-07-11 10:00:00', NULL, 0, '2025-07-15'); + + +-- 9) BUG_REPORTS +INSERT INTO bug_reports (id, userId, title, metaTitle, slug, description, dateReported, status, priority) +VALUES + (1, 3, 'Login fails', NULL, 'login-fails', 'Cannot login with valid creds', '2025-07-20 13:00:00', 0, 1), + (2, 4, 'UI glitch on mobile',NULL, 'ui-glitch-mobile', 'Buttons overlap on small screens','2025-07-22 09:15:00', 0, 2), + (3, 5, 'Slow report gen', NULL, 'slow-report-gen', 'Dashboard reports take >30s', '2025-07-25 16:40:00', 0, 3); + + +-- 10) USER_DATA +INSERT INTO user_data (id, userId, location, totalTime, deviceType, age, registeredAt, lastLogin, isActive, postCount) +VALUES + (1, 1, 'New York, NY', 1200, 'desktop', 29, '2025-01-05 08:00:00', '2025-08-04 18:30:00', 1, 2), + (2, 2, 'Chicago, IL', 800, 'mobile', 35, '2025-02-10 12:15:00', '2025-08-02 20:45:00', 1, 2), + (3, 3, 'Seattle, WA', 600, 'desktop', 27, '2025-03-15 14:20:00', '2025-08-03 09:10:00', 1, 3), + (4, 4, 'Austin, TX', 450, 'tablet', 31, '2025-04-01 10:05:00', '2025-08-01 11:00:00', 1, 1), + (5, 5, 'Boston, MA', 300, 'mobile', 22, '2025-05-20 16:45:00', '2025-07-30 17:25:00', 1, 1); + + +SET FOREIGN_KEY_CHECKS = 1; \ No newline at end of file diff --git a/database-files/ngo_db.sql b/database-files/ngo_db.sql deleted file mode 100644 index 7d797f9c70..0000000000 --- a/database-files/ngo_db.sql +++ /dev/null @@ -1,71 +0,0 @@ -DROP DATABASE IF EXISTS ngo_database; -CREATE DATABASE IF NOT EXISTS ngo_database; - -USE ngo_database; - - -CREATE TABLE IF NOT EXISTS WorldNGOs ( - NGO_ID INT AUTO_INCREMENT PRIMARY KEY, - Name 
VARCHAR(255) NOT NULL, - Country VARCHAR(100) NOT NULL, - Founding_Year INTEGER, - Focus_Area VARCHAR(100), - Website VARCHAR(255) -); - -CREATE TABLE IF NOT EXISTS Projects ( - Project_ID INT AUTO_INCREMENT PRIMARY KEY, - Project_Name VARCHAR(255) NOT NULL, - Focus_Area VARCHAR(100), - Budget DECIMAL(15, 2), - NGO_ID INT, - Start_Date DATE, - End_Date DATE, - FOREIGN KEY (NGO_ID) REFERENCES WorldNGOs(NGO_ID) -); - -CREATE TABLE IF NOT EXISTS Donors ( - Donor_ID INT AUTO_INCREMENT PRIMARY KEY, - Donor_Name VARCHAR(255) NOT NULL, - Donor_Type ENUM('Individual', 'Organization') NOT NULL, - Donation_Amount DECIMAL(15, 2), - NGO_ID INT, - FOREIGN KEY (NGO_ID) REFERENCES WorldNGOs(NGO_ID) -); - -INSERT INTO WorldNGOs (Name, Country, Founding_Year, Focus_Area, Website) -VALUES -('World Wildlife Fund', 'United States', 1961, 'Environmental Conservation', 'https://www.worldwildlife.org'), -('Doctors Without Borders', 'France', 1971, 'Medical Relief', 'https://www.msf.org'), -('Oxfam International', 'United Kingdom', 1995, 'Poverty and Inequality', 'https://www.oxfam.org'), -('Amnesty International', 'United Kingdom', 1961, 'Human Rights', 'https://www.amnesty.org'), -('Save the Children', 'United States', 1919, 'Child Welfare', 'https://www.savethechildren.org'), -('Greenpeace', 'Netherlands', 1971, 'Environmental Protection', 'https://www.greenpeace.org'), -('International Red Cross', 'Switzerland', 1863, 'Humanitarian Aid', 'https://www.icrc.org'), -('CARE International', 'Switzerland', 1945, 'Global Poverty', 'https://www.care-international.org'), -('Habitat for Humanity', 'United States', 1976, 'Affordable Housing', 'https://www.habitat.org'), -('Plan International', 'United Kingdom', 1937, 'Child Rights', 'https://plan-international.org'); - -INSERT INTO Projects (Project_Name, Focus_Area, Budget, NGO_ID, Start_Date, End_Date) -VALUES -('Save the Amazon', 'Environmental Conservation', 5000000.00, 1, '2022-01-01', '2024-12-31'), -('Emergency Medical Aid in Syria', 'Medical Relief', 3000000.00, 2, '2023-03-01', '2023-12-31'), -('Education for All', 'Poverty and Inequality', 2000000.00, 3, '2021-06-01', '2025-05-31'), -('Human Rights Advocacy in Asia', 'Human Rights', 1500000.00, 4, '2022-09-01', '2023-08-31'), -('Child Nutrition Program', 'Child Welfare', 2500000.00, 5, '2022-01-01', '2024-01-01'); - -INSERT INTO Donors (Donor_Name, Donor_Type, Donation_Amount, NGO_ID) -VALUES -('Bill & Melinda Gates Foundation', 'Organization', 10000000.00, 1), -('Elon Musk', 'Individual', 5000000.00, 2), -('Google.org', 'Organization', 2000000.00, 3), -('Open Society Foundations', 'Organization', 3000000.00, 4), -('Anonymous Philanthropist', 'Individual', 1000000.00, 5); - -CREATE TABLE model1_params ( - sequence_number INT, - beta_vals TEXT -); - -INSERT INTO model1_params (sequence_number, beta_vals) VALUES -(1, '[0.25, 0.45, 0.67]'); \ No newline at end of file diff --git a/ml-src/README.md b/ml-src/README.md deleted file mode 100644 index 27b02e7f9e..0000000000 --- a/ml-src/README.md +++ /dev/null @@ -1,5 +0,0 @@ -# ML Source Folder (optional) - -Any ML is optional in the project for CS 3200. - -Use this to store any Jupyter Notebooks or Python files related to the ML components of your project. diff --git a/sandbox.yaml b/sandbox.yaml index 5e0e61f8d8..24232d5131 100644 --- a/sandbox.yaml +++ b/sandbox.yaml @@ -31,7 +31,7 @@ services: - "./database-files:/docker-entrypoint-initdb.d/:ro" - "mysql_data:/var/lib/mysql" ports: - - 3201:3306 + - 3306:3306 volumes: mysql_data:
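
---

Note on verifying the seed data: both SQL seed files load with `SET FOREIGN_KEY_CHECKS = 0`, and MySQL does not re-validate existing rows when checks are turned back on, so references to ids that were never inserted (for example, a few subgoals point at goal ids above 35) load silently. Below is a minimal sanity-check sketch, assuming the table and column names used in the seed files above; with the updated port mapping in `sandbox.yaml`, the container database is reachable from the host at `127.0.0.1:3306` (e.g. `mysql -h 127.0.0.1 -P 3306 -u root -p`).

```sql
USE `global-GoalFlow`;

-- Expected row counts, per the section headers in the large seed file.
SELECT COUNT(*) AS goal_rows    FROM goals;     -- expect 35
SELECT COUNT(*) AS subgoal_rows FROM subgoals;  -- expect 75

-- Subgoals whose parent goal was never inserted; any rows returned
-- are dangling references worth cleaning up before the demo.
SELECT s.id, s.goalsId
FROM subgoals s
LEFT JOIN goals g ON g.id = s.goalsId
WHERE g.id IS NULL;

-- Same orphan check for the daily_tasks_tags bridge table.
SELECT dtt.daily_task_id, dtt.tag_id
FROM daily_tasks_tags dtt
LEFT JOIN tags t ON t.id = dtt.tag_id
WHERE t.id IS NULL;
```

The same LEFT JOIN pattern works for the other bridge tables (`goals_tags`, `post_reply_tags`) if more coverage is wanted.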