diff --git a/README.md b/README.md
index 94291428ad..4bc58f3c47 100644
--- a/README.md
+++ b/README.md
@@ -1,78 +1,87 @@
-# Fall 2024 CS 3200 Project Template Repository
+# COUPE: TRACE for Co-Ops
-This repo is a template for your semester project. It includes most of the infrastructure setup (containers) and sample code and data throughout. Explore it fully and ask questions.
+COUPE is a data-driven platform designed to revolutionize the co-op search process for Northeastern University students by offering peer-to-peer insights into company experiences. Our mission is to provide students with transparent, verified, and meaningful reviews on workplace culture, roles, and interview processes to help them find their ideal co-op match.
-## Prerequisites
+## Key Features
+- **Peer-Reviewed Insights**
+  Access authentic reviews about company culture, roles, and work-life balance written by fellow students.
+- **Application and Interview Guidance**
+  Gain detailed insights into application processes, interview questions, and preparation tips.
+- **Incentivized Contribution System**
+  Share your own co-op experiences to unlock access to additional content and earn platform rewards.
-- A GitHub Account
-- A terminal-based or GUI git client
-- VSCode with the Python Plugin
-- A distrobution of Python running on your laptop (Choco (for Windows), brew (for Macs), miniconda, Anaconda, etc).
+---
-## Current Project Components
+## Target Users
+### **Persona 1: Sebastian Studentson**
+A second-year CS student seeking guidance for his first co-op, overwhelmed by options and in need of peer insights to confidently navigate his search.
-Currently, there are three major components which will each run in their own Docker Containers:
+### **Persona 2: Riley Reviewer**
+A fourth-year marketing major who wants to share detailed feedback on their co-op experience to help others make informed decisions.
-- Streamlit App in the `./app` directory
-- Flask REST api in the `./api` directory
-- SQL files for your data model and data base in the `./database-files` directory
+### **Persona 3: Alex Admin**
+The platform administrator responsible for moderating reviews, resolving disputes, and maintaining the integrity of the platform.
-## Suggestion for Learning the Project Code Base
+### **Persona 4: Annalise Analyst**
+A data analyst evaluating user trends and engagement to drive improvements in platform features and user experience.
-If you are not familiar with web app development, this code base might be confusing. You will probably want two versions though:
-1. One version for you to explore, try things, break things, etc. We'll call this your **Personal Repo**
-1. One version of the repo that your team will share. We'll call this the **Team Repo**.
+---
+## Technology Stack
+- **Frontend:** Streamlit
+- **Middleware:** Python Flask
+- **Backend:** MySQL
+- **Containerization:** Docker
+- **Development Tools:** VSCode, DataGrip
-### Setting Up Your Personal Repo
+---
-1. In GitHub, click the **fork** button in the upper right corner of the repo screen.
-1. When prompted, give the new repo a unique name, perhaps including your last name and the word 'personal'.
-1. Once the fork has been created, clone YOUR forked version of the repo to your computer.
-1. Set up the `.env` file in the `api` folder based on the `.env.template` file.
-1. Start the docker containers.
+## Database Schema
+The database consists of multiple interconnected tables, including:
+- **User:** Stores user information, including roles (e.g., Admin, Analyst).
+- **Companies:** Lists companies with associated industries and locations.
+- **Role:** Details job roles, required skills, and associated companies.
+- **Reviews:** Houses peer reviews with fields for headings, content, and engagement metrics.
+- **Comments:** Allows threaded discussions on reviews.
+- **Badges:** Tracks user achievements and contributions.
-### Setting Up Your Team Repo
+For a complete SQL DDL script, refer to [schema.sql](#).
-Before you start: As a team, one person needs to assume the role of *Team Project Repo Owner*.
+---
-1. The Team Project Repo Owner needs to fork this template repo into their own GitHub account **and give the repo a name consistent with your project's name**. If you're worried that the repo is public, don't. Every team is doing a different project.
-1. In the newly forked team repo, the Team Project Repo Owner should go to the **Settings** tab, choose **Collaborators and Teams** on the left-side panel. Add each of your team members to the repository with Write access.
-1. Each of the other team members will receive an invitation to join. Obviously accept the invite.
-1. Once that process is complete, each team member, including the repo owner, should clone the Team's Repo to their local machines (in a different location than your Personal Project Repo).
+## REST API Endpoints
+Here is an example of the REST API matrix. For detailed documentation, refer to the `API_DOCS.md` in this repository.
-## Controlling the Containers
+| Resource | GET | POST | PUT | DELETE |
+|----------------------|-------------------------------------------|-------------------------------|---------------------|-------------------|
+| `/companiesWithReviews` | Return all companies with reviews. | N/A | N/A | N/A |
+| `/reviews` | Retrieve reviews based on filters. | Submit a new review. | Update an existing review. | Delete a review. |
+| `/admin/flaggedContent` | Retrieve flagged reviews for moderation. | N/A | Resolve flagged content. | Remove flagged content. |
-- `docker compose up -d` to start all the containers in the background
-- `docker compose down` to shutdown and delete the containers
-- `docker compose up db -d` only start the database container (replace db with the other services as needed)
-- `docker compose stop` to "turn off" the containers but not delete them.
+---
+## ✨ User Stories
+### Sebastian Studentson
+- As a co-op searcher, I want to find detailed insights about a company’s culture and values to ensure alignment with my work style.
-## Handling User Role Access and Control
+### Riley Reviewer
+- As a former co-op student, I want an easy-to-use feedback submission form to share structured and meaningful reviews.
-In most applications, when a user logs in, they assume a particular role. For instance, when one logs in to a stock price prediction app, they may be a single investor, a portfolio manager, or a corporate executive (of a publicly traded company). Each of those *roles* will likely present some similar features as well as some different features when compared to the other roles. So, how do you accomplish this in Streamlit? This is sometimes called Role-based Access Control, or **RBAC** for short.
+### Alex Admin
+- As an admin, I need to moderate flagged reviews efficiently to maintain the platform's integrity.
-The code in this project demonstrates how to implement a simple RBAC system in Streamlit but without actually using user authentication (usernames and passwords). The Streamlit pages from the original template repo are split up among 3 roles - Political Strategist, USAID Worker, and a System Administrator role (this is used for any sort of system tasks such as re-training ML model, etc.). It also demonstrates how to deploy an ML model.
+### Annalise Analyst
+- As an analyst, I need tools to identify gaps in content, such as underrepresented industries, and track user engagement.
-Wrapping your head around this will take a little time and exploration of this code base. Some highlights are below.
+---
-### Getting Started with the RBAC
-1. We need to turn off the standard panel of links on the left side of the Streamlit app. This is done through the `app/src/.streamlit/config.toml` file. So check that out. We are turning it off so we can control directly what links are shown.
-1. Then I created a new python module in `app/src/modules/nav.py`. When you look at the file, you will se that there are functions for basically each page of the application. The `st.sidebar.page_link(...)` adds a single link to the sidebar. We have a separate function for each page so that we can organize the links/pages by role.
-1. Next, check out the `app/src/Home.py` file. Notice that there are 3 buttons added to the page and when one is clicked, it redirects via `st.switch_page(...)` to that Roles Home page in `app/src/pages`. But before the redirect, I set a few different variables in the Streamlit `session_state` object to track role, first name of the user, and that the user is now authenticated.
-1. Notice near the top of `app/src/Home.py` and all other pages, there is a call to `SideBarLinks(...)` from the `app/src/nav.py` module. This is the function that will use the role set in `session_state` to determine what links to show the user in the sidebar.
-1. The pages are organized by Role. Pages that start with a `0` are related to the *Political Strategist* role. Pages that start with a `1` are related to the *USAID worker* role. And, pages that start with a `2` are related to The *System Administrator* role.
+## Wireframes
+Wireframes for primary features, including the company culture overview and feedback submission forms, can be found in the `wireframes` directory.
+---
-## Deploying An ML Model (Totally Optional for CS3200 Project)
-
-*Note*: This project only contains the infrastructure for a hypothetical ML model.
-
-1. Build, train, and test your ML model in a Jupyter Notebook.
-1. Once you're happy with the model's performance, convert your Jupyter Notebook code for the ML model to a pure python script. You can include the `training` and `testing` functionality as well as the `prediction` functionality. You may or may not need to include data cleaning, though.
-1. Check out the `api/backend/ml_models` module. In this folder, I've put a sample (read *fake*) ML model in `model01.py`. The `predict` function will be called by the Flask REST API to perform '*real-time*' prediction based on model parameter values that are stored in the database. **Important**: you would never want to hard code the model parameter weights directly in the prediction function. tl;dr - take some time to look over the code in `model01.py`.
-1. The prediction route for the REST API is in `api/backend/customers/customer_routes.py`. Basically, it accepts two URL parameters and passes them to the `prediction` function in the `ml_models` module. The `prediction` route/function packages up the value(s) it receives from the model's `predict` function and send its back to Streamlit as JSON.
-1. Back in streamlit, check out `app/src/pages/11_Prediction.py`. Here, I create two numeric input fields. When the button is pressed, it makes a request to the REST API URL `/c/prediction/.../...` function and passes the values from the two inputs as URL parameters. It gets back the results from the route and displays them. Nothing fancy here.
-
-
+## How to Run Locally
+1. Clone this repository:
+   ```bash
+   git clone https://github.com/zootsuitproductions/coupe.git
+   cd coupe
diff --git a/api/.env.template b/api/.env
similarity index 51%
rename from api/.env.template
rename to api/.env
index b24b99326f..66762c5f9f 100644
--- a/api/.env.template
+++ b/api/.env
@@ -2,5 +2,5 @@ SECRET_KEY=someCrazyS3cR3T!Key.!
 DB_USER=root
 DB_HOST=db
 DB_PORT=3306
-DB_NAME=northwind
-MYSQL_ROOT_PASSWORD=
+DB_NAME=CoopPlatform
+MYSQL_ROOT_PASSWORD=CoopPCrazyS
diff --git a/api/backend/companies/companies_routes.py b/api/backend/companies/companies_routes.py
new file mode 100644
index 0000000000..6c91fe9cae
--- /dev/null
+++ b/api/backend/companies/companies_routes.py
@@ -0,0 +1,166 @@
+########################################################
+# Companies blueprint of endpoints
+########################################################
+from flask import Blueprint
+from flask import request
+from flask import jsonify
+from flask import make_response
+from flask import current_app
+from backend.db_connection import db
+
+#------------------------------------------------------------
+# Create a new Blueprint object, which is a collection of
+# routes.
+companies = Blueprint('companies', __name__)
+
+
+#------------------------------------------------------------
+# Get all companies along with their industries, locations, and roles
+@companies.route('/companies', methods=['GET'])
+def get_companies():
+    try:
+        with db.get_db().cursor() as cursor:
+            cursor.execute('''
+                SELECT C.companyID, C.name, C.description, C.updatedAt,
+                       I.name, L.address, L.city, L.state_province, L.country,
+                       R.roleName, R.description, R.skillsRequired,
+                       L.locationID AS `Location ID`,
+                       I.industryID AS `Industry ID`,
+                       R.roleID AS `Role ID`
+                FROM Companies C
+                JOIN Location L ON C.companyID = L.companyID
+                JOIN CompanyIndustry CI ON CI.companyID = C.companyID
+                JOIN Industries I ON I.industryID = CI.industryID
+                JOIN Role R ON R.companyID = C.companyID
+                ORDER BY C.companyID, C.name ASC;
+            ''')
+            theData = cursor.fetchall()
+
+        the_response = make_response(jsonify(theData))
+        the_response.status_code = 200
+        return the_response
+
+    except Exception as e:
+        current_app.logger.error(f"Error fetching companies: {e}")
+        return {"error": "An error occurred while fetching companies"}, 500
+
+
+@companies.route('/companies', methods=['PUT'])
+def update_company():
+    current_app.logger.info('PUT /companies route')
+    try:
+        companies_info = request.json
+
+        if 'companyID' not in companies_info or 'Company Description' not in companies_info:
+            return {'error': 'Missing companyID or Company Description'}, 400
+
+        company_id = companies_info['companyID']
+        company_description = companies_info['Company Description']
+
+        query = '''
+            UPDATE Companies
+            SET description = %s
+            WHERE companyID = %s
+        '''
+
+        data = (company_description, company_id)
+
+        cursor = db.get_db().cursor()
+        cursor.execute(query, data)
+        db.get_db().commit()
+
+        return {'message': 'Company updated successfully'}, 200
+
+    except KeyError as e:
+        current_app.logger.error(f"Missing key in request JSON: {str(e)}")
+        return {'error': f'Missing key: {str(e)}'}, 400
+    except Exception as e:
+        current_app.logger.error(f"Error updating company: {str(e)}")
+        return {'error': 'An error occurred while updating companies'}, 500
+    finally:
+        if 'cursor' in locals() and cursor:
+            cursor.close()
+
+
+@companies.route('/companies/roles', methods=['PUT'])
+def update_roles():
+    current_app.logger.info('PUT /companies/roles route')
+    try:
+        companies_info = request.json
+
+        required = ('companyID', 'RoleID', 'Skills Required', 'Role Description')
+        if any(key not in companies_info for key in required):
+            return {'error': 'Missing companyID, RoleID, Skills Required, or Role Description'}, 400
+
+        company_id = companies_info['companyID']
+        role_id = companies_info['RoleID']
+        skill = companies_info['Skills Required']
+        role_description = companies_info['Role Description']
+
+        query = '''
+            UPDATE Role
+            SET description = %s,
+                skillsRequired = %s
+            WHERE companyID = %s AND roleID = %s;
+        '''
+
+        data = (role_description, skill, company_id, role_id)
+
+        cursor = db.get_db().cursor()
+        cursor.execute(query, data)
+        db.get_db().commit()
+
+        return {'message': 'Roles updated successfully'}, 200
+
+    except KeyError as e:
+        current_app.logger.error(f"Missing key in request JSON: {str(e)}")
+        return {'error': f'Missing key: {str(e)}'}, 400
+    except Exception as e:
+        current_app.logger.error(f"Error updating roles: {str(e)}")
+        return {'error': 'An error occurred while updating roles'}, 500
+    finally:
+        if 'cursor' in locals() and cursor:
+            cursor.close()
+
+# Count companies per industry
+@companies.route('/industries', methods=['GET'])
+def get_industries():
+    try:
+        with db.get_db().cursor() as cursor:
+            cursor.execute('''
+                SELECT count(C.companyID) AS NumCompany, I.name AS Industry
+                FROM Companies C
+                JOIN CompanyIndustry CI ON CI.companyID = C.companyID
+                JOIN Industries I ON I.industryID = CI.industryID
+                GROUP BY I.industryID
+                ORDER BY I.industryID
+            ''')
+            theData = cursor.fetchall()
+
+        the_response = make_response(jsonify(theData))
+        the_response.status_code = 200
+        return the_response
+
+    except Exception as e:
+        current_app.logger.error(f"Error fetching industries: {e}")
+        return {"error": "An error occurred while fetching industries"}, 500
+
+# Count reviews per industry
+@companies.route('/reviews', methods=['GET'])
+def get_reviews():
+    try:
+        with db.get_db().cursor() as cursor:
+            cursor.execute('''
+                SELECT count(R.reviewID) AS NumReviews, I.name AS Industry
+                FROM Companies C
+                JOIN CompanyIndustry CI ON CI.companyID = C.companyID
+                JOIN Industries I ON I.industryID = CI.industryID
+                JOIN Role RO ON RO.companyID = C.companyID
+                JOIN Reviews R ON RO.roleID = R.roleID
+                GROUP BY I.industryID
+                ORDER BY I.industryID
+            ''')
+            theData = cursor.fetchall()
+
+        the_response = make_response(jsonify(theData))
+        the_response.status_code = 200
+        return the_response
+
+    except Exception as e:
+        current_app.logger.error(f"Error fetching reviews: {e}")
+        return {"error": "An error occurred while fetching reviews"}, 500
+
diff --git a/api/backend/coop_searcher/searcher_routes.py b/api/backend/coop_searcher/searcher_routes.py
new file mode 100644
index 0000000000..3a3c8a1ec5
--- /dev/null
+++ b/api/backend/coop_searcher/searcher_routes.py
@@ -0,0 +1,234 @@
+########################################################
+# Co-op searcher blueprint of endpoints
+########################################################
+from flask import Blueprint
+from flask import request
+from flask import jsonify
+from flask import make_response
+from flask import current_app
+from backend.db_connection import db
+
+#------------------------------------------------------------
+# Create a new Blueprint object, which is a collection of
+# routes.
+searcher = Blueprint('coop_searcher', __name__)
+
+#------------------------------------------------------------
+# Get all companies that have at least one review
+@searcher.route('/companiesWithReviews', methods=['GET'])
+def get_companies_with_reviews():
+
+    query = '''
+        SELECT DISTINCT c.name AS CompanyName
+        FROM Companies c
+        JOIN Role r ON c.companyID = r.companyID
+        JOIN Reviews rv ON r.roleID = rv.roleID;
+    '''
+
+    # get the database connection, execute the query, and
+    # fetch the results as a Python Dictionary
+    cursor = db.get_db().cursor()
+    cursor.execute(query)
+    theData = cursor.fetchall()
+
+    response = make_response(jsonify(theData))
+    response.status_code = 200
+    return response
+
+# Get reviews for a specific company
+@searcher.route('/reviewsForCompany/<company_name>', methods=['GET'])
+def get_reviews_for_company(company_name):
+
+    # Use parameterized query to prevent SQL injection
+    query = '''
+        SELECT
+            rv.content,
+            rv.publishedAt,
+            rv.userID,
+            rv.reviewID,
+            rv.views,
+            rv.likes,
+            rv.heading,
+            r.roleName,
+            r.roleID,
+            rv.reviewType
+        FROM Reviews rv
+        JOIN Role r ON rv.roleID = r.roleID
+        JOIN Companies c ON r.companyID = c.companyID
+        WHERE c.name = %s
+    '''
+
+    # Get the database connection, execute the query, and
+    # fetch the results as a Python Dictionary
+    cursor = db.get_db().cursor()
+    cursor.execute(query, (company_name,))  # Passing company_name as a parameter
+    theData = cursor.fetchall()
+
+    # Format the results into a structured dictionary
+    results = [
+        {
+            "content": row["content"],
+            "heading": row["heading"],
+            "roleName": row["roleName"],
+            "reviewID": row["reviewID"],
+            "likes": row["likes"],
+            "views": row["views"],
+            "roleID": row["roleID"],
+            "publishedAt": row["publishedAt"],
+            "reviewType": row["reviewType"],
+            "userID": row["userID"]
+        }
+        for row in theData
+    ]
+
+    # Make the response
+    response = make_response(jsonify(results))
+    response.status_code = 200
+    return response
+
+
+# Get interview reports for a specific company
+@searcher.route('/interviewReportsForCompany/<company_name>', methods=['GET'])
+def get_interview_reports_for_company(company_name):
+
+    # Use parameterized query to prevent SQL injection
+    query = '''
+        SELECT rv.content
+        FROM Reviews rv
+        JOIN Role r ON rv.roleID = r.roleID
+        JOIN Companies c ON r.companyID = c.companyID
+        WHERE c.name = %s
+          AND rv.reviewType = 'InterviewReport'
+    '''
+
+    # get the database connection, execute the query, and
+    # fetch the results as a Python Dictionary
+    cursor = db.get_db().cursor()
+    cursor.execute(query, (company_name,))  # Passing company_name as a parameter
+    theData = cursor.fetchall()
+
+    # Make the response
+    response = make_response(jsonify(theData))
+    response.status_code = 200
+    return response
+
+
+# Get possible skills
+@searcher.route('/possibleSkills', methods=['GET'])
+def get_possible_skills():
+    # Query to get the skillsRequired for all roles
+    query = '''
+        SELECT r.skillsRequired
+        FROM Role r;
+    '''
+
+    # Get the database connection, execute the query, and fetch the results
+    cursor = db.get_db().cursor()
+    cursor.execute(query)
+    theData = cursor.fetchall()
+    response = make_response(jsonify(theData))
+    response.status_code = 200
+    return response
+
+# Get roles that require a specific skill
+@searcher.route('/rolesForSkill/<skill>', methods=['GET'])
+def get_roles_for_skill(skill):
+    try:
+        # Query to find roles that require the specified skill
+        query = '''
+            SELECT r.roleName, c.name AS companyName, r.skillsRequired
+            FROM Role r
+            JOIN Companies c ON r.companyID = c.companyID
+            WHERE r.skillsRequired LIKE %s;
+        '''
+
+        # Get the database connection, execute the query, and fetch the results
+        cursor = db.get_db().cursor()
+        cursor.execute(query, ('%' + skill + '%',))  # Using LIKE to match the skill within skillsRequired
+        theData = cursor.fetchall()
+
+        # If no roles are found, log a warning
+        if not theData:
+            current_app.logger.warning(f"No roles found requiring the skill: {skill}")
+
+        # Make the response
+        response = make_response(jsonify(theData))
+        response.status_code = 200
+        return response
+
+    except Exception as e:
+        # Log the error and return a 500 error response
+        current_app.logger.error(f"Error fetching roles for skill {skill}: {e}")
+        return make_response(jsonify({"error": "Database query failed"}), 500)
+
+
+# Increment likes for a specific review
+@searcher.route('/review/<int:review_id>/like', methods=['POST'])
+def like_review(review_id):
+    try:
+        # Query to increment the likes for the given review ID
+        query = '''
+            UPDATE Reviews
+            SET likes = likes + 1
+            WHERE reviewID = %s;
+        '''
+
+        # Get the database connection and execute the query
+        cursor = db.get_db().cursor()
+        cursor.execute(query, (review_id,))
+        db.get_db().commit()
+
+        # Check if any row was updated
+        if cursor.rowcount == 0:
+            return make_response(jsonify({"message": "Review not found"}), 404)
+
+        # Return a success response
+        return make_response(jsonify({"message": "Review liked successfully!"}), 200)
+
+    except Exception as e:
+        # Log the error and return a 500 error response
+        current_app.logger.error(f"Error liking review with ID {review_id}: {e}")
+        return make_response(jsonify({"error": "Database query failed"}), 500)
+
+
+# Flag a review
+@searcher.route('/review/<int:review_id>/flag', methods=['POST'])
+def flag_review(review_id):
+    try:
+        # Query to flag the review
+        query = '''
+            UPDATE Reviews
+            SET isFlagged = TRUE
+            WHERE reviewID = %s;
+        '''
+
+        # Get the database connection and execute the query
+        cursor = db.get_db().cursor()
+        cursor.execute(query, (review_id,))
+        db.get_db().commit()
+
+        # Check if any row was updated
+        if cursor.rowcount == 0:
+            return make_response(jsonify({"message": "Review not found"}), 404)
+
+        # Return a success response
+        return make_response(jsonify({"message": "Review flagged successfully!"}), 200)
+
+    except Exception as e:
+        # Log the error and return a 500 error response
+        current_app.logger.error(f"Error flagging review with ID {review_id}: {e}")
+        return make_response(jsonify({"error": "Database query failed"}), 500)
\ No newline at end of file
diff --git a/api/backend/customers/customer_routes.py b/api/backend/customers/customer_routes.py
deleted file mode 100644
index 4fda460220..0000000000
--- a/api/backend/customers/customer_routes.py
+++ /dev/null
@@ -1,83 +0,0 @@
-########################################################
-# Sample customers blueprint of endpoints
-# Remove this file if you are not using it in your project
-########################################################
-from flask import Blueprint
-from flask import request
-from flask import jsonify
-from flask import make_response
-from flask import current_app
-from backend.db_connection import db
-from backend.ml_models.model01 import predict
-
-#------------------------------------------------------------
-# Create a new Blueprint object, which is a collection of
-# routes.
-customers = Blueprint('customers', __name__)
-
-
-#------------------------------------------------------------
-# Get all customers from the system
-@customers.route('/customers', methods=['GET'])
-def get_customers():
-
-    cursor = db.get_db().cursor()
-    cursor.execute('''SELECT id, company, last_name,
-        first_name, job_title, business_phone FROM customers
-    ''')
-
-    theData = cursor.fetchall()
-
-    the_response = make_response(jsonify(theData))
-    the_response.status_code = 200
-    return the_response
-
-#------------------------------------------------------------
-# Update customer info for customer with particular userID
-# Notice the manner of constructing the query.
-@customers.route('/customers', methods=['PUT'])
-def update_customer():
-    current_app.logger.info('PUT /customers route')
-    cust_info = request.json
-    cust_id = cust_info['id']
-    first = cust_info['first_name']
-    last = cust_info['last_name']
-    company = cust_info['company']
-
-    query = 'UPDATE customers SET first_name = %s, last_name = %s, company = %s where id = %s'
-    data = (first, last, company, cust_id)
-    cursor = db.get_db().cursor()
-    r = cursor.execute(query, data)
-    db.get_db().commit()
-    return 'customer updated!'
-
-#------------------------------------------------------------
-# Get customer detail for customer with particular userID
-# Notice the manner of constructing the query.
-@customers.route('/customers/<userID>', methods=['GET'])
-def get_customer(userID):
-    current_app.logger.info('GET /customers/<userID> route')
-    cursor = db.get_db().cursor()
-    cursor.execute('SELECT id, first_name, last_name FROM customers WHERE id = {0}'.format(userID))
-
-    theData = cursor.fetchall()
-
-    the_response = make_response(jsonify(theData))
-    the_response.status_code = 200
-    return the_response
-
-#------------------------------------------------------------
-# Makes use of the very simple ML model in to predict a value
-# and returns it to the user
-@customers.route('/prediction/<var01>/<var02>', methods=['GET'])
-def predict_value(var01, var02):
-    current_app.logger.info(f'var01 = {var01}')
-    current_app.logger.info(f'var02 = {var02}')
-
-    returnVal = predict(var01, var02)
-    return_dict = {'result': returnVal}
-
-    the_response = make_response(jsonify(return_dict))
-    the_response.status_code = 200
-    the_response.mimetype = 'application/json'
-    return the_response
\ No newline at end of file
diff --git a/api/backend/feedback/feedback_routes.py b/api/backend/feedback/feedback_routes.py
new file mode 100644
index 0000000000..a2bfdd5ebf
--- /dev/null
+++ b/api/backend/feedback/feedback_routes.py
@@ -0,0 +1,76 @@
+########################################################
+# Feedback blueprint of endpoints
+########################################################
+from flask import Blueprint
+from flask import request
+from flask import jsonify
+from flask import make_response
+from flask import current_app
+from backend.db_connection import db
+
+#------------------------------------------------------------
+# Create a new Blueprint object, which is a collection of
+# routes.
+feedback = Blueprint('feedback', __name__)
+
+
+#------------------------------------------------------------
+# Get all feedback from the system
+@feedback.route('/feedback', methods=['GET'])
+def get_feedback():
+    try:
+        with db.get_db().cursor() as cursor:
+            cursor.execute('''
+                SELECT feedbackID, userID, timestamp, header, content, status
+                FROM Feedback;
+            ''')
+            theData = cursor.fetchall()
+
+        the_response = make_response(jsonify(theData))
+        the_response.status_code = 200
+        return the_response
+
+    except Exception as e:
+        current_app.logger.error(f"Error fetching feedback: {e}")
+        return {"error": "An error occurred while fetching feedback"}, 500
+
+
+@feedback.route('/feedback', methods=['PUT'])
+def update_status():
+    current_app.logger.info('PUT /feedback route')
+    try:
+        feedback_info = request.json
+
+        if 'feedbackID' not in feedback_info or 'status' not in feedback_info:
+            return {'error': 'Missing feedbackID or status'}, 400
+
+        status = feedback_info['status']
+        feedback_id = feedback_info['feedbackID']
+
+        query = '''
+            UPDATE Feedback
+            SET status = %s
+            WHERE feedbackID = %s;
+        '''
+
+        data = (status, feedback_id)
+
+        cursor = db.get_db().cursor()
+        cursor.execute(query, data)
+        db.get_db().commit()
+
+        return {'message': 'Feedback status updated successfully'}, 200
+
+    except KeyError as e:
+        current_app.logger.error(f"Missing key in request JSON: {str(e)}")
+        return {'error': f'Missing key: {str(e)}'}, 400
+    except Exception as e:
+        current_app.logger.error(f"Error updating feedback: {str(e)}")
+        return {'error': 'An error occurred while updating feedback status'}, 500
+    finally:
+        if 'cursor' in locals() and cursor:
+            cursor.close()
\ No newline at end of file
diff --git a/api/backend/ml_models/model01.py b/api/backend/ml_models/model01.py
deleted file mode 100644
index 368152fbab..0000000000
--- a/api/backend/ml_models/model01.py
+++ /dev/null
@@ -1,48 +0,0 @@
-"""
-model01.py is an example of how to access model parameter values that you are storing
-in the database and use them to make a prediction when a route associated with prediction is
-accessed.
-"""
-from backend.db_connection import db
-import numpy as np
-import logging
-
-
-def train():
-    """
-    You could have a function that performs training from scratch as well as testing (see below).
-    It could be activated from a route for an "administrator role" or something similar.
-    """
-    return 'Training the model'
-
-def test():
-    return 'Testing the model'
-
-def predict(var01, var02):
-    """
-    Retreives model parameters from the database and uses them for real-time prediction
-    """
-    # get a database cursor
-    cursor = db.get_db().cursor()
-    # get the model params from the database
-    query = 'SELECT beta_vals FROM model1_params ORDER BY sequence_number DESC LIMIT 1'
-    cursor.execute(query)
-    return_val = cursor.fetchone()
-
-    params = return_val['beta_vals']
-    logging.info(f'params = {params}')
-    logging.info(f'params datatype = {type(params)}')
-
-    # turn the values from the database into a numpy array
-    params_array = np.array(list(map(float, params[1:-1].split(','))))
-    logging.info(f'params array = {params_array}')
-    logging.info(f'params_array datatype = {type(params_array)}')
-
-    # turn the variables sent from the UI into a numpy array
-    input_array = np.array([1.0, float(var01), float(var02)])
-
-    # calculate the dot product (since this is a fake regression)
-    prediction = np.dot(params_array, input_array)
-
-    return prediction
-
diff --git a/api/backend/products/products_routes.py b/api/backend/products/products_routes.py
deleted file mode 100644
index a3e596d0d3..0000000000
--- a/api/backend/products/products_routes.py
+++ /dev/null
@@ -1,208 +0,0 @@
-########################################################
-# Sample customers blueprint of endpoints
-# Remove this file if you are not using it in your project
-########################################################
-
-from flask import Blueprint
-from flask import request
-from flask import jsonify
-from flask import make_response
-from flask import current_app
-from backend.db_connection import db
-
-#------------------------------------------------------------
-# Create a new Blueprint object, which is a collection of
-# routes.
-products = Blueprint('products', __name__)
-
-#------------------------------------------------------------
-# Get all the products from the database, package them up,
-# and return them to the client
-@products.route('/products', methods=['GET'])
-def get_products():
-    query = '''
-        SELECT id,
-            product_code,
-            product_name,
-            list_price,
-            category
-        FROM products
-    '''
-
-    # get a cursor object from the database
-    cursor = db.get_db().cursor()
-
-    # use cursor to query the database for a list of products
-    cursor.execute(query)
-
-    # fetch all the data from the cursor
-    # The cursor will return the data as a
-    # Python Dictionary
-    theData = cursor.fetchall()
-
-    # Create a HTTP Response object and add results of the query to it
-    # after "jasonify"-ing it.
-    response = make_response(jsonify(theData))
-    # set the proper HTTP Status code of 200 (meaning all good)
-    response.status_code = 200
-    # send the response back to the client
-    return response
-
-# ------------------------------------------------------------
-# get product information about a specific product
-# notice that the route takes <id> and then you see id
-# as a parameter to the function. This is one way to send
-# parameterized information into the route handler.
-@products.route('/product/<id>', methods=['GET'])
-def get_product_detail(id):
-
-    query = f'''SELECT id,
-            product_name,
-            description,
-            list_price,
-            category
-        FROM products
-        WHERE id = {str(id)}
-    '''
-
-    # logging the query for debugging purposes.
-    # The output will appear in the Docker logs output
-    # This line has nothing to do with actually executing the query...
-    # It is only for debugging purposes.
- current_app.logger.info(f'GET /product/ query={query}') - - # get the database connection, execute the query, and - # fetch the results as a Python Dictionary - cursor = db.get_db().cursor() - cursor.execute(query) - theData = cursor.fetchall() - - # Another example of logging for debugging purposes. - # You can see if the data you're getting back is what you expect. - current_app.logger.info(f'GET /product/ Result of query = {theData}') - - response = make_response(jsonify(theData)) - response.status_code = 200 - return response - -# ------------------------------------------------------------ -# Get the top 5 most expensive products from the database -@products.route('/mostExpensive') -def get_most_pop_products(): - - query = ''' - SELECT product_code, - product_name, - list_price, - reorder_level - FROM products - ORDER BY list_price DESC - LIMIT 5 - ''' - - # Same process as handler above - cursor = db.get_db().cursor() - cursor.execute(query) - theData = cursor.fetchall() - - response = make_response(jsonify(theData)) - response.status_code = 200 - return response - -# ------------------------------------------------------------ -# Route to get the 10 most expensive items from the -# database. -@products.route('/tenMostExpensive', methods=['GET']) -def get_10_most_expensive_products(): - - query = ''' - SELECT product_code, - product_name, - list_price, - reorder_level - FROM products - ORDER BY list_price DESC - LIMIT 10 - ''' - - # Same process as above - cursor = db.get_db().cursor() - cursor.execute(query) - theData = cursor.fetchall() - - response = make_response(jsonify(theData)) - response.status_code = 200 - return response - - -# ------------------------------------------------------------ -# This is a POST route to add a new product. -# Remember, we are using POST routes to create new entries -# in the database. 
-@products.route('/product', methods=['POST']) -def add_new_product(): - - # In a POST request, there is a - # collecting data from the request object - the_data = request.json - current_app.logger.info(the_data) - - #extracting the variable - name = the_data['product_name'] - description = the_data['product_description'] - price = the_data['product_price'] - category = the_data['product_category'] - - query = f''' - INSERT INTO products (product_name, - description, - category, - list_price) - VALUES ('{name}', '{description}', '{category}', {str(price)}) - ''' - # TODO: Make sure the version of the query above works properly - # Constructing the query - # query = 'insert into products (product_name, description, category, list_price) values ("' - # query += name + '", "' - # query += description + '", "' - # query += category + '", ' - # query += str(price) + ')' - current_app.logger.info(query) - - # executing and committing the insert statement - cursor = db.get_db().cursor() - cursor.execute(query) - db.get_db().commit() - - response = make_response("Successfully added product") - response.status_code = 200 - return response - -# ------------------------------------------------------------ -### Get all product categories -@products.route('/categories', methods = ['GET']) -def get_all_categories(): - query = ''' - SELECT DISTINCT category AS label, category as value - FROM products - WHERE category IS NOT NULL - ORDER BY category - ''' - - cursor = db.get_db().cursor() - cursor.execute(query) - theData = cursor.fetchall() - - response = make_response(jsonify(theData)) - response.status_code = 200 - return response - -# ------------------------------------------------------------ -# This is a stubbed route to update a product in the catalog -# The SQL query would be an UPDATE. 
-@products.route('/product', methods = ['PUT']) -def update_product(): - product_info = request.json - current_app.logger.info(product_info) - - return "Success" \ No newline at end of file
diff --git a/api/backend/rest_entry.py b/api/backend/rest_entry.py index d8d78502d9..f5e1e5e7d8 100644 --- a/api/backend/rest_entry.py +++ b/api/backend/rest_entry.py @@ -1,9 +1,11 @@ from flask import Flask from backend.db_connection import db -from backend.customers.customer_routes import customers -from backend.products.products_routes import products +from backend.reviews.reviews_routes import reviews from backend.simple.simple_routes import simple_routes +from backend.companies.companies_routes import companies +from backend.feedback.feedback_routes import feedback +from backend.coop_searcher.searcher_routes import searcher import os from dotenv import load_dotenv @@ -40,9 +42,10 @@ def create_app(): # and give a url prefix to each app.logger.info('current_app(): registering blueprints with Flask app object.') app.register_blueprint(simple_routes) -app.register_blueprint(customers, url_prefix='/c') -app.register_blueprint(products, url_prefix='/p') - +app.register_blueprint(companies, url_prefix='/c') +app.register_blueprint(feedback, url_prefix='/f') +app.register_blueprint(reviews, url_prefix='/r') +app.register_blueprint(searcher, url_prefix='/s') # Don't forget to return the app object return app
diff --git a/api/backend/reviews/reviews_routes.py b/api/backend/reviews/reviews_routes.py new file mode 100644 index 0000000000..93c973dff6 --- /dev/null +++ b/api/backend/reviews/reviews_routes.py @@ -0,0 +1,635 @@ +######################################################## +# Reviews blueprint: endpoints for creating, moderating, +# and browsing co-op reviews and their comments +######################################################## + +from flask import Blueprint +from flask import request +from flask import jsonify +from flask import make_response +from flask import
current_app +from backend.db_connection import db + +from datetime import datetime + +#------------------------------------------------------------ +# Create a new Blueprint object, which is a collection of +# routes. +reviews = Blueprint('reviews', __name__) + +@reviews.route('/reviews', methods=['GET']) +def get_reviews(): + try: + with db.get_db().cursor() as cursor: + cursor.execute(''' + SELECT * + FROM Reviews; + ''') + theData = cursor.fetchall() + + the_response = make_response(jsonify(theData)) + the_response.status_code = 200 + return the_response + + except Exception as e: + current_app.logger.error(f"Error fetching reviews: {e}") + return {"error": "An error occurred while fetching reviews"}, 500 + +# Return only the comments that have been flagged for moderation +@reviews.route('/comments', methods=['GET']) +def get_comments(): + try: + with db.get_db().cursor() as cursor: + cursor.execute(''' + SELECT * + FROM Comments + WHERE isFlagged = TRUE; + ''') + theData = cursor.fetchall() + + the_response = make_response(jsonify(theData)) + the_response.status_code = 200 + return the_response + + except Exception as e: + current_app.logger.error(f"Error fetching comments: {e}") + return {"error": "An error occurred while fetching comments"}, 500 + +@reviews.route('/flagged', methods=['GET']) +def get_flaggedreview(): + try: + with db.get_db().cursor() as cursor: + cursor.execute(''' + SELECT reviewID, reviewType, heading, content + FROM Reviews + WHERE isFlagged = TRUE; + ''') + theData = cursor.fetchall() + + the_response = make_response(jsonify(theData)) + the_response.status_code = 200 + return the_response + + except Exception as e: + current_app.logger.error(f"Error fetching flagged reviews: {e}") + return {"error": "An error occurred while fetching flagged reviews"}, 500 + +@reviews.route('/approveflagged', methods=['PUT']) +def approve_flaggedreview(): + current_app.logger.info('PUT /approveflagged route') + try: + review_info = request.json + + if 'reviewID' not in review_info or 'isFlagged' not in review_info: + return
{'error': 'Missing reviewID or isFlagged'}, 400 + + review_id = review_info['reviewID'] + if_flagged = review_info['isFlagged'] + + query = ''' + UPDATE Reviews + SET isFlagged = %s + WHERE reviewID = %s; + ''' + + data = (if_flagged, review_id) + + cursor = db.get_db().cursor() + cursor.execute(query, data) + db.get_db().commit() + + return {'message': 'Review status updated successfully'}, 200 + + except KeyError as e: + current_app.logger.error(f"Missing key in request JSON: {str(e)}") + return {'error': f'Missing key: {str(e)}'}, 400 + except Exception as e: + current_app.logger.error(f"Error updating review: {str(e)}") + return {'error': 'An error occurred while updating review status'}, 500 + finally: + if 'cursor' in locals() and cursor: + cursor.close() + +@reviews.route('/editflagged', methods=['PUT']) +def edit_flaggedreview(): + current_app.logger.info('PUT /editflagged route') + try: + review_info = request.json + + if 'reviewID' not in review_info or 'content' not in review_info: + return {'error': 'Missing reviewID or content'}, 400 + + review_id = review_info['reviewID'] + content = review_info['content'] + + query = ''' + UPDATE Reviews + SET isFlagged = FALSE, + content = %s + WHERE reviewID = %s; + ''' + + data = (content, review_id) + + cursor = db.get_db().cursor() + cursor.execute(query, data) + db.get_db().commit() + + return {'message': 'Review updated successfully'}, 200 + + except KeyError as e: + current_app.logger.error(f"Missing key in request JSON: {str(e)}") + return {'error': f'Missing key: {str(e)}'}, 400 + except Exception as e: + current_app.logger.error(f"Error updating review: {str(e)}") + return {'error': 'An error occurred while updating the review'}, 500 + finally: + if 'cursor' in locals() and cursor: + cursor.close() + +@reviews.route('/removeflagged', methods=['DELETE']) +def remove_flaggedreview(): + current_app.logger.info('DELETE /removeflagged route') + try: + review_info = request.json + + if 'reviewID'
not in review_info: + return {'error': 'Missing reviewID'}, 400 + + review_id = review_info['reviewID'] + + try: + review_id = int(review_id) + except ValueError: + return {'error': 'Invalid reviewID format'}, 400 + + query = ''' + DELETE FROM Reviews + WHERE reviewID = %s; + ''' + data = (review_id,) + + cursor = db.get_db().cursor() + cursor.execute(query, data) + db.get_db().commit() + + return {'message': 'Review deleted successfully'}, 200 + + except KeyError as e: + current_app.logger.error(f"Missing key in request JSON: {str(e)}") + return {'error': f'Missing key: {str(e)}'}, 400 + except Exception as e: + current_app.logger.error(f"Error deleting review: {str(e)}") + return {'error': 'An error occurred while deleting the review.'}, 500 + finally: + # Guard with locals() so a failure before the cursor is created + # does not raise a NameError here + if 'cursor' in locals() and cursor: + cursor.close() + + +@reviews.route('/submitReview', methods=['POST']) +def add_review(): + review_data = request.json + try: + # Get current timestamp for publishedAt + current_time = datetime.now().strftime('%Y-%m-%d %H:%M:%S') + + # Construct SQL query to insert the review + query = """ + INSERT INTO Reviews (userID, roleID, publishedAt, reviewType, heading, content, views, likes, isFlagged) + VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s) + """ + values = ( + review_data["userID"], + review_data["roleID"], + current_time, # Use the current timestamp for publishedAt + review_data["reviewType"], + review_data["heading"], + review_data["content"], + 0, # Default views + 0, # Default likes + False # Default isFlagged + ) + + # Execute the query + cursor = db.get_db().cursor() + cursor.execute(query, values) + db.get_db().commit() + + # Return success response + return jsonify({"message": "Review added successfully!"}), 201 + + except Exception as e: + # Log the full traceback via the app logger (avoids print and + # the unimported traceback module) + current_app.logger.error(f"Error adding review: {e}", exc_info=True) + return jsonify({"error": str(e)}), 500
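The `/submitReview` handler above passes user input through `%s` placeholders and a values tuple rather than formatting it into the SQL string. A minimal, self-contained sketch of why that matters, using Python's built-in `sqlite3` as a stand-in for the project's MySQL connection (SQLite binds with `?` instead of `%s`, and the table here is a toy, not the real schema):

```python
import sqlite3

# Toy stand-in for the Reviews table (not the project's MySQL schema)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Reviews (reviewID INTEGER PRIMARY KEY, heading TEXT, content TEXT)")

# A hostile heading that would escape the quotes if it were
# formatted directly into the SQL string
heading = "Great co-op'); DROP TABLE Reviews; --"

# Placeholder binding stores the payload as an inert string
conn.execute(
    "INSERT INTO Reviews (heading, content) VALUES (?, ?)",
    (heading, "Solid mentorship and pay."),
)
conn.commit()

# The table survives and the payload comes back verbatim
print(conn.execute("SELECT heading FROM Reviews").fetchone()[0])
```

With binding, the driver is responsible for escaping; the same pattern with `%s` placeholders is what PyMySQL-style cursors expect.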
+@reviews.route('/addComment', methods=['POST']) +def add_comment(): + comment_data = request.json + try: + # Get current timestamp for createdAt + current_time = datetime.now().strftime('%Y-%m-%d %H:%M:%S') + + # Construct SQL query to insert the comment + query = """ + INSERT INTO Comments (reviewID, userID, content, createdAt, likes) + VALUES (%s, %s, %s, %s, %s) + """ + values = ( + comment_data["reviewID"], # Review ID to link the comment to + comment_data["userID"], # User ID of the commenter + comment_data["content"], # The comment text + current_time, # Use the current timestamp for createdAt + 0 # Default likes to 0 + ) + + # Execute the query + cursor = db.get_db().cursor() + cursor.execute(query, values) + db.get_db().commit() + + # Return success response + return jsonify({"message": "Comment added successfully!"}), 201 + + except Exception as e: + # Log the full traceback via the app logger instead of print + current_app.logger.error(f"Error adding comment: {e}", exc_info=True) + return jsonify({"error": str(e)}), 500 + +# ------------------------------------------------------------ +# Get all reviews written by a specific user. +# Notice that the route takes <id> and then you see id +# as a parameter to the function. This is one way to send +# parameterized information into the route handler.
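The comment block above describes passing parameterized information through the URL. In Flask this is typically written with a route converter such as `<int:id>`, which both validates the path segment and hands the handler a real `int`. A tiny hypothetical app (separate from the project's blueprint, assuming Flask is installed) demonstrating the behavior:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# <int:user_id> matches only integer path segments and converts them
@app.route('/reviewsByUser/<int:user_id>')
def reviews_by_user(user_id):
    return jsonify({"userID": user_id, "type": type(user_id).__name__})

# Exercise the route with Flask's built-in test client
client = app.test_client()
print(client.get('/reviewsByUser/7').get_json())
print(client.get('/reviewsByUser/abc').status_code)  # 404: the converter rejects non-integers
```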
+@reviews.route('/reviewsByUser/<int:id>', methods=['GET']) +def get_reviews_by_user(id): + + # Use a parameterized query (%s) rather than an f-string so the + # user-supplied id cannot be injected into the SQL + query = ''' + SELECT + r.reviewID, + r.roleID, + r.createdAt, + r.updatedAt, + r.publishedAt, + r.reviewType, + r.heading, + r.content, + r.views, + r.likes, + r.isFlagged + FROM + Reviews r + WHERE + r.userID = %s; + ''' + + # get the database connection, execute the query, and + # fetch the results as a Python Dictionary + cursor = db.get_db().cursor() + cursor.execute(query, (id,)) + theData = cursor.fetchall() + + response = make_response(jsonify(theData)) + response.status_code = 200 + return response + +@reviews.route('/roleDetails/<int:role_id>', methods=['GET']) +def get_role_details(role_id): + query = ''' + SELECT + r.roleName, + c.name + FROM + Role r + JOIN + Companies c ON r.companyID = c.companyID + WHERE + r.roleID = %s; + ''' + + # Get the database connection, execute the query, and fetch the results as a Python Dictionary + cursor = db.get_db().cursor() + cursor.execute(query, (role_id,)) + role_data = cursor.fetchone() # Assuming the result is a single row + + # If no result is found + if not role_data: + return jsonify({'error': 'Role not found'}), 404 + + response = { + 'roleName': role_data['roleName'], + 'companyName': role_data['name'] + } + + return make_response(jsonify(response), 200) + +@reviews.route('/companies', methods=['GET']) +def get_companies(): + query = ''' + SELECT + c.companyID, + c.name AS company_name, + c.description AS company_description, + c.createdAt AS company_createdAt, + c.updatedAt AS company_updatedAt + FROM + Companies c + ''' + # Execute the query + cursor = db.get_db().cursor() + cursor.execute(query) + theData = cursor.fetchall() + + response = make_response(jsonify(theData)) + response.status_code = 200 + return response + +@reviews.route('/rolesByCompany/<int:company_id>', methods=['GET']) +def get_roles_by_company(company_id): + query = f''' + SELECT + r.roleID, + r.roleName, + r.description AS role_description, + r.skillsRequired, + c.name AS
company_name, + c.description AS company_description, + c.createdAt AS company_createdAt, + c.updatedAt AS company_updatedAt + FROM + Role r + JOIN + Companies c ON r.companyID = c.companyID + WHERE + r.companyID = %s; + ''' + + # Get the database connection, execute the parameterized query, and fetch the results as a Python Dictionary + cursor = db.get_db().cursor() + cursor.execute(query, (company_id,)) + theData = cursor.fetchall() + + # Create the response + response = make_response(jsonify(theData)) + response.status_code = 200 + return response + + +@reviews.route('/commentsByReview/<int:reviewID>', methods=['GET']) +def get_comments_by_review(reviewID): + query = ''' + SELECT + c.commentID, + c.userID, + c.parentCommentID, + c.createdAt, + c.content, + c.likes + FROM + Comments c + WHERE + c.reviewID = %s + ORDER BY + c.createdAt ASC; + ''' + + # Get the database connection, execute the query, and + # fetch the results as a Python Dictionary + cursor = db.get_db().cursor() + cursor.execute(query, (reviewID,)) + theData = cursor.fetchall() + + response = make_response(jsonify(theData)) + response.status_code = 200 + return response + +@reviews.route('/deleteReview/<int:review_id>', methods=['DELETE']) +def delete_review(review_id): + try: + # Define the parameterized delete query + query = ''' + DELETE FROM Reviews + WHERE reviewID = %s; + ''' + + # Get the database connection and execute the query + cursor = db.get_db().cursor() + cursor.execute(query, (review_id,)) + db.get_db().commit() + + # Check if any row was affected (deleted) + if cursor.rowcount > 0: + response = make_response(jsonify({"message": "Review deleted successfully"})) + response.status_code = 200 + else: + response = make_response(jsonify({"message": "Review not found"})) + response.status_code = 404 + except Exception as e: + # Log via the app logger (a bare `logger` is not defined in this module) + current_app.logger.error(f"Error deleting review: {e}") + response = make_response(jsonify({"message": "Internal Server Error", "error": str(e)})) + response.status_code = 500 + +
return response + +@reviews.route('/updateReview', methods=['PUT']) +def update_review(): + data = request.get_json() + + reviewID = data.get('reviewID') + heading = data.get('heading') + content = data.get('content') + + current_app.logger.info(f"Updating review {reviewID} with new content") + + query = ''' + UPDATE Reviews + SET heading = %s, content = %s + WHERE reviewID = %s; + ''' + + cursor = db.get_db().cursor() + cursor.execute(query, (heading, content, reviewID)) + db.get_db().commit() + + response = make_response(jsonify({"message": "Review updated successfully"})) + response.status_code = 200 + return response + + +#------------------------------------------------------------ +# Get all the reviews from the database, package them up, +# and return them to the client +@reviews.route('/products', methods=['GET']) +def get_products(): + query = ''' + SELECT + r.reviewID, + r.roleID, + r.createdAt, + r.updatedAt, + r.publishedAt, + r.reviewType, + r.heading, + r.content, + r.views, + r.likes, + r.isFlagged + FROM + Reviews r; + ''' + + # get a cursor object from the database + cursor = db.get_db().cursor() + + # use cursor to query the database for a list of reviews + cursor.execute(query) + + # fetch all the data from the cursor + # The cursor will return the data as a + # Python Dictionary + theData = cursor.fetchall() + + # Create a HTTP Response object and add results of the query to it + # after "jsonify"-ing it.
+ response = make_response(jsonify(theData)) + # set the proper HTTP Status code of 200 (meaning all good) + response.status_code = 200 + # send the response back to the client + return response + + + +# ------------------------------------------------------------ +# Get the top 5 most expensive products from the database +@reviews.route('/mostExpensive') +def get_most_pop_products(): + + query = ''' + SELECT product_code, + product_name, + list_price, + reorder_level + FROM products + ORDER BY list_price DESC + LIMIT 5 + ''' + + # Same process as handler above + cursor = db.get_db().cursor() + cursor.execute(query) + theData = cursor.fetchall() + + response = make_response(jsonify(theData)) + response.status_code = 200 + return response + +# ------------------------------------------------------------ +# Route to get the 10 most expensive items from the +# database. +@reviews.route('/tenMostExpensive', methods=['GET']) +def get_10_most_expensive_products(): + + query = ''' + SELECT product_code, + product_name, + list_price, + reorder_level + FROM products + ORDER BY list_price DESC + LIMIT 10 + ''' + + # Same process as above + cursor = db.get_db().cursor() + cursor.execute(query) + theData = cursor.fetchall() + + response = make_response(jsonify(theData)) + response.status_code = 200 + return response + + +# ------------------------------------------------------------ +# This is a POST route to add a new product. +# Remember, we are using POST routes to create new entries +# in the database. 
+@reviews.route('/product', methods=['POST']) +def add_new_product(): + + # In a POST request, the data arrives in the body of the + # request object + the_data = request.json + current_app.logger.info(the_data) + + # extracting the variables + name = the_data['product_name'] + description = the_data['product_description'] + price = the_data['product_price'] + category = the_data['product_category'] + + # Use a parameterized INSERT so the values are escaped by the + # driver instead of being interpolated into the SQL string + query = ''' + INSERT INTO products (product_name, + description, + category, + list_price) + VALUES (%s, %s, %s, %s) + ''' + current_app.logger.info(query) + + # executing and committing the insert statement + cursor = db.get_db().cursor() + cursor.execute(query, (name, description, category, price)) + db.get_db().commit() + + response = make_response("Successfully added product") + response.status_code = 200 + return response + +# ------------------------------------------------------------ +### Get all product categories +@reviews.route('/categories', methods = ['GET']) +def get_all_categories(): + query = ''' + SELECT DISTINCT category AS label, category as value + FROM products + WHERE category IS NOT NULL + ORDER BY category + ''' + + cursor = db.get_db().cursor() + cursor.execute(query) + theData = cursor.fetchall() + + response = make_response(jsonify(theData)) + response.status_code = 200 + return response + +# ------------------------------------------------------------ +# This is a stubbed route to update a product in the catalog +# The SQL query would be an UPDATE.
+@reviews.route('/product', methods = ['PUT']) +def update_product(): + product_info = request.json + current_app.logger.info(product_info) + + return "Success" \ No newline at end of file diff --git a/api/backend/simple/playlist.py b/api/backend/simple/playlist.py deleted file mode 100644 index a9e7a9ef03..0000000000 --- a/api/backend/simple/playlist.py +++ /dev/null @@ -1,129 +0,0 @@ -# ------------------------------------------------------------ -# Sample data for testing generated by ChatGPT -# ------------------------------------------------------------ - -sample_playlist_data = { - "playlist": { - "id": "37i9dQZF1DXcBWIGoYBM5M", - "name": "Chill Hits", - "description": "Relax and unwind with the latest chill hits.", - "owner": { - "id": "spotify_user_123", - "display_name": "Spotify User" - }, - "tracks": { - "items": [ - { - "track": { - "id": "3n3Ppam7vgaVa1iaRUc9Lp", - "name": "Lose Yourself", - "artists": [ - { - "id": "1dfeR4HaWDbWqFHLkxsg1d", - "name": "Eminem" - } - ], - "album": { - "id": "1ATL5GLyefJaxhQzSPVrLX", - "name": "8 Mile" - }, - "duration_ms": 326000, - "track_number": 1, - "disc_number": 1, - "preview_url": "https://p.scdn.co/mp3-preview/lose-yourself.mp3", - "uri": "spotify:track:3n3Ppam7vgaVa1iaRUc9Lp" - } - }, - { - "track": { - "id": "7ouMYWpwJ422jRcDASZB7P", - "name": "Blinding Lights", - "artists": [ - { - "id": "0fW8E0XdT6aG9aFh6jGpYo", - "name": "The Weeknd" - } - ], - "album": { - "id": "1ATL5GLyefJaxhQzSPVrLX", - "name": "After Hours" - }, - "duration_ms": 200040, - "track_number": 9, - "disc_number": 1, - "preview_url": "https://p.scdn.co/mp3-preview/blinding-lights.mp3", - "uri": "spotify:track:7ouMYWpwJ422jRcDASZB7P" - } - }, - { - "track": { - "id": "4uLU6hMCjMI75M1A2tKUQC", - "name": "Shape of You", - "artists": [ - { - "id": "6eUKZXaKkcviH0Ku9w2n3V", - "name": "Ed Sheeran" - } - ], - "album": { - "id": "3fMbdgg4jU18AjLCKBhRSm", - "name": "Divide" - }, - "duration_ms": 233713, - "track_number": 4, - "disc_number": 1, - 
"preview_url": "https://p.scdn.co/mp3-preview/shape-of-you.mp3", - "uri": "spotify:track:4uLU6hMCjMI75M1A2tKUQC" - } - }, - { - "track": { - "id": "0VjIjW4GlUZAMYd2vXMi3b", - "name": "Levitating", - "artists": [ - { - "id": "4tZwfgrHOc3mvqYlEYSvVi", - "name": "Dua Lipa" - } - ], - "album": { - "id": "7dGJo4pcD2V6oG8kP0tJRR", - "name": "Future Nostalgia" - }, - "duration_ms": 203693, - "track_number": 5, - "disc_number": 1, - "preview_url": "https://p.scdn.co/mp3-preview/levitating.mp3", - "uri": "spotify:track:0VjIjW4GlUZAMYd2vXMi3b" - } - }, - { - "track": { - "id": "6habFhsOp2NvshLv26DqMb", - "name": "Sunflower", - "artists": [ - { - "id": "1dfeR4HaWDbWqFHLkxsg1d", - "name": "Post Malone" - }, - { - "id": "0C8ZW7ezQVs4URX5aX7Kqx", - "name": "Swae Lee" - } - ], - "album": { - "id": "6k3hyp4efgfHP5GMVd3Agw", - "name": "Spider-Man: Into the Spider-Verse (Soundtrack)" - }, - "duration_ms": 158000, - "track_number": 3, - "disc_number": 1, - "preview_url": "https://p.scdn.co/mp3-preview/sunflower.mp3", - "uri": "spotify:track:6habFhsOp2NvshLv26DqMb" - } - } - ] - }, - "uri": "spotify:playlist:37i9dQZF1DXcBWIGoYBM5M" - } -} \ No newline at end of file diff --git a/api/backend/simple/simple_routes.py b/api/backend/simple/simple_routes.py deleted file mode 100644 index 8685fbac76..0000000000 --- a/api/backend/simple/simple_routes.py +++ /dev/null @@ -1,48 +0,0 @@ -from flask import Blueprint, request, jsonify, make_response, current_app, redirect, url_for -import json -from backend.db_connection import db -from backend.simple.playlist import sample_playlist_data - -# This blueprint handles some basic routes that you can use for testing -simple_routes = Blueprint('simple_routes', __name__) - - -# ------------------------------------------------------------ -# / is the most basic route -# Once the api container is started, in a browser, go to -# localhost:4000/playlist -@simple_routes.route('/') -def welcome(): - current_app.logger.info('GET / handler') - welcome_message = '
<h1>Welcome to the CS 3200 Project Template REST API</h1>' - response = make_response(welcome_message) - response.status_code = 200 - return response - -# ------------------------------------------------------------ -# /playlist returns the sample playlist data contained in playlist.py -# (imported above) -@simple_routes.route('/playlist') -def get_playlist_data(): - current_app.logger.info('GET /playlist handler') - response = make_response(jsonify(sample_playlist_data)) - response.status_code = 200 - return response - -# ------------------------------------------------------------ -@simple_routes.route('/niceMesage', methods = ['GET']) -def affirmation(): - message = ''' - <h1>Think about it...</h1> - <br/> - You only need to be 1% better today than you were yesterday! - ''' - response = make_response(message) - response.status_code = 200 - return response - -# ------------------------------------------------------------ -# Demonstrates how to redirect from one route to another. -@simple_routes.route('/message') -def mesage(): - return redirect(url_for(affirmation)) \ No newline at end of file
diff --git a/app/src/.streamlit/config.toml b/app/src/.streamlit/config.toml index bb28be97de..73191c466a 100644 --- a/app/src/.streamlit/config.toml +++ b/app/src/.streamlit/config.toml @@ -15,6 +15,5 @@ showSidebarNavigation = false [theme] # Setting some basic config options related to the theme of the app base="light" -primaryColor="#6550e6" -font="monospace" +primaryColor="#ea0000"
diff --git a/api/backend/ml_models/__init__.py b/app/src/CoopPlatform.db similarity index 100% rename from api/backend/ml_models/__init__.py rename to app/src/CoopPlatform.db
diff --git a/app/src/Home.py b/app/src/Home.py index ef0f7b19ad..818286fc68 100644 --- a/app/src/Home.py +++ b/app/src/Home.py @@ -1,11 +1,14 @@ ################################################## -# This is the main/entry-point file for the +# This is the main/entry-point file for the # sample application for your project ################################################## # Set up basic logging infrastructure import logging -logging.basicConfig(format='%(filename)s:%(lineno)s:%(levelname)s -- %(message)s', level=logging.INFO) + +logging.basicConfig( + format="%(filename)s:%(lineno)s:%(levelname)s -- %(message)s", level=logging.INFO +) logger = logging.getLogger(__name__) # import the main streamlit library as well @@ -15,15 +18,15 @@ # streamlit supports regular and wide layout (how the controls # are organized/displayed on the screen).
-st.set_page_config(layout = 'wide') +st.set_page_config(layout="wide") -# If a user is at this page, we assume they are not +# If a user is at this page, we assume they are not # authenticated. So we change the 'authenticated' value -# in the streamlit session_state to false. -st.session_state['authenticated'] = False +# in the streamlit session_state to false. +st.session_state["authenticated"] = False # Use the SideBarLinks function from src/modules/nav.py to control -# the links displayed on the left-side panel. +# the links displayed on the left-side panel. # IMPORTANT: ensure src/.streamlit/config.toml sets # showSidebarNavigation = false in the [client] section SideBarLinks(show_home=True) @@ -32,46 +35,55 @@ # The major content of this page # *************************************************** -# set the title of the page and provide a simple prompt. +# set the title of the page and provide a simple prompt. logger.info("Loading the Home page of the app") -st.title('CS 3200 Sample Semester Project App') -st.write('\n\n') -st.write('### HI! As which user would you like to log in?') +st.title("Welcome to COUPE") +st.write("\n\n") +st.write("### HI! As which user would you like to log in?") # For each of the user personas for which we are implementing -# functionality, we put a button on the screen that the user -# can click to MIMIC logging in as that mock user. +# functionality, we put a button on the screen that the user +# can click to MIMIC logging in as that mock user. -if st.button("Act as John, a Political Strategy Advisor", - type = 'primary', - use_container_width=True): - # when user clicks the button, they are now considered authenticated - st.session_state['authenticated'] = True - # we set the role of the current user - st.session_state['role'] = 'pol_strat_advisor' - # we add the first name of the user (so it can be displayed on - # subsequent pages). 
- st.session_state['first_name'] = 'John' - # finally, we ask streamlit to switch to another page, in this case, the - # landing page for this particular user type - logger.info("Logging in as Political Strategy Advisor Persona") - st.switch_page('pages/00_Pol_Strat_Home.py') +if st.button( + "Act as Sebastian, a Student Co-Op Searcher", + type="primary", + use_container_width=True, +): + st.session_state["authenticated"] = True + st.session_state["role"] = "Co-Op searcher" + st.session_state["first_name"] = "Sebastian" + st.session_state["id"] = "5" + logger.info("Logging in as Co-Op Searcher") + st.switch_page("pages/100_CoOp_Searcher_Home.py") -if st.button('Act as Mohammad, an USAID worker', - type = 'primary', - use_container_width=True): - st.session_state['authenticated'] = True - st.session_state['role'] = 'usaid_worker' - st.session_state['first_name'] = 'Mohammad' - st.switch_page('pages/10_USAID_Worker_Home.py') +if st.button( + "Act as Riley, a Student Co-Op Reviewer", type="primary", use_container_width=True +): + st.session_state["authenticated"] = True + st.session_state["role"] = "Co-Op reviewer" + st.session_state["id"] = "1" + st.session_state["first_name"] = "Riley" -if st.button('Act as System Administrator', - type = 'primary', - use_container_width=True): - st.session_state['authenticated'] = True - st.session_state['role'] = 'administrator' - st.session_state['first_name'] = 'SysAdmin' - st.switch_page('pages/20_Admin_Home.py') + logger.info("Logging in as Co-Op Reviewer") + st.switch_page("pages/200_CoOp_Reviewer_Home.py") +if st.button( + "Act as Alex Admin, System Administrator", type="primary", use_container_width=True +): + st.session_state["authenticated"] = True + st.session_state["role"] = "administrator" + st.session_state["first_name"] = "Alex" + logger.info("Logging in as Alex Admin, System Administrator") + st.switch_page("pages/300_System_Administrator_Home.py") +if st.button( + "Act as Annalise, an Analyst of Site 
Performance", + type="primary", + use_container_width=True, +): + st.session_state["authenticated"] = True + st.session_state["role"] = "analyst" + st.session_state["first_name"] = "Annalise" + st.switch_page("pages/400_Analyst_Home.py") diff --git a/app/src/modules/nav.py b/app/src/modules/nav.py index cb31d3bf67..8c5ace0a8c 100644 --- a/app/src/modules/nav.py +++ b/app/src/modules/nav.py @@ -47,13 +47,25 @@ def ClassificationNav(): "pages/13_Classification.py", label="Classification Demo", icon="🌺" ) +#### ------------------------ Co-Op Searcher Role ------------------------ +def SearcherPageNav(): + st.sidebar.page_link("pages/100_CoOp_Searcher_Home.py", label="Co-Op Searcher", icon="🔍") + + +#### ------------------------ Co Op Reviewer Role ------------------------ +def ReviewerPageNav(): + st.sidebar.page_link("pages/200_CoOp_Reviewer_Home.py", label="Co-Op Reviewer", icon="📝") + + #### ------------------------ System Admin Role ------------------------ def AdminPageNav(): - st.sidebar.page_link("pages/20_Admin_Home.py", label="System Admin", icon="🖥️") - st.sidebar.page_link( - "pages/21_ML_Model_Mgmt.py", label="ML Model Management", icon="🏢" - ) + st.sidebar.page_link("pages/300_System_Administrator_Home.py", label="System Admin", icon="🖥️") + + #### ------------------------ System Admin Role ------------------------ +def AnalyistPageNav(): + st.sidebar.page_link("pages/400_Analyst_Home.py", label="Site Analyst", icon="📊") + # --------------------------------Links Function ----------------------------------------------- @@ -84,15 +96,20 @@ def SideBarLinks(show_home=False): MapDemoNav() # If the user role is usaid worker, show the Api Testing page - if st.session_state["role"] == "usaid_worker": - PredictionNav() - ApiTestNav() - ClassificationNav() + if st.session_state["role"] == "Co-Op reviewer": + ReviewerPageNav() + + # If the user role is usaid worker, show the Api Testing page + if st.session_state["role"] == "Co-Op searcher": + SearcherPageNav() # 
If the user is an administrator, give them access to the administrator pages if st.session_state["role"] == "administrator": AdminPageNav() + if st.session_state["role"] == "analyst": + AnalyistPageNav() + # Always show the About page at the bottom of the list of links AboutPageNav() diff --git a/app/src/my_custom_database.db b/app/src/my_custom_database.db new file mode 100644 index 0000000000..e69de29bb2 diff --git a/app/src/pages/01_World_Bank_Viz.py b/app/src/pages/01_World_Bank_Viz.py deleted file mode 100644 index a34cbb1529..0000000000 --- a/app/src/pages/01_World_Bank_Viz.py +++ /dev/null @@ -1,41 +0,0 @@ -import logging -logger = logging.getLogger(__name__) -import pandas as pd -import streamlit as st -from streamlit_extras.app_logo import add_logo -import world_bank_data as wb -import matplotlib.pyplot as plt -import numpy as np -import plotly.express as px -from modules.nav import SideBarLinks - -# Call the SideBarLinks from the nav module in the modules directory -SideBarLinks() - -# set the header of the page -st.header('World Bank Data') - -# You can access the session state to make a more customized/personalized app experience -st.write(f"### Hi, {st.session_state['first_name']}.") - -# get the countries from the world bank data -with st.echo(code_location='above'): - countries:pd.DataFrame = wb.get_countries() - - st.dataframe(countries) - -# the with statment shows the code for this block above it -with st.echo(code_location='above'): - arr = np.random.normal(1, 1, size=100) - test_plot, ax = plt.subplots() - ax.hist(arr, bins=20) - - st.pyplot(test_plot) - - -with st.echo(code_location='above'): - slim_countries = countries[countries['incomeLevel'] != 'Aggregates'] - data_crosstab = pd.crosstab(slim_countries['region'], - slim_countries['incomeLevel'], - margins = False) - st.table(data_crosstab) diff --git a/app/src/pages/02_Map_Demo.py b/app/src/pages/02_Map_Demo.py deleted file mode 100644 index 5ca09a9633..0000000000 --- 
a/app/src/pages/02_Map_Demo.py +++ /dev/null @@ -1,104 +0,0 @@ -import logging -logger = logging.getLogger(__name__) -import streamlit as st -from streamlit_extras.app_logo import add_logo -import pandas as pd -import pydeck as pdk -from urllib.error import URLError -from modules.nav import SideBarLinks - -SideBarLinks() - -# add the logo -add_logo("assets/logo.png", height=400) - -# set up the page -st.markdown("# Mapping Demo") -st.sidebar.header("Mapping Demo") -st.write( - """This Mapping Demo is from the Streamlit Documentation. It shows how to use -[`st.pydeck_chart`](https://docs.streamlit.io/library/api-reference/charts/st.pydeck_chart) -to display geospatial data.""" -) - - -@st.cache_data -def from_data_file(filename): - url = ( - "http://raw.githubusercontent.com/streamlit/" - "example-data/master/hello/v1/%s" % filename - ) - return pd.read_json(url) - - -try: - ALL_LAYERS = { - "Bike Rentals": pdk.Layer( - "HexagonLayer", - data=from_data_file("bike_rental_stats.json"), - get_position=["lon", "lat"], - radius=200, - elevation_scale=4, - elevation_range=[0, 1000], - extruded=True, - ), - "Bart Stop Exits": pdk.Layer( - "ScatterplotLayer", - data=from_data_file("bart_stop_stats.json"), - get_position=["lon", "lat"], - get_color=[200, 30, 0, 160], - get_radius="[exits]", - radius_scale=0.05, - ), - "Bart Stop Names": pdk.Layer( - "TextLayer", - data=from_data_file("bart_stop_stats.json"), - get_position=["lon", "lat"], - get_text="name", - get_color=[0, 0, 0, 200], - get_size=15, - get_alignment_baseline="'bottom'", - ), - "Outbound Flow": pdk.Layer( - "ArcLayer", - data=from_data_file("bart_path_stats.json"), - get_source_position=["lon", "lat"], - get_target_position=["lon2", "lat2"], - get_source_color=[200, 30, 0, 160], - get_target_color=[200, 30, 0, 160], - auto_highlight=True, - width_scale=0.0001, - get_width="outbound", - width_min_pixels=3, - width_max_pixels=30, - ), - } - st.sidebar.markdown("### Map Layers") - selected_layers = [ - layer - 
for layer_name, layer in ALL_LAYERS.items() - if st.sidebar.checkbox(layer_name, True) - ] - if selected_layers: - st.pydeck_chart( - pdk.Deck( - map_style="mapbox://styles/mapbox/light-v9", - initial_view_state={ - "latitude": 37.76, - "longitude": -122.4, - "zoom": 11, - "pitch": 50, - }, - layers=selected_layers, - ) - ) - else: - st.error("Please choose at least one layer above.") -except URLError as e: - st.error( - """ - **This demo requires internet access.** - Connection error: %s - """ - % e.reason - ) diff --git a/app/src/pages/03_Simple_Chat_Bot.py b/app/src/pages/03_Simple_Chat_Bot.py deleted file mode 100644 index fa8db58e84..0000000000 --- a/app/src/pages/03_Simple_Chat_Bot.py +++ /dev/null @@ -1,66 +0,0 @@ -import logging -logger = logging.getLogger(__name__) -import streamlit as st -from streamlit_extras.app_logo import add_logo -import numpy as np -import random -import time -from modules.nav import SideBarLinks - -SideBarLinks() - -def response_generator(): - response = random.choice ( - [ - "Hello there! How can I assist you today?", - "Hi, human! Is there anything I can help you with?", - "Do you need help?", - ] - ) - for word in response.split(): - yield word + " " - time.sleep(0.05) -#----------------------------------------------------------------------- - -st.set_page_config (page_title="Sample Chat Bot", page_icon="🤖") -add_logo("assets/logo.png", height=400) - -st.title("Echo Bot 🤖") - -st.markdown(""" - Currently, this chat bot only returns a random message from the following list: - - Hello there! How can I assist you today? - - Hi, human! Is there anything I can help you with? - - Do you need help? 
- """ - ) - - -# Initialize chat history -if "messages" not in st.session_state: - st.session_state.messages = [] - -# Display chat message from history on app rerun -for message in st.session_state.messages: - with st.chat_message(message["role"]): - st.markdown(message["content"]) - -# React to user input -if prompt := st.chat_input("What is up?"): - # Display user message in chat message container - with st.chat_message("user"): - st.markdown(prompt) - - # Add user message to chat history - st.session_state.messages.append({"role": "user", "content": prompt}) - - response = f"Echo: {prompt}" - - # Display assistant response in chat message container - with st.chat_message("assistant"): - # st.markdown(response) - response = st.write_stream(response_generator()) - - # Add assistant response to chat history - st.session_state.messages.append({"role": "assistant", "content": response}) - diff --git a/app/src/pages/04_Prediction.py b/app/src/pages/04_Prediction.py deleted file mode 100644 index a5a322a2f4..0000000000 --- a/app/src/pages/04_Prediction.py +++ /dev/null @@ -1,38 +0,0 @@ -import logging -logger = logging.getLogger(__name__) - -import streamlit as st -from modules.nav import SideBarLinks -import requests - -st.set_page_config(layout = 'wide') - -# Display the appropriate sidebar links for the role of the logged in user -SideBarLinks() - -st.title('Prediction with Regression') - -# create a 2 column layout -col1, col2 = st.columns(2) - -# add one number input for variable 1 into column 1 -with col1: - var_01 = st.number_input('Variable 01:', - step=1) - -# add another number input for variable 2 into column 2 -with col2: - var_02 = st.number_input('Variable 02:', - step=1) - -logger.info(f'var_01 = {var_01}') -logger.info(f'var_02 = {var_02}') - -# add a button to use the values entered into the number field to send to the -# prediction function via the REST API -if st.button('Calculate Prediction', - type='primary', - use_container_width=True): - results 
= requests.get(f'http://api:4000/c/prediction/{var_01}/{var_02}').json() - st.dataframe(results) - \ No newline at end of file diff --git a/app/src/pages/1001_Company_Position_Reviews.py b/app/src/pages/1001_Company_Position_Reviews.py new file mode 100644 index 0000000000..6b33385e15 --- /dev/null +++ b/app/src/pages/1001_Company_Position_Reviews.py @@ -0,0 +1,124 @@ +import logging +import requests +import streamlit as st +from modules.nav import SideBarLinks + +# Set up logger +logger = logging.getLogger(__name__) + +# Call the SideBarLinks from the nav module in the modules directory +SideBarLinks() + +# Set the header of the page +st.header("Find Company and Position Reviews") + +# Access the session state for personalization +st.write(f"### Hi, {st.session_state.get('first_name', 'Guest')}!") + + +# Function to add a comment to a review +def add_comment(review_id, user_id, content): + try: + response = requests.post( + f"http://api:4000/r/addComment", + json={ + "reviewID": review_id, + "userID": user_id, + "content": content, + }, + ) + response.raise_for_status() + return response.json() + except requests.exceptions.RequestException as e: + logger.error(f"Failed to add comment to review {review_id}: {e}") + return None + + +# Fetch companies with reviews +try: + results = requests.get("http://api:4000/s/companiesWithReviews").json() + company_names = [item["CompanyName"] for item in results] +except Exception as e: + st.error("Failed to fetch company data. Please try again later.") + logger.error(f"Error fetching companies: {e}") + company_names = [] + +# Dropdown for selecting a company +if company_names: + selected_company = st.selectbox("Select a Company:", company_names) + + if selected_company: + # Fetch reviews for the selected company + try: + reviews = requests.get( + f"http://api:4000/s/reviewsForCompany/{selected_company}" + ).json() + except Exception as e: + st.error("Failed to fetch reviews. 
Please try again later.") + logger.error(f"Error fetching reviews: {e}") + reviews = [] + + # Display reviews + if reviews: + st.write("### Reviews for this Company:") + + for review in reviews: + # Expander for each review + with st.expander(f"Review by User {review['userID']}", expanded=True): + # Review details + st.write(f"**{review['heading']}**") + st.write(f"**Role ID:** {review['roleID']}") + st.write(f"**Published At:** {review['publishedAt']}") + st.write(f"**Content:** {review['content']}") + st.write( + f"**Views:** {review['views']} | **Likes:** {review['likes']}" + ) + + st.divider() # Visual separator between review and comments + + # Fetch and display comments + try: + comments = requests.get( + f"http://api:4000/r/commentsByReview/{review['reviewID']}" + ).json() + except Exception as e: + st.error("Failed to fetch comments. Please try again later.") + logger.error(f"Error fetching comments: {e}") + comments = [] + + if comments: + st.write("#### Comments:") + for comment in comments: + st.write( + f"**User {comment['userID']}:** {comment['content']}" + ) + st.write(f"*Likes:* {comment['likes']}") + st.divider() + else: + st.write("No comments yet.") + + # Add comment form + with st.form(key=f"add_comment_form_{review['reviewID']}"): + st.write("#### Add a Comment") + new_comment_content = st.text_area( + "Your Comment", key=f"comment_content_{review['reviewID']}" + ) + submit_comment_button = st.form_submit_button( + label="Post Comment" + ) + + if submit_comment_button and new_comment_content: + user_id = st.session_state.get( + "id" + ) # Assuming user ID is stored in session state + comment_response = add_comment( + review["reviewID"], user_id, new_comment_content + ) + if comment_response: + st.success("Comment added successfully!") + else: + st.error("Failed to post comment. 
Please try again.") + else: + st.write("No reviews available for this company.") +else: + st.write("No companies available to select.") diff --git a/app/src/pages/1002_Interview_Information.py b/app/src/pages/1002_Interview_Information.py new file mode 100644 index 0000000000..e0a780549a --- /dev/null +++ b/app/src/pages/1002_Interview_Information.py @@ -0,0 +1,33 @@ +import logging +logger = logging.getLogger(__name__) +import streamlit as st +from modules.nav import SideBarLinks +import requests + +# Call the SideBarLinks from the nav module in the modules directory +SideBarLinks() + +# set the header of the page +st.header('Find Company and Position Reviews') + +# You can access the session state to make a more customized/personalized app experience +st.write(f"### Hi, {st.session_state['first_name']}.") + +results = requests.get('http://api:4000/s/companiesWithReviews').json() + +# Extract company names into a list +company_names = [item["CompanyName"] for item in results] + +# Display a dropdown menu with company names +selected_company = st.selectbox("Select a Company:", company_names) + +# Fetch the reviews for the selected company by company name +reviews = requests.get(f'http://api:4000/s/interviewReportsForCompany/{selected_company}').json() + +# Display the reviews if available +if reviews: + st.write("### Interview Feedback for this Company:") + # Display reviews in a table format + st.dataframe(reviews, use_container_width=True) +else: + st.write("No interview feedback available for this company.") diff --git a/app/src/pages/1003_Skill_Matching.py b/app/src/pages/1003_Skill_Matching.py new file mode 100644 index 0000000000..9e7fdef49d --- /dev/null +++ b/app/src/pages/1003_Skill_Matching.py @@ -0,0 +1,56 @@ +import logging +import requests +import streamlit as st +from modules.nav import SideBarLinks +import pandas as pd + +# Call the SideBarLinks from the nav module in the modules directory +SideBarLinks() + +# Set the header of the page 
+st.header("Find positions with matching skills")
+
+# You can access the session state to make a more customized/personalized app experience
+st.write(f"### Hi, {st.session_state['first_name']}.")
+
+# Fetch the list of possible skills from the API
+skills = requests.get("http://api:4000/s/possibleSkills").json()
+
+
+# Extract and split skillsRequired into a set of unique skills
+skillsRequired = set()
+for item in skills:
+    # Split the skills by commas and strip any surrounding whitespace
+    item_skills = item["skillsRequired"].split(",")
+    # Add each skill to the set
+    skillsRequired.update(skill.strip() for skill in item_skills)
+
+
+# Display a dropdown menu with the unique skills, sorted for a stable order
+selected_skill = st.selectbox("Select a Skill:", sorted(skillsRequired))
+
+# If the user selects a skill
+if selected_skill:
+    try:
+        # Send the GET request to the API endpoint with the skill
+        response = requests.get(f"http://api:4000/s/rolesForSkill/{selected_skill}")
+
+        # Check if the response is successful (status code 200)
+        if response.status_code == 200:
+            data = response.json()
+
+            # If there is data, display it in a table
+            if data:
+                # Convert the data into a Pandas DataFrame for easy display
+                df = pd.DataFrame(
+                    data, columns=["roleName", "companyName", "skillsRequired"]
+                )
+                st.write(f"### Roles that require the skill: {selected_skill}")
+                st.dataframe(df)  # Display the DataFrame in Streamlit table format
+            else:
+                st.warning(f"No roles found that require the skill '{selected_skill}'.")
+        else:
+            st.error("Failed to fetch data from the server.
Please try again later.")
+
+    except requests.exceptions.RequestException as e:
+        st.error(f"Error fetching data: {e}")
diff --git a/app/src/pages/1004_Enter_Interview_Info.py b/app/src/pages/1004_Enter_Interview_Info.py
new file mode 100644
index 0000000000..815f1c008e
--- /dev/null
+++ b/app/src/pages/1004_Enter_Interview_Info.py
@@ -0,0 +1,105 @@
+import logging
+import pandas as pd
+import streamlit as st
+from streamlit_extras.app_logo import add_logo
+import requests
+import plotly.express as px
+from modules.nav import SideBarLinks
+
+# Initialize logger
+logger = logging.getLogger(__name__)
+
+# Call the SideBarLinks from the nav module in the modules directory
+SideBarLinks()
+
+# Set the header of the page
+st.header('Enter Interview Experiences')
+
+# Display personalized greeting
+st.write(f"### Hi, {st.session_state['first_name']}!")
+
+# Fetch the user's reviews
+try:
+    response = requests.get(f'http://api:4000/r/reviewsByUser/{st.session_state["id"]}')
+    response.raise_for_status()
+    reviews = response.json()
+except requests.exceptions.RequestException as e:
+    st.error(f"Failed to fetch reviews: {e}")
+    reviews = []
+
+# Function to fetch comments for a review
+def fetch_comments(review_id):
+    try:
+        response = requests.get(f"http://api:4000/r/commentsByReview/{review_id}")
+        response.raise_for_status()
+        return response.json()
+    except requests.exceptions.RequestException as e:
+        logger.error(f"Failed to fetch comments for review {review_id}: {e}")
+        return []
+
+# Fetch companies
+try:
+    companies_response = requests.get("http://api:4000/r/companies")
+    companies_response.raise_for_status()
+    companies = companies_response.json()
+except requests.exceptions.RequestException as e:
+    st.error(f"Error fetching companies: {e}")
+    companies = []
+
+# Dropdown to select a company
+if companies:
+    company_names = [company['company_name'] for company in companies]
+    selected_company_name = st.selectbox("Select a Company", company_names)
+
+    # Fetch the selected company ID
+ selected_company = next(company for company in companies if company['company_name'] == selected_company_name) + selected_company_id = selected_company['companyID'] + + # Fetch roles for the selected company + try: + roles_response = requests.get(f"http://api:4000/r/rolesByCompany/{selected_company_id}") + roles_response.raise_for_status() + roles = roles_response.json() + except requests.exceptions.RequestException as e: + st.error(f"Error fetching roles: {e}") + roles = [] + + # Dropdown to select a role + if roles: + role_names = [role['roleName'] for role in roles] + selected_role_name = st.selectbox("Select a Role", role_names) + + # Fetch the selected role ID + selected_role = next(role for role in roles if role['roleName'] == selected_role_name) + selected_role_id = selected_role['roleID'] + + # Review submission form + with st.form("new_review_form"): + st.subheader("Submit an Interview Experience") + + # Pre-fill the form with role info + st.write(f"Reviewing interview for: {selected_role_name} at {selected_company_name}") + + new_heading = st.text_input("Review Heading") + new_content = st.text_area("Content") + new_review_type = "InterviewReport" + + submit_button = st.form_submit_button("Submit Review") + + # Handle form submission + if submit_button: + review_data = { + "roleID": selected_role_id, + "heading": new_heading, + "content": new_content, + "reviewType": new_review_type, + "userID": st.session_state['id'] + } + + try: + submit_response = requests.post("http://api:4000/r/submitReview", json=review_data) + submit_response.raise_for_status() + st.success("Review submitted successfully!") + except requests.exceptions.RequestException as e: + st.error(f"Failed to submit review: {e}") + diff --git a/app/src/pages/100_CoOp_Searcher_Home.py b/app/src/pages/100_CoOp_Searcher_Home.py new file mode 100644 index 0000000000..c5006cd2a3 --- /dev/null +++ b/app/src/pages/100_CoOp_Searcher_Home.py @@ -0,0 +1,34 @@ +import logging +logger = 
logging.getLogger(__name__) +import streamlit as st +from modules.nav import SideBarLinks + +st.set_page_config(layout = 'wide') + +# Show appropriate sidebar links for the role of the currently logged in user +SideBarLinks() + +st.title(f"Welcome Co-op searcher, {st.session_state['first_name']}.") +st.write('') +st.write('') +st.write('### What would you like to do today?') + +if st.button('View Company and Position Reviews', + type='primary', + use_container_width=True): + st.switch_page('pages/1001_Company_Position_Reviews.py') + +if st.button('View Interview Information', + type='primary', + use_container_width=True): + st.switch_page('pages/1002_Interview_Information.py') + +if st.button('View Positions with Matching Skills', + type='primary', + use_container_width=True): + st.switch_page('pages/1003_Skill_Matching.py') + +if st.button('Enter My Interview Experiences', + type='primary', + use_container_width=True): + st.switch_page('pages/1004_Enter_Interview_Info.py') diff --git a/app/src/pages/10_USAID_Worker_Home.py b/app/src/pages/10_USAID_Worker_Home.py deleted file mode 100644 index d7b230384c..0000000000 --- a/app/src/pages/10_USAID_Worker_Home.py +++ /dev/null @@ -1,30 +0,0 @@ -import logging -logger = logging.getLogger(__name__) - -import streamlit as st -from modules.nav import SideBarLinks - -st.set_page_config(layout = 'wide') - -# Show appropriate sidebar links for the role of the currently logged in user -SideBarLinks() - -st.title(f"Welcome USAID Worker, {st.session_state['first_name']}.") -st.write('') -st.write('') -st.write('### What would you like to do today?') - -if st.button('Predict Value Based on Regression Model', - type='primary', - use_container_width=True): - st.switch_page('pages/11_Prediction.py') - -if st.button('View the Simple API Demo', - type='primary', - use_container_width=True): - st.switch_page('pages/12_API_Test.py') - -if st.button("View Classification Demo", - type='primary', - use_container_width=True): - 
st.switch_page('pages/13_Classification.py') \ No newline at end of file diff --git a/app/src/pages/11_Prediction.py b/app/src/pages/11_Prediction.py deleted file mode 100644 index a5a322a2f4..0000000000 --- a/app/src/pages/11_Prediction.py +++ /dev/null @@ -1,38 +0,0 @@ -import logging -logger = logging.getLogger(__name__) - -import streamlit as st -from modules.nav import SideBarLinks -import requests - -st.set_page_config(layout = 'wide') - -# Display the appropriate sidebar links for the role of the logged in user -SideBarLinks() - -st.title('Prediction with Regression') - -# create a 2 column layout -col1, col2 = st.columns(2) - -# add one number input for variable 1 into column 1 -with col1: - var_01 = st.number_input('Variable 01:', - step=1) - -# add another number input for variable 2 into column 2 -with col2: - var_02 = st.number_input('Variable 02:', - step=1) - -logger.info(f'var_01 = {var_01}') -logger.info(f'var_02 = {var_02}') - -# add a button to use the values entered into the number field to send to the -# prediction function via the REST API -if st.button('Calculate Prediction', - type='primary', - use_container_width=True): - results = requests.get(f'http://api:4000/c/prediction/{var_01}/{var_02}').json() - st.dataframe(results) - \ No newline at end of file diff --git a/app/src/pages/12_API_Test.py b/app/src/pages/12_API_Test.py deleted file mode 100644 index 74883c5a85..0000000000 --- a/app/src/pages/12_API_Test.py +++ /dev/null @@ -1,25 +0,0 @@ -import logging -logger = logging.getLogger(__name__) -import streamlit as st -import requests -from streamlit_extras.app_logo import add_logo -from modules.nav import SideBarLinks - -SideBarLinks() - -st.write("# Accessing a REST API from Within Streamlit") - -""" -Simply retrieving data from a REST api running in a separate Docker Container. - -If the container isn't running, this will be very unhappy. But the Streamlit app -should not totally die. 
-""" -data = {} -try: - data = requests.get('http://api:4000/data').json() -except: - st.write("**Important**: Could not connect to sample api, so using dummy data.") - data = {"a":{"b": "123", "c": "hello"}, "z": {"b": "456", "c": "goodbye"}} - -st.dataframe(data) diff --git a/app/src/pages/13_Classification.py b/app/src/pages/13_Classification.py deleted file mode 100644 index be2535c49d..0000000000 --- a/app/src/pages/13_Classification.py +++ /dev/null @@ -1,57 +0,0 @@ -import logging -logger = logging.getLogger(__name__) -import streamlit as st -import pandas as pd -from sklearn import datasets -from sklearn.ensemble import RandomForestClassifier -from streamlit_extras.app_logo import add_logo -from modules.nav import SideBarLinks - -SideBarLinks() - -st.write(""" -# Simple Iris Flower Prediction App - -This example is borrowed from [The Data Professor](https://github.com/dataprofessor/streamlit_freecodecamp/tree/main/app_7_classification_iris) - -This app predicts the **Iris flower** type! 
-""") - -st.sidebar.header('User Input Parameters') - -def user_input_features(): - sepal_length = st.sidebar.slider('Sepal length', 4.3, 7.9, 5.4) - sepal_width = st.sidebar.slider('Sepal width', 2.0, 4.4, 3.4) - petal_length = st.sidebar.slider('Petal length', 1.0, 6.9, 1.3) - petal_width = st.sidebar.slider('Petal width', 0.1, 2.5, 0.2) - data = {'sepal_length': sepal_length, - 'sepal_width': sepal_width, - 'petal_length': petal_length, - 'petal_width': petal_width} - features = pd.DataFrame(data, index=[0]) - return features - -df = user_input_features() - -st.subheader('User Input parameters') -st.write(df) - -iris = datasets.load_iris() -X = iris.data -Y = iris.target - -clf = RandomForestClassifier() -clf.fit(X, Y) - -prediction = clf.predict(df) -prediction_proba = clf.predict_proba(df) - -st.subheader('Class labels and their corresponding index number') -st.write(iris.target_names) - -st.subheader('Prediction') -st.write(iris.target_names[prediction]) -#st.write(prediction) - -st.subheader('Prediction Probability') -st.write(prediction_proba) \ No newline at end of file diff --git a/app/src/pages/2001_Review_Form.py b/app/src/pages/2001_Review_Form.py new file mode 100644 index 0000000000..ab422d5457 --- /dev/null +++ b/app/src/pages/2001_Review_Form.py @@ -0,0 +1,105 @@ +import logging +import pandas as pd +import streamlit as st +from streamlit_extras.app_logo import add_logo +import requests +import plotly.express as px +from modules.nav import SideBarLinks + +# Initialize logger +logger = logging.getLogger(__name__) + +# Call the SideBarLinks from the nav module in the modules directory +SideBarLinks() + +# Set the header of the page +st.header('My Reviews') + +# Display personalized greeting +st.write(f"### Hi, {st.session_state['first_name']}!") + +# Fetch the user's reviews +try: + response = requests.get(f'http://api:4000/r/reviewsByUser/{st.session_state["id"]}') + response.raise_for_status() + reviews = response.json() +except 
requests.exceptions.RequestException as e: + st.error(f"Failed to fetch reviews: {e}") + reviews = [] + +# Function to fetch comments for a review +def fetch_comments(review_id): + try: + response = requests.get(f"http://api:4000/r/commentsByReview/{review_id}") + response.raise_for_status() + return response.json() + except requests.exceptions.RequestException as e: + logger.error(f"Failed to fetch comments for review {review_id}: {e}") + return [] + +# Fetch companies +try: + companies_response = requests.get("http://api:4000/r/companies") + companies_response.raise_for_status() + companies = companies_response.json() +except requests.exceptions.RequestException as e: + st.error(f"Error fetching companies: {e}") + companies = [] + +# Dropdown to select a company +if companies: + company_names = [company['company_name'] for company in companies] + selected_company_name = st.selectbox("Select a Company", company_names) + + # Fetch the selected company ID + selected_company = next(company for company in companies if company['company_name'] == selected_company_name) + selected_company_id = selected_company['companyID'] + + # Fetch roles for the selected company + try: + roles_response = requests.get(f"http://api:4000/r/rolesByCompany/{selected_company_id}") + roles_response.raise_for_status() + roles = roles_response.json() + except requests.exceptions.RequestException as e: + st.error(f"Error fetching roles: {e}") + roles = [] + + # Dropdown to select a role + if roles: + role_names = [role['roleName'] for role in roles] + selected_role_name = st.selectbox("Select a Role", role_names) + + # Fetch the selected role ID + selected_role = next(role for role in roles if role['roleName'] == selected_role_name) + selected_role_id = selected_role['roleID'] + + # Review submission form + with st.form("new_review_form"): + st.subheader("Submit a New Review") + + # Pre-fill the form with role info + st.write(f"Reviewing role: {selected_role_name} at {selected_company_name}") + 
+ new_review_type = st.selectbox("Review Type", ["Experience", "InterviewReport", "Feedback", "Other"]) + new_heading = st.text_input("Review Heading") + new_content = st.text_area("Content") + + submit_button = st.form_submit_button("Submit Review") + + # Handle form submission + if submit_button: + review_data = { + "roleID": selected_role_id, + "heading": new_heading, + "content": new_content, + "reviewType": new_review_type, + "userID": st.session_state['id'] + } + + try: + submit_response = requests.post("http://api:4000/r/submitReview", json=review_data) + submit_response.raise_for_status() + st.success("Review submitted successfully!") + except requests.exceptions.RequestException as e: + st.error(f"Failed to submit review: {e}") + diff --git a/app/src/pages/2002_My_Reviews.py b/app/src/pages/2002_My_Reviews.py new file mode 100644 index 0000000000..b1de27f46a --- /dev/null +++ b/app/src/pages/2002_My_Reviews.py @@ -0,0 +1,182 @@ +import logging +import pandas as pd +import streamlit as st +from streamlit_extras.app_logo import add_logo +import requests +import plotly.express as px +from modules.nav import SideBarLinks + +# Initialize logger +logger = logging.getLogger(__name__) + +# Call the SideBarLinks from the nav module in the modules directory +SideBarLinks() + +# Set the header of the page +st.header('My Reviews') + +# Fetch the user's reviews +try: + response = requests.get(f'http://api:4000/r/reviewsByUser/{st.session_state["id"]}') + response.raise_for_status() + reviews = response.json() +except requests.exceptions.RequestException as e: + st.error(f"Failed to fetch reviews: {e}") + reviews = [] + +# Function to fetch role and company details for each review +def fetch_role_and_company(role_id): + try: + response = requests.get(f"http://api:4000/r/roleDetails/{role_id}") + response.raise_for_status() + role_data = response.json() + return role_data.get('roleName', 'N/A'), role_data.get('companyName', 'N/A') + except 
requests.exceptions.RequestException as e: + logger.error(f"Failed to fetch role and company for role ID {role_id}: {e}") + return 'N/A', 'N/A' + +# Function to fetch comments for a review +def fetch_comments(review_id): + try: + response = requests.get(f"http://api:4000/r/commentsByReview/{review_id}") + response.raise_for_status() + return response.json() + except requests.exceptions.RequestException as e: + logger.error(f"Failed to fetch comments for review {review_id}: {e}") + return [] + +# Function to add a comment to a review +def add_comment(review_id, user_id, content): + try: + response = requests.post( + f"http://api:4000/r/addComment", + json={ + "reviewID": review_id, + "userID": user_id, + "content": content, + }, + ) + response.raise_for_status() + return response.json() # Assuming the API returns the created comment or success message + except requests.exceptions.RequestException as e: + logger.error(f"Failed to add comment to review {review_id}: {e}") + return None + +# If there are reviews, display them +# Display the edit form for the currently selected review +if "editing_review_id" not in st.session_state: + st.session_state["editing_review_id"] = None + +if reviews: + for review in reviews: + with st.container(): + # Fetch role and company details + role_name, company_name = fetch_role_and_company(review['roleID']) + + # Display review details + st.markdown(f"### {review['reviewType']}") # Review type in bold above the title + st.markdown(f"**Role:** {role_name} at **{company_name}**") + st.markdown(f"#### {review['heading']}") + st.markdown(f"{review['content']}") + st.markdown(f"**Views:** {review['views']} | **Likes:** {review['likes']}") + + + st.markdown(f"**Published At:** {review['publishedAt']}") + # Fetch and display comments + comments = fetch_comments(review['reviewID']) + st.markdown("#### Comments:") + if comments: + for comment in comments: + st.markdown(f"- **User {comment['userID']}**: {comment['content']} (Likes: 
{comment['likes']})") + else: + st.markdown("*No comments yet.*") + + + # Add a form to post a new comment + with st.form(key=f"add_comment_form_{review['reviewID']}"): + st.subheader("Add a Comment") + new_comment_content = st.text_area("Your Comment", key=f"comment_content_{review['reviewID']}") + submit_comment_button = st.form_submit_button(label="Post Comment") + + if submit_comment_button and new_comment_content: + user_id = st.session_state.get("id") # Assuming user ID is stored in session state + comment_response = add_comment(review['reviewID'], user_id, new_comment_content) + if comment_response: + st.success("Comment added successfully!") + # Optionally refresh comments + comments.append({ + "userID": user_id, + "content": new_comment_content, + "likes": 0, + }) + else: + st.error("Failed to post comment. Please try again.") + + + # Display the edit button + edit_button = st.button( + f"Edit Review {review['reviewID']}", + key=f"edit_button_{review['reviewID']}" + ) + + if edit_button: + # Set the review ID to session state for editing + st.session_state["editing_review_id"] = review["reviewID"] + + # If this review is being edited, display the edit form + if st.session_state["editing_review_id"] == review["reviewID"]: + with st.form(key=f'edit_review_form_{review["reviewID"]}'): + st.subheader("Edit this Review") + + # Pre-fill the form with the current review details + new_heading = st.text_input("Heading", value=review["heading"]) + new_content = st.text_area("Content", value=review["content"]) + + submit_button = st.form_submit_button(label="Update Review") + + # Handle form submission + if submit_button: + update_review_data = { + "reviewID": review["reviewID"], + "heading": new_heading, + "content": new_content, + } + + try: + update_response = requests.put( + f'http://api:4000/r/updateReview', + json=update_review_data + ) + update_response.raise_for_status() + st.success("Review updated successfully!") + # Reset the editing state + 
st.session_state["editing_review_id"] = None
+                except requests.exceptions.RequestException as e:
+                    st.error(f"Failed to update review: {e}")
+
+        # Add a cancel button to stop editing
+        if st.button(f"Done Editing {review['reviewID']}", key=f"cancel_button_{review['reviewID']}"):
+            st.session_state["editing_review_id"] = None
+
+        # Add a Delete button
+        delete_button = st.button(f"Delete Review {review['reviewID']}", key=f"delete_button_{review['reviewID']}")
+
+        if delete_button:
+            # NOTE: placeholder only -- no real confirmation dialog yet, so deletion proceeds immediately
+            confirm_delete = "Delete"
+
+            if confirm_delete == "Delete":
+                try:
+                    delete_response = requests.delete(
+                        f'http://api:4000/r/deleteReview/{review["reviewID"]}'
+                    )
+                    delete_response.raise_for_status()
+                    st.success("Review deleted successfully!")
+                    # Optionally, refresh the page or remove the deleted review from the list
+                    reviews = [rev for rev in reviews if rev["reviewID"] != review["reviewID"]]
+                except requests.exceptions.RequestException as e:
+                    st.error(f"Failed to delete review: {e}")
+
+        st.markdown("---")
+else:
+    st.info("No reviews found.
Start by adding your first review!") diff --git a/app/src/pages/200_CoOp_Reviewer_Home.py b/app/src/pages/200_CoOp_Reviewer_Home.py new file mode 100644 index 0000000000..da866673a7 --- /dev/null +++ b/app/src/pages/200_CoOp_Reviewer_Home.py @@ -0,0 +1,31 @@ +import logging + +logger = logging.getLogger(__name__) +import streamlit as st +from modules.nav import SideBarLinks + +st.set_page_config(layout="wide") + +# Show appropriate sidebar links for the role of the currently logged in user +SideBarLinks() + +st.title(f"Welcome Co-op reviewer, {st.session_state['first_name']}.") +st.write("") +st.write("") +st.write("### What would you like to do today?") + +if st.button("Write a Review", type="primary", use_container_width=True): + st.switch_page("pages/2001_Review_Form.py") + +if st.button("View My Reviews", type="primary", use_container_width=True): + st.switch_page("pages/2002_My_Reviews.py") + +# if st.button('View Positions with Matching Skills', +# type='primary', +# use_container_width=True): +# st.switch_page('pages/1003_Skill_Matching.py') + +if st.button( + "Enter My Interview Experiences", type="primary", use_container_width=True +): + st.switch_page("pages/1004_Enter_Interview_Info.py") diff --git a/app/src/pages/20_Admin_Home.py b/app/src/pages/20_Admin_Home.py deleted file mode 100644 index 0dbd0f36b4..0000000000 --- a/app/src/pages/20_Admin_Home.py +++ /dev/null @@ -1,17 +0,0 @@ -import logging -logger = logging.getLogger(__name__) - -import streamlit as st -from modules.nav import SideBarLinks -import requests - -st.set_page_config(layout = 'wide') - -SideBarLinks() - -st.title('System Admin Home Page') - -if st.button('Update ML Models', - type='primary', - use_container_width=True): - st.switch_page('pages/21_ML_Model_Mgmt.py') \ No newline at end of file diff --git a/app/src/pages/21_ML_Model_Mgmt.py b/app/src/pages/21_ML_Model_Mgmt.py deleted file mode 100644 index 148978c24b..0000000000 --- a/app/src/pages/21_ML_Model_Mgmt.py +++ /dev/null @@ 
-1,28 +0,0 @@
-import logging
-logger = logging.getLogger(__name__)
-import streamlit as st
-from modules.nav import SideBarLinks
-import requests
-
-st.set_page_config(layout = 'wide')
-
-SideBarLinks()
-
-st.title('App Administration Page')
-
-st.write('\n\n')
-st.write('## Model 1 Maintenance')
-
-st.button("Train Model 01",
-          type = 'primary',
-          use_container_width=True)
-
-st.button('Test Model 01',
-          type = 'primary',
-          use_container_width=True)
-
-if st.button('Model 1 - get predicted value for 10, 25',
-             type = 'primary',
-             use_container_width=True):
-    results = requests.get('http://api:4000/c/prediction/10/25').json()
-    st.dataframe(results)
diff --git a/app/src/pages/3001_Admin_Dashboard.py b/app/src/pages/3001_Admin_Dashboard.py
new file mode 100644
index 0000000000..fc1ef7b775
--- /dev/null
+++ b/app/src/pages/3001_Admin_Dashboard.py
@@ -0,0 +1,84 @@
+import logging
+logger = logging.getLogger(__name__)
+import requests
+import pandas as pd
+import streamlit as st
+from streamlit_extras.app_logo import add_logo
+import world_bank_data as wb
+import matplotlib.pyplot as plt
+import numpy as np
+import plotly.express as px
+from modules.nav import SideBarLinks
+
+# Call the SideBarLinks from the nav module in the modules directory
+SideBarLinks()
+
+# set the header of the page
+st.header('View Admin Dashboard')
+
+# You can access the session state to make a more customized/personalized app experience
+st.write(f"### Hi, {st.session_state['first_name']}.")
+
+# get flagged reviews
+try:
+    flaggedreviews = requests.get("http://api:4000/r/flagged").json()
+except requests.exceptions.RequestException:
+    st.write("Could not connect to the database to fetch flagged reviews.")
+
+if "flaggedreviews" not in st.session_state:
+    st.session_state.flaggedreviews = pd.DataFrame(flaggedreviews)
+
+# get reviews
+try:
+    reviews = requests.get("http://api:4000/r/reviews").json()
+except requests.exceptions.RequestException:
+    st.write("Could not connect to the database to fetch reviews.")
+
+if "reviews" not in st.session_state:
+    
st.session_state.reviews = pd.DataFrame(reviews)
+
+
+# get comments
+try:
+    comments = requests.get("http://api:4000/r/comments").json()
+except requests.exceptions.RequestException:
+    st.write("Could not connect to the database to fetch comments.")
+
+if "comments" not in st.session_state:
+    st.session_state.comments = pd.DataFrame(comments)
+
+
+# Sample data for demonstration
+user_activity_data = {
+    "Metric": ["Total Reviews", "Flagged Reviews", "Flagged Comments"],
+    "Count": [len(st.session_state.reviews), len(st.session_state.flaggedreviews), len(st.session_state.comments)],
+}
+
+
+
+# Title and description
+st.title("User Activity Dashboard")
+st.write("An interface to manage user activity metrics, flagged reviews, and flagged comments.")
+
+# Display user activity metrics
+st.subheader("User Activity Metrics")
+metrics_df = pd.DataFrame(user_activity_data)
+st.table(metrics_df)
+
+
+# Filterable list of flagged comments
+st.subheader("Flagged Comments List")
+
+# Add filters
+user_filter = st.selectbox("Filter by User ID", sorted(st.session_state.comments["userID"].unique()))
+user_df = st.session_state.comments[st.session_state.comments["userID"] == user_filter]
+st.dataframe(user_df)
+review_filter = st.selectbox("Filter by Review ID", sorted(st.session_state.comments["reviewID"].unique()))
+review_df = st.session_state.comments[st.session_state.comments["reviewID"] == review_filter]
+st.dataframe(review_df)
+
+
+
+
+
+
diff --git a/app/src/pages/3002_Flagged_Posts.py b/app/src/pages/3002_Flagged_Posts.py
new file mode 100644
index 0000000000..2084425544
--- /dev/null
+++ b/app/src/pages/3002_Flagged_Posts.py
@@ -0,0 +1,119 @@
+import logging
+logger = logging.getLogger(__name__)
+import pandas as pd
+import streamlit as st
+from streamlit_extras.app_logo import add_logo
+import requests
+import matplotlib.pyplot as plt
+from datetime import datetime
+import numpy as np
+import plotly.express as px
+from modules.nav import SideBarLinks
+
+# Call the SideBarLinks from the nav module in the
modules directory
+SideBarLinks()
+
+# set the header of the page
+st.header('View and Moderate Flagged Posts')
+
+# You can access the session state to make a more customized/personalized app experience
+st.write(f"### Hi, {st.session_state['first_name']}.")
+
+# get flagged posts
+try:
+    reviews = requests.get("http://api:4000/r/flagged").json()
+except requests.exceptions.RequestException:
+    st.write("Could not connect to the database to fetch flagged reviews.")
+
+if "reviews" not in st.session_state:
+    st.session_state.reviews = pd.DataFrame(reviews)
+
+if "History" not in st.session_state:
+    st.session_state.History = []
+
+if "timestamp" not in st.session_state:
+    st.session_state.timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+
+
+# Streamlit app
+st.title("Flagged Posts Review Interface")
+
+# Display flagged posts in a table
+st.subheader("Flagged Posts")
+selected_post_id = st.selectbox(
+    "Select a review to moderate:",
+    st.session_state.reviews["reviewID"],
+    format_func=lambda x: f"Review {x}: {st.session_state.reviews.loc[st.session_state.reviews['reviewID'] == x, 'heading'].values[0]}"
+)
+
+
+# Show details of the selected post
+selected_post = st.session_state.reviews[st.session_state.reviews["reviewID"] == selected_post_id].iloc[0]
+st.write(f"**Review ID:** {selected_post['reviewID']}")
+st.write(f"**Content:** {selected_post['content']}")
+st.write(f"**Review Type:** {selected_post['reviewType']}")
+
+# Action options
+st.subheader("Actions")
+action = st.radio("Select an action to perform:", ["None", "Approve", "Edit", "Remove"], index=0)
+# Execute selected action
+
+if action == "Approve":
+    st.session_state.timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+    data = {
+        "reviewID": selected_post_id,
+        "isFlagged": False
+    }
+
+    response = requests.put("http://api:4000/r/approveflagged", json=data)
+
+    # reflect the response
+    if response.status_code == 200:
+        print("Success:", response.json())
+    else:
+        print("Error:", response.status_code, response.json())
+    
st.success(f"Review {selected_post_id} approved.") + + + +elif action == "Edit": + new_content = st.text_area("Edit Content", selected_post['content']) + if st.button("Save Changes"): + st.session_state.timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S") + st.session_state.reviews.loc[st.session_state.reviews["reviewID"] == selected_post_id, "Content"] = new_content + data = { + "reviewID": selected_post_id, + "content": new_content + } + + response = requests.put("http://api:4000/r/editflagged", json=data) + + # reflect the response + if response.status_code == 200: + print("Success:", response.json()) + else: + print("Error:", response.status_code, response.json()) + st.success(f"Post {selected_post_id} updated.") + +elif action == "Remove": + st.session_state.timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S") + data = { + "reviewID": selected_post_id, + } + response = requests.delete("http://api:4000/r/removeflagged", json=data) + if response.status_code == 200: + print("Success:", response.json()) + else: + try: + error_message = response.json() + except ValueError: + error_message = "No error details provided" + print("Error:", response.status_code, error_message) + st.warning(f"Review {selected_post_id} removed.") + +st.session_state.History.append(f"- {st.session_state.timestamp} - Review {selected_post_id} - {action}") +# Display action history +st.subheader("Action History") +for entry in st.session_state.History: + st.write(entry) + diff --git a/app/src/pages/3003_Company_Profiles.py b/app/src/pages/3003_Company_Profiles.py new file mode 100644 index 0000000000..1c2aabbef7 --- /dev/null +++ b/app/src/pages/3003_Company_Profiles.py @@ -0,0 +1,173 @@ +import logging +logger = logging.getLogger(__name__) +import datetime as dt +import pandas as pd +import requests +import streamlit as st +from streamlit_extras.app_logo import add_logo +import matplotlib.pyplot as plt +import numpy as np +import plotly.express as px +from modules.nav import 
SideBarLinks
+
+# Call the SideBarLinks from the nav module in the modules directory
+SideBarLinks()
+
+# set the header of the page
+st.header('Edit and Update Company Profiles')
+
+# You can access the session state to make a more customized/personalized app experience
+st.write(f"### Hi, {st.session_state['first_name']}.")
+
+
+# get the companies and roles
+try:
+    companies = requests.get("http://api:4000/c/companies").json()
+except requests.exceptions.RequestException:
+    st.write("Could not connect to the database to fetch companies and roles.")
+
+# Save DataFrame in session state
+if "companies_df" not in st.session_state:
+    st.session_state.companies_df = pd.DataFrame(companies)
+
+# Title
+st.title("Company and Role Manager")
+
+# Filter settings
+st.subheader("Filter and Highlight Options")
+filter_by_time = st.checkbox("Filter by Last Updated Time", value=False)
+hide_location = st.checkbox("Hide Location", value=False)
+highlight_missing = st.checkbox("Highlight Missing Data", value=True)
+
+
+# Time filter input
+if filter_by_time:
+    time_threshold = st.date_input(
+        "Show records updated after:", dt.date(2024, 1, 1)
+    )
+    # Convert to datetime for comparison
+    time_threshold = dt.datetime.combine(time_threshold, dt.datetime.min.time())
+
+
+company_df = st.session_state.companies_df.copy()
+
+
+company_df = company_df.rename(columns={'companyID':'Company ID','I.name': 'Industry', 'R.description':'Role Description', 'address':'Address', 'city':'City', 'country': 'Country', 'description':'Company Description', 'name': 'Company Name', 'roleName': 'Role Name', 'skillsRequired':'Skills Required', 'state_province':'State Province','updatedAT': 'Last Updated'})
+display_columns = ["Company ID", "Company Name", "Last Updated", "Industry", "Company Description", "Role Name", "Skills Required", "Role Description", "Address", "City", "State Province", "Country", 'Industry ID', 'Role ID','Location ID']
+company_df = company_df[display_columns]
+company_df['Last Updated'] = pd.to_datetime(company_df['Last Updated'])
+ + +# Apply filters and highlights +if filter_by_time: + company_df = company_df[company_df["Last Updated"] > time_threshold] + +if hide_location: + company_df = company_df.drop(columns = ["Address", "City", "State Province", "Country"]) +#st.dataframe(company_df) + +if highlight_missing: + def highlight_null_rows(df): + def highlight_row(row): + # Check for NaN or empty strings in the row + if row.isnull().any() or row.eq('').any(): + return ['background-color: yellow'] * len(row) + else: + return [''] * len(row) + + # Apply the highlighting function row-wise + return df.style.apply(highlight_row, axis=1) + + styled_df = highlight_null_rows(company_df) + st.dataframe(styled_df) +else: + st.dataframe(company_df) + + +st.subheader("Edit Company Profiles") + + +company_id = st.selectbox("Select Company ID", sorted(company_df["Company ID"].unique())) +industry_id = st.selectbox("Select Industry ID", sorted(company_df["Industry ID"].unique())) +role_id = st.selectbox("Select Role ID", sorted(company_df["Role ID"].unique())) +location_id = st.selectbox("Select Location ID", sorted(company_df["Location ID"].unique())) + + + +if "flg" not in st.session_state: + st.session_state.flg = False +if "ssh" not in st.session_state: + st.session_state.ssh = False +if "company_df" not in st.session_state: + st.session_state.company_df = company_df + + +if st.button("Confirm") and not st.session_state.flg: + selected_company = company_df[company_df["Company ID"] == company_id] + filtered_company = selected_company[ + (selected_company["Industry ID"] == industry_id) & + (selected_company["Role ID"] == role_id) & + (selected_company["Location ID"] == location_id) + ] + if filtered_company.empty: + st.warning("No company meets your requirements. 
Please select again.") + else: + st.session_state.final_company = filtered_company + st.dataframe(filtered_company) + st.session_state.flg = True + + +if st.session_state.flg: + with st.form(key="edit_form"): + company_name = st.text_input("Company Name", st.session_state.final_company["Company Name"].iloc[0], disabled=True) + company_discription = st.text_input("Company Description", st.session_state.final_company["Company Description"].iloc[0]) + skill = st.text_input("Skills Required", st.session_state.final_company["Skills Required"].iloc[0]) + role_description = st.text_input("Role Description", st.session_state.final_company["Role Description"].iloc[0]) + + submitted = st.form_submit_button(label="Update Details") + if submitted: + condition = ( + (st.session_state.company_df["Company ID"] == company_id) & + (st.session_state.company_df["Industry ID"] == industry_id) & + (st.session_state.company_df["Role ID"] == role_id) & + (st.session_state.company_df["Location ID"] == location_id) + ) + + st.session_state.company_df.loc[(st.session_state.company_df["Company ID"] == company_id), "Company Description"] = company_discription + st.session_state.company_df.loc[(st.session_state.company_df["Company ID"] == company_id) & (st.session_state.company_df["Role ID"] == role_id), "Skills Required"] = skill + st.session_state.company_df.loc[(st.session_state.company_df["Company ID"] == company_id) & (st.session_state.company_df["Role ID"] == role_id), "Role Description"] = role_description + + + data = { + "companyID": company_id, + "RoleID": role_id, + "Company Description": company_discription, + "Skills Required": skill, + "Role Description": role_description + } + data = { + key: (int(value) if isinstance(value, np.int64) else value) + for key, value in data.items() + } + + response_company = requests.put("http://api:4000/c/companies/companies", json=data) + if response_company.status_code == 200: + print("Success:", response_company.json()) + else: + 
print("Error:", response_company.status_code, response_company.json())
+
+            response_role = requests.put("http://api:4000/c/companies/roles", json=data)
+            if response_role.status_code == 200:
+                print("Success:", response_role.json())
+            else:
+                print("Error:", response_role.status_code, response_role.json())
+
+            st.session_state.ssh = True
+
+
+
+if st.session_state.ssh:
+    st.success("Company details updated successfully!")
+    st.subheader("Updated Company Data")
+    st.dataframe(st.session_state.company_df)
+
diff --git a/app/src/pages/3004_User_Feedback.py b/app/src/pages/3004_User_Feedback.py
new file mode 100644
index 0000000000..ff653ff548
--- /dev/null
+++ b/app/src/pages/3004_User_Feedback.py
@@ -0,0 +1,124 @@
+import logging
+logger = logging.getLogger(__name__)
+import pandas as pd
+import requests
+import streamlit as st
+from streamlit_extras.app_logo import add_logo
+import matplotlib.pyplot as plt
+import numpy as np
+import plotly.express as px
+from modules.nav import SideBarLinks
+
+# Call the SideBarLinks from the nav module in the modules directory
+SideBarLinks()
+
+# set the header of the page
+st.header('View and Categorize User Feedback')
+
+# You can access the session state to make a more customized/personalized app experience
+st.write(f"### Hi, {st.session_state['first_name']}.")
+
+# get the user feedback
+try:
+    feedback = requests.get("http://api:4000/f/feedback").json()
+except requests.exceptions.RequestException:
+    st.write("Could not connect to the database to fetch user feedback.")
+
+if "feedback_df" not in st.session_state:
+    st.session_state.feedback_df = pd.DataFrame(feedback)
+
+# title
+st.title("User Feedback Management Interface")
+
+# Sort by status
+st.subheader("Filter or Sort Feedback")
+filter_option = st.radio(
+    "Select Filter or Sort Option",
+    ["Default", "Show Only In Progress", "Show Only Implemented", "Show Only Rejected", "Sort by FeedbackID"],
+    index=0,
+)
+
+# Apply sorting or filtering
+filtered_feedback_df = st.session_state.feedback_df.copy()
+
+if
filter_option == "Show Only In Progress": + filtered_feedback_df = filtered_feedback_df[ + filtered_feedback_df["status"] == "In Progress" + ] +elif filter_option == "Show Only Implemented": + filtered_feedback_df = filtered_feedback_df[ + filtered_feedback_df["status"] == "Implemented" + ] +elif filter_option == "Show Only Rejected": + filtered_feedback_df = filtered_feedback_df[ + filtered_feedback_df["status"] == "Rejected" + ] +elif filter_option == "Sort by FeedbackID": + filtered_feedback_df = filtered_feedback_df.sort_values('feedbackID') + +else: + filtered_feedback_df["status"] = pd.Categorical( + filtered_feedback_df["status"], + categories=["In Progress", "Implemented", "Rejected"], + ordered=True, + ) + filtered_feedback_df = filtered_feedback_df.sort_values("status") + +# Select and rearrange columns +display_columns = ["status", "feedbackID", "userID", "timestamp", "header", "content"] +filtered_feedback_df = filtered_feedback_df[display_columns] + +filtered_feedback_df = filtered_feedback_df.reset_index(drop=True) + +# Display feedback table without index +st.subheader("Feedback List") +st.dataframe(filtered_feedback_df) + + +# Feedback update form +st.subheader("Update Feedback Status") +input_id = st.text_input("Enter Feedback ID") +new_status = st.radio( + "Select New Status", + ["In Progress", "Implemented", "Rejected"], + index=0, +) + + +if st.button("Update Status"): + # Validate ID + if input_id.isdigit() and int(input_id) in st.session_state.feedback_df["feedbackID"].values: + selected_id = int(input_id) + + # Update the status of the selected feedback in the current dataframe + st.session_state.feedback_df.loc[ + st.session_state.feedback_df["feedbackID"] == selected_id, "status" + ] = new_status + st.success(f"Feedback ID {selected_id} status updated to '{new_status}'!") + + filtered_feedback_df.loc[filtered_feedback_df["feedbackID"] == int(input_id), "status"] = new_status + # Update the status of the selected feedback in the SQL database 
+ + # data which is used for update + data = { + "feedbackID": selected_id, + "status": new_status + } + + response = requests.put("http://api:4000/f/feedback", json=data) + + # reflect the response + if response.status_code == 200: + print("Success:", response.json()) + else: + print("Error:", response.status_code, response.json()) + + else: + st.error("Please enter a valid Feedback ID.") + +# Display updated feedback table +st.subheader("Updated Feedback List") + +st.dataframe(filtered_feedback_df) + + diff --git a/app/src/pages/3005_Analytics_and_Trends.py b/app/src/pages/3005_Analytics_and_Trends.py new file mode 100644 index 0000000000..411e9b5a7f --- /dev/null +++ b/app/src/pages/3005_Analytics_and_Trends.py @@ -0,0 +1,75 @@ +import logging +logger = logging.getLogger(__name__) +import requests +import pandas as pd +import streamlit as st +from streamlit_extras.app_logo import add_logo +import matplotlib.pyplot as plt + +# Safe initialization of session state variables +if "first_name" not in st.session_state: + st.session_state.first_name = "Guest" + +# Call the SideBarLinks from the nav module +from modules.nav import SideBarLinks +SideBarLinks() + +# Set the header of the page +st.header('View Analytics and Trends') +st.write(f"### Hi, {st.session_state['first_name']}.") + +# Fetch industries +industries = [] +try: + industries = requests.get("http://api:4000/c/industries").json() +except Exception as e: + st.write(f"Error: {e}") + st.write("Could not connect to database to get industries.") +if "industries" not in st.session_state: + st.session_state.industries = pd.DataFrame(industries) + +# Fetch reviews +reviews = [] +try: + reviews = requests.get("http://api:4000/c/reviews").json() +except Exception as e: + st.write(f"Error: {e}") + st.write("Could not connect to database to get reviews.") +if "reviews" not in st.session_state: + st.session_state.reviews = pd.DataFrame(reviews) + +# Title +st.title("Trends in Student Reviews and Companies") 
+st.subheader("Trend Data") + +# Plot trends if data is valid +if not st.session_state.industries.empty and not st.session_state.reviews.empty: + fig, ax = plt.subplots(figsize=(10, 6)) + ax.plot(st.session_state.reviews["Industry"], st.session_state.reviews['NumReviews'], label="reviews", marker="o") + ax.plot(st.session_state.industries["Industry"], st.session_state.industries['NumCompany'], label="Companies", marker="o") + ax.set_ylabel("Count") + ax.set_title("Trends by Category") + ax.set_xticklabels(st.session_state.industries["Industry"], rotation=45, ha="right") + ax.legend() + ax.grid(True) + st.pyplot(fig) +else: + st.write("Insufficient data to display trends.") + +# Actionable Insights +st.subheader("Actionable Insights") +if not st.session_state.industries.empty: + most_companies = st.session_state.industries.sort_values("NumCompany").iloc[-1] + st.write(f"**Industry with Most Companies**: {most_companies['Industry']} ({most_companies['NumCompany']} companies)") + +if not st.session_state.reviews.empty: + most_reviews = st.session_state.reviews.sort_values("NumReviews").iloc[-1] + st.write(f"**Industry with Most Reviews**: {most_reviews['Industry']} ({most_reviews['NumReviews']} reviews)") + +# Display Data Table +if "Industry" in st.session_state.industries and "Industry" in st.session_state.reviews: + df = pd.merge(st.session_state.industries, st.session_state.reviews, on="Industry") + st.subheader("Data Table") + st.dataframe(df) +else: + st.write("Data table cannot be displayed due to missing columns.") diff --git a/app/src/pages/300_System_Administrator_Home.py b/app/src/pages/300_System_Administrator_Home.py new file mode 100644 index 0000000000..b3c9d582b1 --- /dev/null +++ b/app/src/pages/300_System_Administrator_Home.py @@ -0,0 +1,40 @@ +import logging +logger = logging.getLogger(__name__) +import streamlit as st +from modules.nav import SideBarLinks + +st.set_page_config(layout = 'wide') + +# Show appropriate sidebar links for the role of the 
currently logged in user +SideBarLinks() + +st.title(f"Welcome System Administrator, {st.session_state['first_name']}.") +st.write('') +st.write('') +st.write('### What would you like to do today?') + +if st.button('View Admin Dashboard', + type='primary', + use_container_width=True): + st.switch_page('pages/3001_Admin_Dashboard.py') + +if st.button('View and Moderate Flagged Posts', + type='primary', + use_container_width=True): + st.switch_page('pages/3002_Flagged_Posts.py') + +if st.button('Edit and Update Company Profiles', + type='primary', + use_container_width=True): + st.switch_page('pages/3003_Company_Profiles.py') + +if st.button('View and Categorize User Feedback', + type='primary', + use_container_width=True): + st.switch_page('pages/3004_User_Feedback.py') + +if st.button('View Analytics and Trends', + type='primary', + use_container_width=True): + st.switch_page('pages/3005_Analytics_and_Trends.py') + diff --git a/app/src/pages/30_About.py b/app/src/pages/30_About.py index 07a2e9aab2..462a53b9c8 100644 --- a/app/src/pages/30_About.py +++ b/app/src/pages/30_About.py @@ -2,17 +2,17 @@ from streamlit_extras.app_logo import add_logo from modules.nav import SideBarLinks + SideBarLinks() +st.sidebar.page_link("Home.py", label="Home", icon="🏠") + st.write("# About this App") + st.markdown ( """ - This is a demo app for CS 3200 Course Project. - - The goal of this demo is to provide information on the tech stack - being used as well as demo some of the features of the various platforms. + COUPE is a data-driven platform designed to revolutionize the Co-Op search process for Northeastern students by providing peer-to-peer insights into specific company experiences. While platforms like NUWorks provide job listings, they often fall short in detailing the real day-to-day life, culture, and interview dynamics students will encounter. 
COUPE fills this gap by giving students a space to share and access honest feedback about what it’s truly like working for various employers—covering everything from workplace culture to interview formats and application tips. - Stay tuned for more information and features to come! """ ) diff --git a/app/src/pages/4001_Analytics_Page.py b/app/src/pages/4001_Analytics_Page.py new file mode 100644 index 0000000000..189694e037 --- /dev/null +++ b/app/src/pages/4001_Analytics_Page.py @@ -0,0 +1,91 @@ +import logging + +logger = logging.getLogger(__name__) +import requests +import pandas as pd +import streamlit as st +from streamlit_extras.app_logo import add_logo +import matplotlib.pyplot as plt + +# Safe initialization of session state variables +if "first_name" not in st.session_state: + st.session_state.first_name = "Guest" + +# Call the SideBarLinks from the nav module +from modules.nav import SideBarLinks + +SideBarLinks() + +# Set the header of the page +st.header("View Analytics and Trends") +st.write(f"### Hi, {st.session_state['first_name']}.") + +# Fetch industries +industries = [] +try: + industries = requests.get("http://api:4000/c/industries").json() +except Exception as e: + st.write(f"Error: {e}") + st.write("Could not connect to database to get industries.") +if "industries" not in st.session_state: + st.session_state.industries = pd.DataFrame(industries) + +# Fetch reviews +reviews = [] +try: + reviews = requests.get("http://api:4000/c/reviews").json() +except Exception as e: + st.write(f"Error: {e}") + st.write("Could not connect to database to get reviews.") +if "reviews" not in st.session_state: + st.session_state.reviews = pd.DataFrame(reviews) + +# Title +st.title("Trends in Student Reviews and Companies") +st.subheader("Trend Data") + +# Plot trends if data is valid +if not st.session_state.industries.empty and not st.session_state.reviews.empty: + fig, ax = plt.subplots(figsize=(10, 6)) + ax.plot( + st.session_state.reviews["Industry"], + 
st.session_state.reviews["NumReviews"], + label="reviews", + marker="o", + ) + ax.plot( + st.session_state.industries["Industry"], + st.session_state.industries["NumCompany"], + label="Companies", + marker="o", + ) + ax.set_ylabel("Count") + ax.set_title("Trends by Category") + ax.set_xticklabels(st.session_state.industries["Industry"], rotation=45, ha="right") + ax.legend() + ax.grid(True) + st.pyplot(fig) +else: + st.write("Insufficient data to display trends.") + +# Actionable Insights +st.subheader("Actionable Insights") +if not st.session_state.industries.empty: + most_companies = st.session_state.industries.sort_values("NumCompany").iloc[-1] + st.write( + f"**Industry with Most Companies**: {most_companies['Industry']} ({most_companies['NumCompany']} companies)" + ) + +if not st.session_state.reviews.empty: + most_reviews = st.session_state.reviews.sort_values("NumReviews").iloc[-1] + st.write( + f"**Industry with Most Reviews**: {most_reviews['Industry']} ({most_reviews['NumReviews']} reviews)" + ) + +# Display Data Table +if "Industry" in st.session_state.industries and "Industry" in st.session_state.reviews: + df = pd.merge(st.session_state.industries, st.session_state.reviews, on="Industry") + st.subheader("Data Table") + st.dataframe(df) +else: + st.write("Data table cannot be displayed due to missing columns.") diff --git a/app/src/pages/4002_Feedback_Data.py b/app/src/pages/4002_Feedback_Data.py new file mode 100644 index 0000000000..a893441b19 --- /dev/null +++ b/app/src/pages/4002_Feedback_Data.py @@ -0,0 +1,75 @@ +import logging +logger = logging.getLogger(__name__) +import pandas as pd +import requests +import streamlit as st +from streamlit_extras.app_logo import add_logo +import matplotlib.pyplot as plt +import numpy as np +import plotly.express as px +from modules.nav import SideBarLinks + +# Call the SideBarLinks from the nav module in the modules directory +SideBarLinks() + +# set the header of the page +st.header('View and Categorize User 
Feedback') + +# You can access the session state to make a more customized/personalized app experience +st.write(f"### Hi, {st.session_state['first_name']}.") + +# get the feedback +feedback = [] +try: + feedback = requests.get("http://api:4000/f/feedback").json() +except Exception as e: + st.write(f"Error: {e}") + st.write("Could not connect to database to get user feedback.") + +if "feedback_df" not in st.session_state: + st.session_state.feedback_df = pd.DataFrame(feedback) + +# title +st.title("User Feedback Management Interface") + +# Sort by status +st.subheader("Filter or Sort Feedback") +filter_option = st.radio( + "Select Filter or Sort Option", + ["Default", "Show Only In Progress", "Show Only Implemented", "Show Only Rejected", "Sort by FeedbackID"], + index=0, +) + +# Apply sorting or filtering +filtered_feedback_df = st.session_state.feedback_df.copy() + +if filter_option == "Show Only In Progress": + filtered_feedback_df = filtered_feedback_df[ + filtered_feedback_df["status"] == "In Progress" + ] +elif filter_option == "Show Only Implemented": + filtered_feedback_df = filtered_feedback_df[ + filtered_feedback_df["status"] == "Implemented" + ] +elif filter_option == "Show Only Rejected": + filtered_feedback_df = filtered_feedback_df[ + filtered_feedback_df["status"] == "Rejected" + ] +elif filter_option == "Sort by FeedbackID": + filtered_feedback_df = filtered_feedback_df.sort_values('feedbackID') + +else: + filtered_feedback_df["status"] = pd.Categorical( + filtered_feedback_df["status"], + categories=["In Progress", "Implemented", "Rejected"], + ordered=True, + ) + filtered_feedback_df = filtered_feedback_df.sort_values("status") + +# Select and rearrange columns +display_columns = ["status", "feedbackID", "userID", "timestamp", "header", "content"] +filtered_feedback_df = filtered_feedback_df[display_columns] + +filtered_feedback_df = filtered_feedback_df.reset_index(drop=True) + +# Display feedback table without index +st.subheader("Feedback List") +st.dataframe(filtered_feedback_df) diff --git 
a/app/src/pages/00_Pol_Strat_Home.py b/app/src/pages/400_Analyst_Home.py similarity index 56% rename from app/src/pages/00_Pol_Strat_Home.py rename to app/src/pages/400_Analyst_Home.py index 3d02f25552..b976976914 100644 --- a/app/src/pages/00_Pol_Strat_Home.py +++ b/app/src/pages/400_Analyst_Home.py @@ -1,6 +1,5 @@ import logging logger = logging.getLogger(__name__) - import streamlit as st from modules.nav import SideBarLinks @@ -9,17 +8,19 @@ # Show appropriate sidebar links for the role of the currently logged in user SideBarLinks() -st.title(f"Welcome Political Strategist, {st.session_state['first_name']}.") +st.title(f"Welcome Analyst, {st.session_state['first_name']}.") st.write('') st.write('') st.write('### What would you like to do today?') -if st.button('View World Bank Data Visualization', - type='primary', + +if st.button('View Analytics and Trends', + type='primary', use_container_width=True): - st.switch_page('pages/01_World_Bank_Viz.py') + st.switch_page('pages/4001_Analytics_Page.py') -if st.button('View World Map Demo', - type='primary', +if st.button('View Feedback Data', + type='primary', use_container_width=True): - st.switch_page('pages/02_Map_Demo.py') \ No newline at end of file + st.switch_page('pages/4002_Feedback_Data.py') + diff --git a/database-files/10_coopPlatformDB.sql b/database-files/10_coopPlatformDB.sql new file mode 100644 index 0000000000..0ecf2b1405 --- /dev/null +++ b/database-files/10_coopPlatformDB.sql @@ -0,0 +1,142 @@ +-- Create the database +CREATE DATABASE IF NOT EXISTS CoopPlatform; + + +-- Use the newly created database +USE CoopPlatform; + + +-- Create User table +CREATE TABLE IF NOT EXISTS User ( + userID INT AUTO_INCREMENT PRIMARY KEY, + firstName VARCHAR(50), + lastName VARCHAR(50), + email VARCHAR(100) UNIQUE, + major VARCHAR(50), + skills TEXT, + interests TEXT, + isAdmin BOOLEAN DEFAULT FALSE, + isAnalyst BOOLEAN DEFAULT FALSE +); + + +-- Create Companies table +CREATE TABLE IF NOT EXISTS Companies ( + 
companyID INT AUTO_INCREMENT PRIMARY KEY, + name VARCHAR(100), + description TEXT, + createdAt DATETIME, + updatedAt DATETIME +); + + +-- Create Industries table +CREATE TABLE IF NOT EXISTS Industries ( + industryID INT AUTO_INCREMENT PRIMARY KEY, + name VARCHAR(100) +); + + +-- Create the relationship table for Industries and Companies +CREATE TABLE IF NOT EXISTS CompanyIndustry ( + companyID INT, + industryID INT, + PRIMARY KEY (companyID, industryID), + FOREIGN KEY (companyID) REFERENCES Companies(companyID), + FOREIGN KEY (industryID) REFERENCES Industries(industryID) +); + + +-- Create Location table +CREATE TABLE IF NOT EXISTS Location ( + locationID INT AUTO_INCREMENT PRIMARY KEY, + companyID INT, + address TEXT, + city VARCHAR(100), + state_province VARCHAR(100), + country VARCHAR(100), + FOREIGN KEY (companyID) REFERENCES Companies(companyID) +); + + +-- Create Role table +CREATE TABLE IF NOT EXISTS Role ( + roleID INT AUTO_INCREMENT PRIMARY KEY, + companyID INT, + locationID INT, + roleName VARCHAR(100), + description TEXT, + skillsRequired TEXT, + FOREIGN KEY (companyID) REFERENCES Companies(companyID), + FOREIGN KEY (locationID) REFERENCES Location(locationID) +); + + +-- Create Reviews table +CREATE TABLE IF NOT EXISTS Reviews ( + reviewID INT AUTO_INCREMENT PRIMARY KEY, + userID INT, + roleID INT, + createdAt DATETIME DEFAULT CURRENT_TIMESTAMP, + updatedAt DATETIME ON UPDATE CURRENT_TIMESTAMP, + publishedAt DATETIME, + reviewType VARCHAR(50), + heading VARCHAR(100), + content TEXT, + views INT DEFAULT 0, + likes INT DEFAULT 0, + isFlagged BOOLEAN DEFAULT FALSE, + FOREIGN KEY (userID) REFERENCES User(userID) ON DELETE CASCADE, + FOREIGN KEY (roleID) REFERENCES Role(roleID) ON DELETE CASCADE +); + +-- Create Comments table +CREATE TABLE IF NOT EXISTS Comments ( + commentID INT AUTO_INCREMENT PRIMARY KEY, + reviewID INT, + userID INT, + parentCommentID INT DEFAULT NULL, + createdAt DATETIME DEFAULT CURRENT_TIMESTAMP, + updatedAt DATETIME ON UPDATE 
CURRENT_TIMESTAMP, + content TEXT, + likes INT DEFAULT 0, + isFlagged BOOLEAN DEFAULT FALSE, + FOREIGN KEY (reviewID) REFERENCES Reviews(reviewID) ON DELETE CASCADE, + FOREIGN KEY (userID) REFERENCES User(userID), + FOREIGN KEY (parentCommentID) REFERENCES Comments(commentID) ON DELETE CASCADE +); + + + +-- Create Badges table +CREATE TABLE IF NOT EXISTS Badges ( + badgeID INT AUTO_INCREMENT PRIMARY KEY, + badgeName VARCHAR(50) +); + + +-- Create the relationship table for User and Badges +CREATE TABLE IF NOT EXISTS UserBadges ( + userID INT, + badgeID INT, + PRIMARY KEY (userID, badgeID), + FOREIGN KEY (userID) REFERENCES User(userID), + FOREIGN KEY (badgeID) REFERENCES Badges(badgeID) +); + + +-- Create Feedback table +CREATE TABLE IF NOT EXISTS Feedback ( + feedbackID INT AUTO_INCREMENT PRIMARY KEY, + userID INT, + timestamp DATETIME DEFAULT CURRENT_TIMESTAMP, + header VARCHAR(100), + content TEXT, + status VARCHAR(100) DEFAULT 'In Progress', + FOREIGN KEY (userID) REFERENCES User(userID) +); + + + + + diff --git a/database-files/12_CoopPlatform_Data.sql b/database-files/12_CoopPlatform_Data.sql new file mode 100644 index 0000000000..16eebc7b6c --- /dev/null +++ b/database-files/12_CoopPlatform_Data.sql @@ -0,0 +1,418 @@ +-- Use the newly created database +USE CoopPlatform; + + +INSERT INTO User (firstName, lastName, email, major, skills, interests, isAdmin, isAnalyst) VALUES +('Alice', 'Smith', 'alice.smith@example.com', 'Computer Science', 'Java, Python', 'AI, Robotics', FALSE, TRUE), +('Bob', 'Johnson', 'bob.johnson@example.com', 'Mathematics', 'R, MATLAB', 'Statistics, Data Science', FALSE, TRUE), +('Charlie', 'Brown', 'charlie.brown@example.com', 'Physics', 'C++, Fortran', 'Quantum Mechanics, Astrophysics', FALSE, FALSE), +('Diana', 'Prince', 'diana.prince@example.com', 'Electrical Engineering', 'C, Embedded Systems', 'Renewable Energy, IoT', FALSE, FALSE), +('Ethan', 'Hunt', 'ethan.hunt@example.com', 'Cybersecurity', 'Networking, Ethical Hacking', 'Digital 
Security, Cryptography', FALSE, TRUE), +('Fiona', 'Gallagher', 'fiona.gallagher@example.com', 'Psychology', 'SPSS, Data Analysis', 'Behavioral Studies, Neuropsychology', FALSE, FALSE), +('George', 'Michaels', 'george.michaels@example.com', 'Business', 'Excel, SQL', 'Entrepreneurship, Management', FALSE, FALSE), +('Hannah', 'Montana', 'hannah.montana@example.com', 'Music', 'Composition, Audio Editing', 'Songwriting, Producing', FALSE, FALSE), +('Ian', 'Malcolm', 'ian.malcolm@example.com', 'Biology', 'Genetics, Bioinformatics', 'Evolution, Conservation', FALSE, TRUE), +('Julia', 'Roberts', 'julia.roberts@example.com', 'Drama', 'Acting, Directing', 'Theater, Film Studies', FALSE, FALSE), +('Kevin', 'Hart', 'kevin.hart@example.com', 'Performing Arts', 'Comedy, Storytelling', 'Improv, Stand-up', FALSE, FALSE), +('Laura', 'Croft', 'laura.croft@example.com', 'Archaeology', 'Field Research, Mapping', 'Ancient Civilizations, Exploration', FALSE, TRUE), +('Michael', 'Scott', 'michael.scott@example.com', 'Business Administration', 'Leadership, Communication', 'Sales, Marketing', TRUE, FALSE), +('Nancy', 'Drew', 'nancy.drew@example.com', 'Criminology', 'Investigation, Profiling', 'Mysteries, Forensics', FALSE, FALSE), +('Oliver', 'Queen', 'oliver.queen@example.com', 'Political Science', 'Policy Analysis, Strategy', 'Social Justice, Governance', FALSE, FALSE), +('Penelope', 'Garcia', 'penelope.garcia@example.com', 'Information Systems', 'Database Management, Programming', 'Technology, Cybersecurity', FALSE, TRUE), +('Quincy', 'Jones', 'quincy.jones@example.com', 'Music', 'Arranging, Producing', 'Jazz, Classical', FALSE, FALSE), +('Rachel', 'Green', 'rachel.green@example.com', 'Fashion Design', 'Sketching, Sewing', 'Trends, Styling', FALSE, FALSE), +('Steve', 'Jobs', 'steve.jobs@example.com', 'Product Design', 'Innovation, Prototyping', 'Technology, Design Thinking', FALSE, TRUE), +('Tracy', 'Morgan', 'tracy.morgan@example.com', 'Comedy Writing', 'Humor, Screenwriting', 
'Sitcoms, Stand-up', FALSE, FALSE), +('Uma', 'Thurman', 'uma.thurman@example.com', 'Film Studies', 'Directing, Editing', 'Cinema, Storytelling', FALSE, FALSE), +('Victor', 'Frankenstein', 'victor.frankenstein@example.com', 'Biotechnology', 'CRISPR, Synthetic Biology', 'Genetic Engineering, Bioethics', FALSE, TRUE), +('Will', 'Smith', 'will.smith@example.com', 'Acting', 'Acting, Producing', 'Film, Music', FALSE, FALSE), +('Xander', 'Harris', 'xander.harris@example.com', 'History', 'Archival Research, Writing', 'Medieval History, Literature', FALSE, FALSE), +('Yara', 'Greyjoy', 'yara.greyjoy@example.com', 'Marine Biology', 'Oceanography, Diving', 'Marine Conservation, Ecology', FALSE, TRUE), +('Zara', 'Larsson', 'zara.larsson@example.com', 'Music Production', 'Singing, Mixing', 'Pop Music, Performance', FALSE, FALSE), +('Adam', 'West', 'adam.west@example.com', 'Theater', 'Acting, Public Speaking', 'Drama, Performance', FALSE, FALSE), +('Bella', 'Swan', 'bella.swan@example.com', 'Literature', 'Creative Writing, Editing', 'Romance, Fiction', FALSE, FALSE), +('Chris', 'Evans', 'chris.evans@example.com', 'Film Production', 'Editing, Cinematography', 'Action, Direction', FALSE, TRUE), +('Derek', 'Shepherd', 'derek.shepherd@example.com', 'Medicine', 'Surgery, Research', 'Neurosurgery, Health Care', FALSE, FALSE), +('Evelyn', 'Salt', 'evelyn.salt@example.com', 'International Relations', 'Negotiation, Strategy', 'Conflict Resolution, Diplomacy', FALSE, FALSE), +('Finn', 'Hudson', 'finn.hudson@example.com', 'Music Education', 'Choir, Performance', 'Teaching, Vocal Training', FALSE, FALSE), +('Grace', 'Hopper', 'grace.hopper@example.com', 'Computer Science', 'Programming, Algorithms', 'Innovation, Women in Tech', TRUE, TRUE), +('Harvey', 'Specter', 'harvey.specter@example.com', 'Law', 'Litigation, Negotiation', 'Corporate Law, Strategy', FALSE, FALSE); + +INSERT INTO Companies (name, description, createdAt, updatedAt) VALUES +('TechNova Inc.', 'A leading company in AI and 
machine learning solutions.', '2020-05-14 12:34:56', '2022-11-01 15:45:30'), +('GreenFields Ltd.', 'Specializing in sustainable agriculture and organic farming techniques.', '2021-02-20 09:22:10', '2023-05-14 10:40:20'), +('SkyHigh Aerospace', 'An innovator in aerospace engineering and satellite technology.', '2020-09-10 14:18:20', '2023-01-27 19:34:50'), +('EcoEnergy Co.', 'Focused on renewable energy solutions, including solar and wind power.', '2020-06-01 11:45:10', '2021-12-19 16:22:10'), +('BrightFuture Education', 'Providing online and in-person educational programs worldwide.', '2021-07-15 08:30:00', '2024-04-12 18:11:45'), +('HealthCore Pharmaceuticals', 'Researching and developing cutting-edge medications.', '2020-11-05 10:20:30', '2022-08-21 09:50:40'), +('UrbanBuilders LLC', 'An urban construction and architecture firm creating smart cities.', '2022-03-18 15:12:00', '2023-10-25 20:05:50'), +('AquaLife Systems', 'Revolutionizing water purification and desalination technologies.', '2021-09-12 13:25:30', '2024-01-10 14:40:30'), +('GlobalTech Solutions', 'Delivering IT consulting and software development services.', '2021-05-25 10:50:50', '2023-07-15 17:05:20'), +('Stellar Entertainment', 'Producing films, music, and streaming content for a global audience.', '2020-12-19 14:15:45', '2022-06-30 18:25:40'), +('NextGen Robotics', 'Designing robots for industrial, medical, and household applications.', '2020-04-10 11:10:10', '2024-03-05 10:55:00'), +('FinPro Banking', 'A financial services firm offering innovative banking solutions.', '2021-08-23 16:45:25', '2022-11-18 12:35:50'), +('AutoFuture Ltd.', 'Developing electric and autonomous vehicles.', '2020-01-05 09:30:15', '2023-09-29 15:45:25'), +('BioGenomics Inc.', 'Advancing genetic research and biotechnology applications.', '2022-02-11 18:00:00', '2023-06-12 11:22:33'), +('BlueOcean Logistics', 'A global logistics provider specializing 
in ocean freight.', '2021-03-17 14:50:35', '2024-02-01 12:00:45'), +('CyberShield Security', 'Protecting businesses with advanced cybersecurity solutions.', '2020-08-21 10:05:45', '2023-03-30 14:45:50'), +('Peak Performance Sports', 'Manufacturing high-quality sports equipment and apparel.', '2020-10-13 09:15:00', '2022-10-05 16:50:40'), +('EcoHomes Construction', 'Building eco-friendly and energy-efficient homes.', '2021-06-30 14:45:20', '2023-12-02 20:10:10'), +('VirtuMed Technologies', 'Developing virtual reality solutions for medical training.', '2020-03-25 13:40:10', '2022-09-21 15:45:45'), +('Gourmet World Foods', 'An international distributor of gourmet and specialty foods.', '2020-07-14 11:30:00', '2024-01-22 10:20:50'), +('Visionary Designs', 'Offering innovative and stylish interior design services.', '2021-01-10 12:15:25', '2023-05-01 09:40:30'), +('PetCare Innovations', 'Creating cutting-edge products for pet health and wellness.', '2021-04-07 14:22:30', '2023-10-15 13:45:50'), +('TravelSphere Ltd.', 'Providing unique travel experiences and adventure tours.', '2020-02-19 08:45:15', '2023-07-11 17:30:20'), +('FusionTech Manufacturing', 'Specializing in advanced manufacturing and automation.', '2020-11-11 12:50:45', '2024-05-20 19:50:40'), +('DataPulse Analytics', 'Helping businesses harness big data for decision-making.', '2022-01-18 10:25:00', '2023-08-30 14:55:30'), +('GreenTech Farms', 'Implementing innovative vertical farming solutions.', '2021-11-21 15:35:20', '2024-04-14 09:45:00'), +('CloudNet Systems', 'Offering cloud computing and data storage services.', '2021-10-09 17:15:45', '2023-09-03 11:05:10'), +('ArtisanCrafts Co.', 'Supporting local artisans through handcrafted product sales.', '2020-06-08 09:45:10', '2022-05-25 18:55:40'), +('BrightPath Logistics', 'Providing last-mile delivery solutions for e-commerce.', '2021-07-05 13:50:50', '2024-03-19 15:20:30'), +('QuantumLeap Solutions', 'Researching quantum computing and its applications.', 
'2020-09-15 14:10:45', '2023-06-25 12:45:50'); + + + + +INSERT INTO Industries (name) VALUES +('Information Technology'), +('Healthcare'), +('Education'), +('Finance'), +('Automotive'), +('Biotechnology'), +('Aerospace'), +('Renewable Energy'), +('Construction'), +('Entertainment'), +('Robotics'), +('Logistics'), +('Cybersecurity'), +('Sports and Recreation'), +('Real Estate'), +('Food and Beverage'), +('Interior Design'), +('Pet Care'), +('Travel and Tourism'), +('Manufacturing'), +('Data Analytics'), +('Agriculture'), +('Cloud Computing'), +('Arts and Crafts'), +('E-commerce'), +('Quantum Computing'), +('Fashion and Apparel'), +('Gaming'), +('Pharmaceuticals'), +('Marine and Aquatic Technologies'); + + +INSERT INTO CompanyIndustry (companyID, industryID) VALUES +(1, 1), -- TechNova Inc. -> Information Technology +(1, 13), -- TechNova Inc. -> Cybersecurity +(2, 22), -- GreenFields Ltd. -> Agriculture +(2, 8), -- GreenFields Ltd. -> Renewable Energy +(3, 7), -- SkyHigh Aerospace -> Aerospace +(4, 8), -- EcoEnergy Co. -> Renewable Energy +(4, 20), -- EcoEnergy Co. -> Manufacturing +(5, 3), -- BrightFuture Education -> Education +(6, 2), -- HealthCore Pharmaceuticals -> Healthcare +(6, 29), -- HealthCore Pharmaceuticals -> Pharmaceuticals +(7, 9), -- UrbanBuilders LLC -> Construction +(8, 30), -- AquaLife Systems -> Marine and Aquatic Technologies +(9, 1), -- GlobalTech Solutions -> Information Technology +(10, 10), -- Stellar Entertainment -> Entertainment +(11, 11), -- NextGen Robotics -> Robotics +(11, 20), -- NextGen Robotics -> Manufacturing +(12, 4), -- FinPro Banking -> Finance +(13, 5), -- AutoFuture Ltd. -> Automotive +(13, 20), -- AutoFuture Ltd. -> Manufacturing +(14, 6), -- BioGenomics Inc. -> Biotechnology +(14, 29), -- BioGenomics Inc. 
-> Pharmaceuticals +(15, 12), -- BlueOcean Logistics -> Logistics +(16, 13), -- CyberShield Security -> Cybersecurity +(17, 14), -- Peak Performance Sports -> Sports and Recreation +(18, 9), -- EcoHomes Construction -> Construction +(18, 8), -- EcoHomes Construction -> Renewable Energy +(19, 1), -- VirtuMed Technologies -> Information Technology +(19, 3), -- VirtuMed Technologies -> Education +(20, 16), -- Gourmet World Foods -> Food and Beverage +(21, 17), -- Visionary Designs -> Interior Design +(22, 18), -- PetCare Innovations -> Pet Care +(23, 19), -- TravelSphere Ltd. -> Travel and Tourism +(24, 20), -- FusionTech Manufacturing -> Manufacturing +(25, 21), -- DataPulse Analytics -> Data Analytics +(26, 22), -- GreenTech Farms -> Agriculture +(27, 23), -- CloudNet Systems -> Cloud Computing +(28, 24), -- ArtisanCrafts Co. -> Arts and Crafts +(29, 25), -- BrightPath Logistics -> E-commerce +(30, 26); -- QuantumLeap Solutions -> Quantum Computing + +INSERT INTO Location (companyID, address, city, state_province, country) VALUES +(1, '123 Innovation Blvd', 'San Francisco', 'California', 'USA'), +(2, '456 Greenway Rd', 'Seattle', 'Washington', 'USA'), +(3, '789 SkyHigh Dr', 'Huntsville', 'Alabama', 'USA'), +(4, '321 Solar St', 'Austin', 'Texas', 'USA'), +(5, '654 Bright Ln', 'Boston', 'Massachusetts', 'USA'), +(6, '987 Pharma Ave', 'Cambridge', 'Massachusetts', 'USA'), +(7, '111 Builder Way', 'New York', 'New York', 'USA'), +(8, '222 Water Works', 'Miami', 'Florida', 'USA'), +(9, '333 Tech Plaza', 'San Jose', 'California', 'USA'), +(10, '444 Entertainment Row', 'Los Angeles', 'California', 'USA'), +(11, '555 Robotics St', 'Pittsburgh', 'Pennsylvania', 'USA'), +(12, '666 Finance Blvd', 'Chicago', 'Illinois', 'USA'), +(13, '777 Auto Ln', 'Detroit', 'Michigan', 'USA'), +(14, '888 BioTech Rd', 'San Diego', 'California', 'USA'), +(15, '999 Ocean Dr', 'Savannah', 'Georgia', 'USA'), +(16, '121 Cyber Ave', 'Austin', 'Texas', 'USA'), +(17, '131 Sports Way', 'Portland', 
'Oregon', 'USA'), +(18, '141 Eco Homes Blvd', 'Denver', 'Colorado', 'USA'), +(19, '151 MedTech Dr', 'Houston', 'Texas', 'USA'), +(20, '161 Gourmet St', 'Paris', 'Île-de-France', 'France'), +(21, '171 Design Ln', 'Milan', 'Lombardy', 'Italy'), +(22, '181 PetCare Way', 'Sydney', 'New South Wales', 'Australia'), +(23, '191 Travel Blvd', 'London', 'England', 'United Kingdom'), +(24, '201 FusionTech Rd', 'Munich', 'Bavaria', 'Germany'), +(25, '211 DataPulse Ave', 'Toronto', 'Ontario', 'Canada'), +(26, '221 Green Farms Rd', 'Amsterdam', 'North Holland', 'Netherlands'), +(27, '231 CloudNet Dr', 'Dublin', 'Leinster', 'Ireland'), +(28, '241 Artisan Row', 'Kyoto', 'Kyoto Prefecture', 'Japan'), +(29, '251 BrightPath Blvd', 'Shanghai', 'Shanghai', 'China'), +(30, '261 Quantum Leap Way', 'Zurich', 'Zurich', 'Switzerland'), +(1, '271 Silicon Way', 'Palo Alto', 'California', 'USA'), +(12, '282 Green Circle', 'Vancouver', 'British Columbia', 'Canada'), +(23, '293 Spaceport Dr', 'Cape Canaveral', 'Florida', 'USA'), +(24, '304 Renewable St', 'Berlin', 'Berlin', 'Germany'), +(5, '315 BrightStar Rd', 'Oslo', 'Oslo', 'Norway'), +(6, '326 Pharma Labs', 'Hyderabad', 'Telangana', 'India'), +(27, '337 Builder Ln', 'Tokyo', 'Tokyo Prefecture', 'Japan'), +(18, '348 Aqua Center', 'Cape Town', 'Western Cape', 'South Africa'), +(19, '359 TechHub Blvd', 'Bangalore', 'Karnataka', 'India'), +(10, '370 Creative Row', 'Stockholm', 'Stockholm County', 'Sweden'), +(21, '381 Robotics Ave', 'Seoul', 'Seoul', 'South Korea'), +(30, '392 Financial Way', 'Zurich', 'Zurich', 'Switzerland'), +(23, '403 Auto Plaza', 'Stuttgart', 'Baden-Württemberg', 'Germany'), +(14, '414 Biotech Blvd', 'Tel Aviv', 'Tel Aviv District', 'Israel'), +(5, '425 Logistics Lane', 'Dubai', 'Dubai', 'United Arab Emirates'); + +INSERT INTO Role (companyID, locationID, roleName, description, skillsRequired) VALUES +(1, 1, 'Machine Learning Engineer', 'Design and implement machine learning models for real-world applications.', 'Python, 
TensorFlow, Data Analysis'), +(2, 2, 'Sustainability Analyst', 'Analyze and optimize sustainable farming practices.', 'Data Analysis, Agricultural Science'), +(3, 3, 'Aerospace Engineer', 'Develop advanced spacecraft and satellite systems.', 'C++, Aerodynamics, CAD'), +(4, 4, 'Renewable Energy Consultant', 'Consult on renewable energy projects and strategies.', 'Project Management, Solar Technology'), +(5, 5, 'Educational Content Developer', 'Create engaging educational materials for online learning.', 'Curriculum Design, Content Writing'), +(6, 6, 'Clinical Research Scientist', 'Conduct research on pharmaceutical treatments and solutions.', 'Pharmacology, Research Methods'), +(7, 7, 'Architectural Designer', 'Design urban structures and smart city layouts.', 'AutoCAD, Urban Planning'), +(8, 8, 'Water Systems Engineer', 'Develop innovative water purification and desalination systems.', 'Hydraulics, System Design'), +(9, 9, 'Software Developer', 'Build scalable software solutions for clients.', 'Java, SQL, Cloud Platforms'), +(10, 10, 'Video Producer', 'Produce and manage video content for global distribution.', 'Adobe Premiere, Cinematography'), +(11, 11, 'Robotics Engineer', 'Develop robotics solutions for industrial applications.', 'Python, Robotics Frameworks'), +(12, 12, 'Financial Analyst', 'Analyze financial data to provide strategic advice.', 'Excel, Financial Modeling'), +(13, 13, 'Automotive Engineer', 'Design and test autonomous vehicle systems.', 'Matlab, Simulink, AI'), +(14, 14, 'Biotechnology Researcher', 'Conduct research in genetic engineering and biotechnology.', 'CRISPR, Lab Techniques'), +(15, 15, 'Logistics Manager', 'Oversee and optimize supply chain operations.', 'Supply Chain Management, SAP'), +(16, 16, 'Cybersecurity Specialist', 'Protect systems from cyber threats and vulnerabilities.', 'Penetration Testing, Network Security'), +(17, 17, 'Sports Equipment Designer', 'Design high-performance sports equipment.', 'Material Science, CAD'), 
+(18, 18, 'Eco-Friendly Construction Manager', 'Lead eco-friendly building projects.', 'Project Management, Green Building Standards'), +(19, 19, 'Virtual Reality Developer', 'Develop VR applications for medical training.', 'Unity, C#, 3D Modeling'), +(20, 20, 'Food Product Manager', 'Manage and oversee the development of gourmet food products.', 'Food Science, Marketing'), +(21, 21, 'Interior Designer', 'Create innovative and stylish interior designs.', 'Sketching, 3D Rendering'), +(22, 22, 'Veterinary Product Specialist', 'Develop and promote veterinary products.', 'Animal Science, Product Development'), +(23, 23, 'Tour Guide Manager', 'Coordinate and manage unique travel experiences.', 'Hospitality Management, Customer Service'), +(24, 24, 'Manufacturing Process Engineer', 'Optimize manufacturing processes and workflows.', 'Process Design, Lean Manufacturing'), +(25, 25, 'Data Scientist', 'Analyze big data for actionable insights.', 'Python, Machine Learning, Data Visualization'), +(26, 26, 'Agricultural Engineer', 'Implement innovative agricultural technologies.', 'Mechanical Engineering, Agricultural Systems'), +(27, 27, 'Cloud Infrastructure Engineer', 'Manage and deploy cloud-based solutions.', 'AWS, Kubernetes, Networking'), +(28, 28, 'Artisan Product Designer', 'Design and promote handcrafted artisan products.', 'Creativity, Marketing'), +(29, 29, 'E-commerce Operations Manager', 'Manage e-commerce logistics and operations.', 'Inventory Management, Analytics'), +(30, 30, 'Quantum Computing Researcher', 'Research and develop quantum computing applications.', 'Quantum Mechanics, Algorithms'); + +INSERT INTO Reviews (userID, roleID, publishedAt, reviewType, heading, content, views, likes, isFlagged) VALUES +(1, 1, '2024-01-01 10:00:00', 'InterviewReport', 'Tricky questions about Python', 'It was so hard to figure out their python related questions.', 150, 35, FALSE), +(2, 2, '2024-01-05 15:30:00', 'Feedback', 'Improved Sustainability Practices', 'The company 
is making significant strides in sustainability, but communication needs improvement.', 100, 20, FALSE), +(3, 3, '2024-01-10 09:45:00', 'Review', 'Exciting Role in Aerospace', 'Working on cutting-edge technology was inspiring. Would recommend it to any aspiring engineer.', 200, 50, TRUE), +(4, 4, '2024-01-15 14:20:00', 'Insight', 'Great Opportunity in Renewable Energy', 'A rewarding experience with excellent leadership and vision.', 120, 25, FALSE), +(5, 5, '2024-01-20 11:10:00', 'Experience', 'Rewarding Work Environment', 'Collaborative culture and strong focus on education made my role enjoyable.', 90, 15, TRUE), +(6, 6, '2024-01-25 17:00:00', 'Feedback', 'Cutting-Edge Research', 'Involved in exciting research but workload was quite heavy.', 140, 30, FALSE), +(7, 7, '2024-02-01 08:00:00', 'Review', 'Urban Development at Its Best', 'Loved working on innovative projects for smart cities.', 85, 10, FALSE), +(8, 8, '2024-02-05 10:30:00', 'Insight', 'Advancing Water Purification', 'Meaningful work but limited opportunities for growth.', 70, 12, FALSE), +(9, 9, '2024-02-10 13:15:00', 'Experience', 'Dynamic Work Environment', 'Fast-paced and challenging, great place for software enthusiasts.', 180, 40, FALSE), +(10, 10, '2024-02-15 16:45:00', 'Feedback', 'Creative and Supportive', 'Perfect workplace for creative professionals.', 75, 18, TRUE), +(11, 11, '2024-02-20 19:20:00', 'Review', 'Robotics Projects Worth Pursuing', 'Exciting projects but management needs improvement.', 95, 20, FALSE), +(12, 12, '2024-02-25 07:30:00', 'Insight', 'Great Start for Financial Analysts', 'Supportive team and ample learning opportunities.', 105, 22, FALSE), +(13, 13, '2024-03-01 11:50:00', 'Experience', 'Innovative Role in Automotive', 'Hands-on experience with cutting-edge technologies.', 130, 35, FALSE), +(14, 14, '2024-03-05 09:40:00', 'Feedback', 'Inspiring Research Environment', 'Focus on innovation, but better work-life balance needed.', 165, 28, FALSE), +(15, 15, '2024-03-10 
12:25:00', 'Review', 'Streamlined Logistics Management', 'Efficient processes and a dynamic team.', 80, 14, FALSE), +(16, 16, '2024-03-15 14:00:00', 'Insight', 'Top-notch Cybersecurity Expertise', 'Fantastic workplace for security professionals.', 200, 50, FALSE), +(17, 17, '2024-03-20 15:30:00', 'Experience', 'Challenging Sports Equipment Design', 'Opportunity to innovate, but tight deadlines.', 60, 10, FALSE), +(18, 18, '2024-03-25 16:45:00', 'Feedback', 'Sustainable and Collaborative', 'Loved the eco-friendly approach and teamwork.', 95, 25, FALSE), +(19, 19, '2024-03-30 10:10:00', 'Review', 'Immersive VR Development', 'Great exposure to VR development, but lacked mentorship.', 120, 18, TRUE), +(20, 20, '2024-04-01 13:00:00', 'Insight', 'Delicious Career Growth', 'Enjoyed working on gourmet food projects.', 80, 16, FALSE), +(21, 21, '2024-04-05 14:30:00', 'Experience', 'Creative Interior Design Projects', 'Amazing projects but needs better client communication.', 110, 20, FALSE), +(22, 22, '2024-04-10 09:15:00', 'Feedback', 'Innovative Pet Products', 'Great workplace with a fun and collaborative culture.', 90, 22, FALSE), +(23, 23, '2024-04-15 10:45:00', 'Review', 'Rewarding Travel Role', 'TravelSphere provides ample learning opportunities.', 115, 18, FALSE), +(24, 24, '2024-04-20 11:30:00', 'Insight', 'Streamlined Manufacturing Process', 'High-tech projects but long hours.', 130, 24, FALSE), +(25, 25, '2024-04-25 16:00:00', 'Experience', 'Data-Driven Insights', 'DataPulse offers cutting-edge analytics projects.', 170, 40, FALSE), +(26, 26, '2024-05-01 09:40:00', 'Feedback', 'Smart Agricultural Practices', 'Good place to grow for agricultural engineers.', 140, 35, TRUE), +(27, 27, '2024-05-05 11:30:00', 'Review', 'Innovative Cloud Technologies', 'Great workplace for cloud engineers.', 125, 30, FALSE), +(28, 28, '2024-05-10 14:15:00', 'Insight', 'Artisan Product Design', 'Creative work with room for growth.', 95, 12, FALSE), +(29, 29, '2024-05-15 10:00:00', 
'Experience', 'Efficient E-commerce Operations', 'Fast-paced environment with rewarding challenges.', 80, 18, FALSE), +(30, 30, '2024-05-20 15:00:00', 'Feedback', 'Quantum Computing Innovations', 'Fascinating projects but steep learning curve.', 145, 28, FALSE); + +INSERT INTO Reviews (userID, roleID, publishedAt, reviewType, heading, content, views, likes, isFlagged) VALUES +(2, 1, '2024-01-05 15:30:00', 'InterviewReport', 'Machine Learning Frameworks', 'They grilled me on machine learning frameworks like TensorFlow and PyTorch. Be ready for some deep technical discussions.', 130, 40, FALSE), +(3, 2, '2024-01-10 09:45:00', 'InterviewReport', 'Sustainability Practices in Agriculture', 'Expect detailed questions on sustainable farming techniques. They tested my knowledge of real-world agricultural challenges.', 180, 50, TRUE), +(4, 2, '2024-01-15 14:20:00', 'InterviewReport', 'Sustainability Metrics', 'They focused on sustainability metrics and how I would apply them to optimize farming practices.', 120, 30, FALSE), +(5, 3, '2024-01-20 11:10:00', 'InterviewReport', 'Aerospace Engineering Concepts', 'Be prepared for questions on spacecraft propulsion and satellite design. Technical, but stimulating!', 160, 45, FALSE), +(6, 3, '2024-01-25 17:00:00', 'InterviewReport', 'Advanced Aerospace Technology', 'They asked about advanced aerospace technologies and how I would improve existing designs.', 150, 40, FALSE), +(7, 4, '2024-02-01 08:00:00', 'InterviewReport', 'Renewable Energy Challenges', 'They asked about the challenges in renewable energy, especially regarding solar technology. 
Make sure to know your data.', 140, 38, FALSE), +(8, 4, '2024-02-05 10:30:00', 'InterviewReport', 'Solar Energy Design', 'The interview focused on solar energy design and real-life applications of green energy.', 110, 28, FALSE), +(9, 5, '2024-02-10 13:15:00', 'InterviewReport', 'Curriculum Design in Education', 'They asked how to design curricula for diverse learners, focusing on interactive learning strategies.', 180, 50, TRUE), +(10, 5, '2024-02-15 16:45:00', 'InterviewReport', 'Educational Content Challenges', 'Expect questions about educational content development under time constraints and adapting materials to different learning styles.', 140, 35, FALSE), +(11, 6, '2024-02-20 19:20:00', 'InterviewReport', 'Clinical Trials and Data Analysis', 'The focus was on clinical trial design, including statistical methods and data analysis challenges.', 200, 55, FALSE), +(12, 6, '2024-02-25 07:30:00', 'InterviewReport', 'Pharmaceutical Research Experience', 'Expect to discuss your past experience in pharmaceutical research, including data-driven decision making.', 170, 45, FALSE), +(13, 7, '2024-03-01 11:50:00', 'InterviewReport', 'Urban Development Strategies', 'They asked about sustainable urban development strategies and smart city concepts.', 150, 38, FALSE), +(14, 7, '2024-03-05 09:40:00', 'InterviewReport', 'Smart City Design', 'Expect to discuss your approach to designing smart city infrastructures, focusing on technological integration.', 140, 35, FALSE), +(15, 8, '2024-03-10 12:25:00', 'InterviewReport', 'Water Purification Systems', 'They asked about advanced water purification technologies and challenges in real-world applications.', 130, 33, FALSE), +(16, 8, '2024-03-15 14:00:00', 'InterviewReport', 'Water Systems Engineering Problems', 'The interview involved real-life engineering problems related to water desalination and purification.', 120, 30, TRUE), +(17, 9, '2024-03-20 15:30:00', 'InterviewReport', 'Software Development Process', 'The interview 
focused on software development methodologies and tools, including problem-solving during live coding challenges.', 200, 55, FALSE), +(18, 12, '2024-03-25 16:45:00', 'InterviewReport', 'Coding Challenges in Software Engineering', 'Expect live coding challenges focusing on algorithms, data structures, and system design.', 180, 50, FALSE), +(19, 13, '2024-03-30 10:10:00', 'InterviewReport', 'Creative Video Production', 'Expect questions on how to manage video projects from pre-production to distribution.', 140, 38, TRUE), +(20, 14, '2024-04-01 13:00:00', 'InterviewReport', 'Creative Content Production Techniques', 'They wanted to know my approach to creative video production, including time management and collaborating with teams under tight deadlines.', 130, 35, FALSE), +(21, 15, '2024-04-05 14:30:00', 'InterviewReport', 'Robotics Engineering Solutions', 'They focused on advanced robotics engineering, including the integration of AI into robotic systems.', 190, 50, TRUE), +(22, 15, '2024-04-10 09:15:00', 'InterviewReport', 'Robotics Project Management', 'Expect questions on managing large-scale robotics projects, especially in an industrial setting.', 180, 48, FALSE), +(23, 16, '2024-04-15 10:45:00', 'InterviewReport', 'Financial Modeling Techniques', 'Expect in-depth questions about financial modeling, with a focus on real-world applications in the financial sector.', 170, 45, FALSE), +(24, 16, '2024-04-20 11:30:00', 'InterviewReport', 'Advanced Financial Analysis', 'They asked me to walk through complex financial analysis and forecasting techniques under tight deadlines.', 160, 42, FALSE), +(25, 17, '2024-04-25 16:00:00', 'InterviewReport', 'Automotive Engineering Challenges', 'The interview focused on automotive systems design and innovative testing methods.', 150, 40, FALSE), +(26, 18, '2024-04-30 09:40:00', 'InterviewReport', 'Vehicle System Design', 'They asked about the design of autonomous vehicle systems and the technologies driving them.', 140, 38, FALSE), 
+(27, 18, '2024-05-05 11:30:00', 'InterviewReport', 'Biotechnology Research and Development', 'Expect to discuss biotechnology innovations, particularly in gene editing and CRISPR technology.', 160, 45, FALSE),
+(28, 19, '2024-05-10 14:15:00', 'InterviewReport', 'Innovations in Genetic Engineering', 'I had to talk about the latest advancements in genetic engineering and their real-world applications.', 170, 50, FALSE),
+(29, 20, '2024-05-15 10:00:00', 'InterviewReport', 'Logistics and Supply Chain Optimization', 'Expect to be questioned on logistics challenges and supply chain optimization techniques under pressure.', 130, 35, FALSE),
+(30, 21, '2024-05-20 15:00:00', 'InterviewReport', 'Supply Chain Process Optimization', 'They asked how I would optimize supply chain processes and improve operational efficiency.', 140, 38, FALSE);
+
+-- commentID is assigned by AUTO_INCREMENT in row order, so each parentCommentID
+-- must reference an earlier comment on the same review.
+INSERT INTO Comments (reviewID, userID, parentCommentID, content, likes, isFlagged) VALUES
+(1, 2, NULL, 'This sounds like a fantastic experience! Thanks for sharing.', 10, FALSE),
+(1, 3, 1, 'Absolutely agree! I had a similar experience.', 5, FALSE),
+(2, 4, NULL, 'Do you think the communication issue is company-wide?', 8, FALSE),
+(2, 1, 3, 'Yes, I think it varies by team, but it’s something to improve.', 6, FALSE),
+(3, 5, NULL, 'Aerospace is such an exciting field. How was the workload?', 12, FALSE),
+(4, 6, NULL, 'I’ve been considering applying here. Any tips for getting in?', 15, TRUE),
+(4, 2, 6, 'Focus on renewable energy projects in your portfolio.', 7, FALSE),
+(5, 7, NULL, 'I love collaborative environments. Sounds like a great role.', 4, FALSE),
+(6, 8, NULL, 'Heavy workloads can be tough. Was the management supportive?', 9, FALSE),
+(6, 9, 9, 'They were supportive but often stretched thin.', 3, TRUE),
+(7, 10, NULL, 'Smart cities are the future.
How innovative were the projects?', 13, TRUE),
+(8, 11, NULL, 'What kind of growth opportunities were you hoping for?', 6, FALSE),
+(8, 3, 12, 'Leadership roles or cross-functional projects.', 4, FALSE),
+(9, 12, NULL, 'Tech companies often have dynamic environments. Did you feel valued?', 11, FALSE),
+(10, 13, NULL, 'Supportive workplaces are so important for creatives.', 8, FALSE),
+(11, 14, NULL, 'Robotics is fascinating! What kind of robots did you work on?', 14, FALSE),
+(11, 15, 16, 'Mostly industrial robots for manufacturing.', 7, FALSE),
+(12, 16, NULL, 'Finance is challenging. Did you have a good team?', 10, FALSE),
+(13, 17, NULL, 'I’d love to work on autonomous vehicles. Any advice?', 18, FALSE),
+(14, 18, NULL, 'How innovative was the genetic research you were involved in?', 20, FALSE),
+(14, 19, 20, 'Very cutting-edge, especially in CRISPR technology.', 12, FALSE),
+(15, 20, NULL, 'Efficient logistics management is key to success.', 9, FALSE),
+(16, 21, NULL, 'What kind of tools were used for cybersecurity?', 11, TRUE),
+(16, 22, 23, 'Mostly Splunk, Wireshark, and custom tools.', 5, FALSE),
+(17, 23, NULL, 'Tight deadlines can be tough. How was the work-life balance?', 7, FALSE),
+(18, 24, NULL, 'Eco-friendly construction is inspiring. What projects stood out?', 14, TRUE),
+(19, 25, NULL, 'VR development is fascinating. What applications did you focus on?', 17, FALSE),
+(19, 26, 27, 'Medical training simulations. Very impactful.', 10, FALSE),
+(20, 27, NULL, 'Gourmet food development sounds interesting! How creative was it?', 8, TRUE),
+(21, 28, NULL, 'Interior design is so rewarding.
What was your favorite project?', 13, TRUE); + + +INSERT INTO Badges (badgeName) VALUES +('Top Contributor'), +('Expert Reviewer'), +('Helpful Commenter'), +('Insightful Reviewer'), +('Early Adopter'), +('Collaboration Champion'), +('Creative Thinker'), +('Innovation Advocate'), +('Team Player'), +('Knowledge Sharer'), +('Problem Solver'), +('Critical Thinker'), +('Community Builder'), +('Leadership Star'), +('Data Enthusiast'); + + +INSERT INTO UserBadges (userID, badgeID) VALUES +(1, 1), -- User 1: Top Contributor +(1, 2), -- User 1: Expert Reviewer +(2, 3), -- User 2: Helpful Commenter +(2, 4), -- User 2: Insightful Reviewer +(3, 5), -- User 3: Early Adopter +(4, 6), -- User 4: Collaboration Champion +(4, 7), -- User 4: Creative Thinker +(5, 8), -- User 5: Innovation Advocate +(6, 9), -- User 6: Team Player +(6, 10), -- User 6: Knowledge Sharer +(7, 11), -- User 7: Problem Solver +(8, 12), -- User 8: Critical Thinker +(9, 13), -- User 9: Community Builder +(10, 14), -- User 10: Leadership Star +(11, 15), -- User 11: Data Enthusiast +(12, 1), -- User 12: Top Contributor +(13, 4), -- User 13: Insightful Reviewer +(14, 6), -- User 14: Collaboration Champion +(15, 7), -- User 15: Creative Thinker +(16, 8), -- User 16: Innovation Advocate +(17, 3), -- User 17: Helpful Commenter +(18, 10), -- User 18: Knowledge Sharer +(19, 11), -- User 19: Problem Solver +(20, 12), -- User 20: Critical Thinker +(21, 13), -- User 21: Community Builder +(22, 14), -- User 22: Leadership Star +(23, 15), -- User 23: Data Enthusiast +(24, 5), -- User 24: Early Adopter +(25, 9), -- User 25: Team Player +(26, 2); -- User 26: Expert Reviewer + + + +INSERT INTO Feedback (userID, header, content, status) VALUES +(1, 'Great Service', 'I really appreciate the recent updates to the app.', 'In Progress'), +(2, 'Feature Request', 'Could you add a scheduling feature for tasks?', 'Implemented'), +(3, 'Bug Report', 'The app freezes when trying to generate a report.', 'In Progress'), +(4, 'Login Issue', 
'Unable to reset my password using the forgot password option.', 'Rejected'), +(5, 'UI Feedback', 'The new layout is great, but the text is too small.', 'Implemented'), +(6, 'Performance Issue', 'The site becomes unresponsive during peak hours.', 'In Progress'), +(7, 'Dark Mode', 'Dark mode would improve usability at night.', 'Implemented'), +(8, 'Security Concern', 'Is there two-factor authentication available?', 'Rejected'), +(9, 'Account Feature', 'Can you provide an option to link multiple accounts?', 'In Progress'), +(10, 'Pricing Feedback', 'The pricing tiers seem unclear to new users.', 'Rejected'), +(11, 'New Feature', 'Adding analytics dashboards would be a great feature.', 'Implemented'), +(12, 'Mobile App Issue', 'The mobile app crashes when scrolling through large lists.', 'In Progress'), +(13, 'Positive Feedback', 'The team is doing a fantastic job with updates.', 'Implemented'), +(14, 'Bug in Search', 'The search function does not return relevant results.', 'In Progress'), +(15, 'User Permissions', 'Please allow admins to set custom permissions.', 'Implemented'), +(16, 'Customer Support', 'Your customer support resolved my issue quickly!', 'Rejected'), +(17, 'Feature Request', 'It would be great to have offline mode in the app.', 'Implemented'), +(18, 'Slow Loading', 'Pages take a long time to load on mobile devices.', 'In Progress'), +(19, 'API Enhancement', 'Can you add more filters to the API endpoints?', 'Rejected'), +(20, 'Email Notifications', 'The email notifications are inconsistent.', 'In Progress'), +(21, 'Dashboard Update', 'I like the new dashboard, but it feels a bit cluttered.', 'Implemented'), +(22, 'Integration Request', 'Can you integrate with XYZ service?', 'Rejected'), +(23, 'Navigation Issue', 'The navigation menu disappears on smaller screens.', 'In Progress'), +(24, 'Search Enhancement', 'Can you add autocomplete to the search bar?', 'Implemented'), +(25, 'Feature Idea', 'How about a collaborative editing feature?', 'In 
Progress'), +(26, 'UI Feedback', 'The color contrast is not accessible for all users.', 'Rejected'), +(27, 'Report Generation', 'Reports take too long to generate.', 'Implemented'), +(28, 'Settings Option', 'Allow users to export their settings as a file.', 'In Progress'), +(29, 'Appreciation', 'The recent updates have been excellent!', 'Implemented'), +(30, 'Feedback on Beta', 'The beta version is unstable on older devices.', 'Rejected'), +(1, 'Suggestion', 'Add a save as draft option for posts.', 'Implemented'), +(2, 'Account Settings', 'The account settings are difficult to navigate.', 'In Progress'), +(3, 'Feature Request', 'Include a timer for sessions.', 'Rejected'), +(4, 'UI Improvement', 'The text alignment on the settings page is off.', 'In Progress'), +(5, 'Server Downtime', 'The server goes down frequently in my region.', 'Implemented'), +(6, 'Password Management', 'Add a password strength indicator.', 'Rejected'), +(7, 'Onboarding', 'The onboarding process is not very user-friendly.', 'In Progress'), +(8, 'Localization', 'Can you add support for more languages?', 'Implemented'), +(9, 'Social Media', 'Integrate with popular social media platforms.', 'Rejected'), +(10, 'Billing System', 'The billing system has recurring issues.', 'In Progress');
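+
+-- Optional sanity check (not part of the seed data). The threaded Comments rows
+-- above rely on AUTO_INCREMENT assigning IDs in insert order, so every
+-- parentCommentID must point at an existing earlier comment on the same review.
+-- Assuming the primary key column is named commentID (matching the userID/reviewID
+-- naming used elsewhere in this schema), this query returns any reply whose parent
+-- is missing or belongs to a different review; it should return zero rows:
+SELECT c.commentID, c.reviewID, c.parentCommentID
+FROM Comments c
+LEFT JOIN Comments p ON p.commentID = c.parentCommentID
+WHERE c.parentCommentID IS NOT NULL
+  AND (p.commentID IS NULL OR p.reviewID <> c.reviewID);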