diff --git a/NBSETUP.md b/NBSETUP.md
index b3c683b30..3754f6da4 100644
--- a/NBSETUP.md
+++ b/NBSETUP.md
@@ -20,7 +20,7 @@ We recommend you create a Python virtual environment ([Miniconda](https://conda.
# install just the base SDK
pip install azureml-sdk
-# clone the sample repoistory
+# clone the sample repository
git clone https://github.com/Azure/MachineLearningNotebooks.git
# below steps are optional
@@ -57,10 +57,10 @@ Please make sure you start with the [Configuration](configuration.ipynb) noteboo
You need to have Docker engine installed locally and running. Open a command line window and type the following command.
-__Note:__ We use version `1.0.10` below as an exmaple, but you can replace that with any available version number you like.
+__Note:__ We use version `1.0.10` below as an example, but you can replace that with any available version number you like.
```sh
-# clone the sample repoistory
+# clone the sample repository
git clone https://github.com/Azure/MachineLearningNotebooks.git
# change current directory to the folder
diff --git a/contrib/RAPIDS/azure-ml-with-nvidia-rapids.ipynb b/contrib/RAPIDS/azure-ml-with-nvidia-rapids.ipynb
index 5d45bbf47..8bfb38485 100644
--- a/contrib/RAPIDS/azure-ml-with-nvidia-rapids.ipynb
+++ b/contrib/RAPIDS/azure-ml-with-nvidia-rapids.ipynb
@@ -334,7 +334,7 @@
"source": [
"RunConfiguration is used to submit jobs to Azure Machine Learning service. When creating RunConfiguration for a job, users can either \n",
"1. specify a Docker image with prebuilt conda environment and use it without any modifications to run the job, or \n",
- "2. specify a Docker image as the base image and conda or pip packages as dependnecies to let AML build a new Docker image with a conda environment containing specified dependencies to use in the job\n",
+ "2. specify a Docker image as the base image and conda or pip packages as dependencies to let AML build a new Docker image with a conda environment containing specified dependencies to use in the job\n",
"\n",
"The second option is the recommended option in AML. \n",
"The following steps have code for both options. You can pick the one that is more appropriate for your requirements. "
@@ -351,7 +351,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "The following code shows how to install RAPIDS using conda. The `rapids.yml` file contains the list of packages necessary to run this tutorial. **NOTE:** Initial build of the image might take up to 20 minutes as the service needs to build and cache the new image; once the image is built the subequent runs use the cached image and the overhead is minimal."
+ "The following code shows how to install RAPIDS using conda. The `rapids.yml` file contains the list of packages necessary to run this tutorial. **NOTE:** Initial build of the image might take up to 20 minutes as the service needs to build and cache the new image; once the image is built the subsequent runs use the cached image and the overhead is minimal."
]
},
{
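
For orientation, option 2 above (base Docker image plus conda dependencies) can be sketched roughly as follows. The base-image tag and the `rapids.yml` path are assumptions for illustration, not taken from this patch:

```python
# Sketch of option 2: let AML build an image from a base Docker image
# plus a conda environment file (image tag assumed).
from azureml.core import Environment
from azureml.core.runconfig import RunConfiguration

rapids_env = Environment.from_conda_specification(
    name="rapids", file_path="rapids.yml"
)
rapids_env.docker.base_image = "rapidsai/rapidsai:cuda10.0-runtime-ubuntu18.04"  # assumed tag
run_config = RunConfiguration()
run_config.environment = rapids_env  # AML builds and caches the image on first run
```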
diff --git a/contrib/fairness/fairlearn-azureml-mitigation.ipynb b/contrib/fairness/fairlearn-azureml-mitigation.ipynb
index 68040ed55..0e8f8bf55 100644
--- a/contrib/fairness/fairlearn-azureml-mitigation.ipynb
+++ b/contrib/fairness/fairlearn-azureml-mitigation.ipynb
@@ -266,7 +266,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Looking at the disparity in accuracy when we select 'Sex' as the sensitive feature, we see that males have an error rate about three times greater than the females. More interesting is the disparity in opportunitiy - males are offered loans at three times the rate of females.\n",
+ "Looking at the disparity in accuracy when we select 'Sex' as the sensitive feature, we see that males have an error rate about three times greater than the females. More interesting is the disparity in opportunity - males are offered loans at three times the rate of females.\n",
"\n",
"Despite the fact that we removed the feature from the training data, our predictor still discriminates based on sex. This demonstrates that simply ignoring a protected attribute when fitting a predictor rarely eliminates unfairness. There will generally be enough other features correlated with the removed attribute to lead to disparate impact."
]
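
The per-group error and selection rates discussed above can be computed along these lines with a recent fairlearn API (which may differ from the version this notebook pins); `y_true`, `y_pred`, and the `sex` column are assumed to exist:

```python
# Illustrative only: per-group accuracy and selection rate with fairlearn.
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sex,
)
print(mf.by_group)      # metric per sex, as discussed above
print(mf.difference())  # gap between groups for each metric
```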
diff --git a/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb b/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb
index f4697e29d..0295fa9c0 100644
--- a/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb
+++ b/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb
@@ -313,7 +313,7 @@
"|**allowed_models** | *List* of *strings* indicating machine learning algorithms for AutoML to use in this run. Same values listed above for **blocked_models** allowed for **allowed_models**.|\n",
"|**experiment_exit_score**| Value indicating the target for *primary_metric*.
Once the target is surpassed the run terminates.|\n",
"|**experiment_timeout_hours**| Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|\n",
- "|**enable_early_stopping**| Flag to enble early termination if the score is not improving in the short term.|\n",
+ "|**enable_early_stopping**| Flag to enable early termination if the score is not improving in the short term.|\n",
"|**featurization**| 'auto' / 'off' Indicator for whether featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**training_data**|Input dataset, containing both features and label column.|\n",
diff --git a/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb b/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb
index ba8611610..39a5cc993 100644
--- a/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb
+++ b/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb
@@ -197,7 +197,7 @@
" \"primary_metric\": \"average_precision_score_weighted\",\n",
" \"enable_early_stopping\": True,\n",
" \"max_concurrent_iterations\": 2, # This is a limit for testing purpose, please increase it as per cluster size\n",
- " \"experiment_timeout_hours\": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ablity to find the best model possible\n",
+ " \"experiment_timeout_hours\": 0.25, # This is a time limit for testing purposes, remove it for real use cases, this will drastically limit ability to find the best model possible\n",
" \"verbosity\": logging.INFO,\n",
"}\n",
"\n",
diff --git a/how-to-use-azureml/automated-machine-learning/continuous-retraining/auto-ml-continuous-retraining.ipynb b/how-to-use-azureml/automated-machine-learning/continuous-retraining/auto-ml-continuous-retraining.ipynb
index da4718180..f9f2d461e 100644
--- a/how-to-use-azureml/automated-machine-learning/continuous-retraining/auto-ml-continuous-retraining.ipynb
+++ b/how-to-use-azureml/automated-machine-learning/continuous-retraining/auto-ml-continuous-retraining.ipynb
@@ -22,7 +22,7 @@
"metadata": {},
"source": [
"## Introduction\n",
- "In this example we use AutoML and Pipelines to enable contious retraining of a model based on updates to the training dataset. We will create two pipelines, the first one to demonstrate a training dataset that gets updated over time. We leverage time-series capabilities of `TabularDataset` to achieve this. The second pipeline utilizes pipeline `Schedule` to trigger continuous retraining. \n",
+ "In this example we use AutoML and Pipelines to enable continuous retraining of a model based on updates to the training dataset. We will create two pipelines, the first one to demonstrate a training dataset that gets updated over time. We leverage time-series capabilities of `TabularDataset` to achieve this. The second pipeline utilizes pipeline `Schedule` to trigger continuous retraining. \n",
"Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.\n",
"In this notebook you will learn how to:\n",
"* Create an Experiment in an existing Workspace.\n",
@@ -90,7 +90,7 @@
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
- "dstor = ws.get_default_datastore()\n",
+ "dstore = ws.get_default_datastore()\n",
"\n",
"# Choose a name for the run history container in the workspace.\n",
"experiment_name = \"retrain-noaaweather\"\n",
@@ -367,13 +367,13 @@
"\n",
"metrics_data = PipelineData(\n",
" name=\"metrics_data\",\n",
- " datastore=dstor,\n",
+ " datastore=dstore,\n",
" pipeline_output_name=metrics_output_name,\n",
" training_output=TrainingOutput(type=\"Metrics\"),\n",
")\n",
"model_data = PipelineData(\n",
" name=\"model_data\",\n",
- " datastore=dstor,\n",
+ " datastore=dstore,\n",
" pipeline_output_name=best_model_output_name,\n",
" training_output=TrainingOutput(type=\"Model\"),\n",
")"
@@ -503,7 +503,7 @@
" pipeline_parameters={\"ds_name\": dataset, \"model_name\": \"noaaweatherds\"},\n",
" pipeline_id=published_pipeline.id,\n",
" experiment_name=experiment_name,\n",
- " datastore=dstor,\n",
+ " datastore=dstore,\n",
" wait_for_provisioning=True,\n",
" polling_interval=1440,\n",
")"
@@ -550,7 +550,7 @@
" pipeline_parameters={\"ds_name\": dataset},\n",
" pipeline_id=published_pipeline.id,\n",
" experiment_name=experiment_name,\n",
- " datastore=dstor,\n",
+ " datastore=dstore,\n",
" wait_for_provisioning=True,\n",
" polling_interval=1440,\n",
")"
diff --git a/how-to-use-azureml/automated-machine-learning/continuous-retraining/upload_weather_data.py b/how-to-use-azureml/automated-machine-learning/continuous-retraining/upload_weather_data.py
index 28f30a65b..b3080fd10 100644
--- a/how-to-use-azureml/automated-machine-learning/continuous-retraining/upload_weather_data.py
+++ b/how-to-use-azureml/automated-machine-learning/continuous-retraining/upload_weather_data.py
@@ -103,7 +103,7 @@ def get_noaa_data(start_time, end_time):
print("Argument 1(ds_name): %s" % args.ds_name)
-dstor = ws.get_default_datastore()
+dstore = ws.get_default_datastore()
register_dataset = False
end_time = datetime.utcnow()
@@ -143,7 +143,7 @@ def get_noaa_data(start_time, end_time):
os.makedirs(folder_name, exist_ok=True)
train_df.to_csv(file_path, index=False)
- dstor.upload_files(
+ dstore.upload_files(
files=[file_path], target_path=folder_name, overwrite=True, show_progress=True
)
else:
@@ -151,7 +151,7 @@ def get_noaa_data(start_time, end_time):
if register_dataset:
ds = Dataset.Tabular.from_delimited_files(
- dstor.path("{}/**/*.csv".format(args.ds_name)),
+ dstore.path("{}/**/*.csv".format(args.ds_name)),
partition_format="/{partition_date:yyyy/MM/dd/HH/mm/ss}/data.csv",
)
ds.register(ws, name=args.ds_name)
diff --git a/how-to-use-azureml/automated-machine-learning/experimental/regression-model-proxy/auto-ml-regression-model-proxy.ipynb b/how-to-use-azureml/automated-machine-learning/experimental/regression-model-proxy/auto-ml-regression-model-proxy.ipynb
index a6c9ec670..89f8b0214 100644
--- a/how-to-use-azureml/automated-machine-learning/experimental/regression-model-proxy/auto-ml-regression-model-proxy.ipynb
+++ b/how-to-use-azureml/automated-machine-learning/experimental/regression-model-proxy/auto-ml-regression-model-proxy.ipynb
@@ -185,7 +185,7 @@
"metadata": {},
"source": [
"The split data will be used in the remote compute by ModelProxy and locally to compare results.\n",
- "So, we need to persist the split data to avoid descrepencies from different package versions in the local and remote."
+ "So, we need to persist the split data to avoid discrepancies from different package versions in the local and remote."
]
},
{
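
A minimal sketch of persisting the split once so the local comparison and the remote ModelProxy run read identical data; the DataFrame variables and file names are assumptions:

```python
# Persist the split to files so both environments consume the same data,
# instead of re-splitting under possibly different package versions.
import pandas as pd

train_df.to_csv("train_split.csv", index=False)
test_df.to_csv("test_split.csv", index=False)

# Later, local and remote consumers both read the persisted files.
train_df = pd.read_csv("train_split.csv")
test_df = pd.read_csv("test_split.csv")
```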
diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-backtest-many-models/auto-ml-forecasting-backtest-many-models.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-backtest-many-models/auto-ml-forecasting-backtest-many-models.ipynb
index 36f4eb4f0..1e867a7f3 100644
--- a/how-to-use-azureml/automated-machine-learning/forecasting-backtest-many-models/auto-ml-forecasting-backtest-many-models.ipynb
+++ b/how-to-use-azureml/automated-machine-learning/forecasting-backtest-many-models/auto-ml-forecasting-backtest-many-models.ipynb
@@ -360,7 +360,7 @@
" \"track_child_runs\": False,\n",
"}\n",
"\n",
- "mm_paramters = ManyModelsTrainParameters(\n",
+ "mm_parameters = ManyModelsTrainParameters(\n",
" automl_settings=automl_settings, partition_column_names=partition_column_names\n",
")"
]
@@ -405,7 +405,7 @@
" node_count=2,\n",
" process_count_per_node=2,\n",
" run_invocation_timeout=920,\n",
- " train_pipeline_parameters=mm_paramters,\n",
+ " train_pipeline_parameters=mm_parameters,\n",
")"
]
},
@@ -506,7 +506,7 @@
"| **node_count** | The number of compute nodes to be used for running the user script. We recommend to start with the number of cores per node (varies by compute sku). |\n",
"| **process_count_per_node** | The number of processes per node.\n",
"| **train_run_id** | \\[Optional\\] The run id of the hierarchy training, by default it is the latest successful training many model run in the experiment. |\n",
- "| **train_experiment_name** | \\[Optional\\] The train experiment that contains the train pipeline. This one is only needed when the train pipeline is not in the same experiement as the inference pipeline. |\n",
+ "| **train_experiment_name** | \\[Optional\\] The train experiment that contains the train pipeline. This one is only needed when the train pipeline is not in the same experiment as the inference pipeline. |\n",
"| **process_count_per_node** | \\[Optional\\] The number of processes per node, by default it's 4. |"
]
},
diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-backtest-single-model/assets/data_split.py b/how-to-use-azureml/automated-machine-learning/forecasting-backtest-single-model/assets/data_split.py
index c9b6b8a89..46833465e 100644
--- a/how-to-use-azureml/automated-machine-learning/forecasting-backtest-single-model/assets/data_split.py
+++ b/how-to-use-azureml/automated-machine-learning/forecasting-backtest-single-model/assets/data_split.py
@@ -22,7 +22,7 @@
parsed_args, _ = parser.parse_known_args()
step_number = int(parsed_args.step_number)
step_size = int(parsed_args.step_size)
-# Create the working dirrectory to store the temporary csv files.
+# Create the working directory to store the temporary csv files.
working_dir = parsed_args.out_dir
os.makedirs(working_dir, exist_ok=True)
# Set input and output
diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb
index 69e4e3199..aac7b7571 100644
--- a/how-to-use-azureml/automated-machine-learning/forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb
+++ b/how-to-use-azureml/automated-machine-learning/forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb
@@ -515,7 +515,7 @@
"source": [
"# Backtest the best model \n",
"\n",
- "For model backtesting we will use the same parameters we used to backtest AutoML. All the models, we have obtained in the previous run were registered in our workspace. To identify the model, each was assigned a tag with the last trainig date."
+ "For model backtesting we will use the same parameters we used to backtest AutoML. All the models, we have obtained in the previous run were registered in our workspace. To identify the model, each was assigned a tag with the last training date."
]
},
{
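
Since the text above says each backtest model carries a training-date tag, retrieval can be sketched as follows; the tag key and value are assumptions:

```python
# Hypothetical lookup of a backtest model by its training-date tag.
from azureml.core import Model

models = Model.list(ws, name=model_name, tags=[["last_training_date", "2011-01-01"]])
best_model = models[0] if models else None
```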
diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb
index fd03dee10..f9efc7cd7 100644
--- a/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb
+++ b/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb
@@ -185,7 +185,7 @@
"\n",
"We will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. \n",
"\n",
- "With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasets#dataset-types) to be used training and prediction."
+ "With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the dataset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasets#dataset-types) to be used training and prediction."
]
},
{
@@ -329,7 +329,7 @@
"|**label_column_name**|The name of the label column.|\n",
"|**compute_target**|The remote compute for training.|\n",
"|**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.|\n",
- "|**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.|\n",
+ "|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|\n",
"|**forecasting_parameters**|A class holds all the forecasting related parameters.|\n"
]
},
@@ -504,7 +504,7 @@
"metadata": {},
"source": [
"### Retrieving forecasts from the model\n",
- "We have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and expecuted on the remote compute."
+ "We have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote computer."
]
},
{
diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/run_forecast.py b/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/run_forecast.py
index cb1d9d886..c141b105f 100644
--- a/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/run_forecast.py
+++ b/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/run_forecast.py
@@ -11,7 +11,7 @@ def run_remote_inference(
target_column_name,
inference_folder="./forecast",
):
- # Create local directory to copy the model.pkl and forecsting_script.py files into.
+ # Create local directory to copy the model.pkl and forecasting_script.py files into.
# These files will be uploaded to and executed on the compute instance.
os.makedirs(inference_folder, exist_ok=True)
shutil.copy("forecasting_script.py", inference_folder)
diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-forecast-function/auto-ml-forecasting-function.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-forecast-function/auto-ml-forecasting-function.ipynb
index 04d4de4a2..28f190ba4 100644
--- a/how-to-use-azureml/automated-machine-learning/forecasting-forecast-function/auto-ml-forecasting-function.ipynb
+++ b/how-to-use-azureml/automated-machine-learning/forecasting-forecast-function/auto-ml-forecasting-function.ipynb
@@ -437,7 +437,7 @@
"source": [
"# The data set contains hourly data, the training set ends at 01/02/2000 at 05:00\n",
"\n",
- "# These are predictions we are asking the model to make (does not contain thet target column y),\n",
+ "# These are predictions we are asking the model to make (does not contain the target column y),\n",
"# for 6 periods beginning with 2000-01-02 06:00, which immediately follows the training data\n",
"X_test"
]
@@ -765,7 +765,7 @@
"\n",
"\n",
"\n",
- "Internally, we apply the forecaster in an iterative manner and finish the forecast task in two interations. In the first iteration, we apply the forecaster and get the prediction for the first forecast-horizon periods (y_pred1). In the second iteraction, y_pred1 is used as the context to produce the prediction for the next forecast-horizon periods (y_pred2). The combination of (y_pred1 and y_pred2) gives the results for the total forecast periods. \n",
+ "Internally, we apply the forecaster in an iterative manner and finish the forecast task in two iterations. In the first iteration, we apply the forecaster and get the prediction for the first forecast-horizon periods (y_pred1). In the second iteration, y_pred1 is used as the context to produce the prediction for the next forecast-horizon periods (y_pred2). The combination of (y_pred1 and y_pred2) gives the results for the total forecast periods. \n",
"\n",
"A caveat: forecast accuracy will likely be worse the farther we predict into the future since errors are compounded with recursive application of the forecaster.\n",
"\n",
@@ -840,7 +840,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Similarly with the simple senarios illustrated above, forecasting farther than the forecast horizon in other senarios like 'multiple time-series', 'Destination-date forecast', and 'forecast away from the training data' are also automatically handled by the `forecast()` function. "
+ "Similarly with the simple scenarios illustrated above, forecasting farther than the forecast horizon in other scenarios like 'multiple time-series', 'Destination-date forecast', and 'forecast away from the training data' are also automatically handled by the `forecast()` function. "
]
}
],
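
The iterative pattern described above (predictions feeding back in as context) can be captured in a conceptual sketch; `forecaster` is an assumed callable standing in for the fitted model's forecast API, not the notebook's actual method:

```python
import numpy as np

# Conceptual sketch of recursive forecasting past the horizon: each pass
# forecasts one horizon ahead and feeds its predictions back as context.
def recursive_forecast(forecaster, X_test, horizon):
    predictions = []
    context = None  # known actuals, then prior predictions
    for start in range(0, len(X_test), horizon):
        window = X_test[start:start + horizon]
        y_pred = forecaster(window, context)  # predict one horizon ahead
        predictions.append(y_pred)
        context = y_pred  # predictions become context for the next pass
    return np.concatenate(predictions)
```

As the caveat in the text notes, errors compound with each recursive application, so accuracy degrades the farther this loop reaches into the future.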
diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb
index 805940048..4bd133bc3 100644
--- a/how-to-use-azureml/automated-machine-learning/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb
+++ b/how-to-use-azureml/automated-machine-learning/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb
@@ -299,7 +299,7 @@
"\n",
"train, valid = split_full_for_forecasting(df, time_column_name)\n",
"\n",
- "# Reset index to create a Tabualr Dataset.\n",
+ "# Reset index to create a Tabular Dataset.\n",
"train.reset_index(inplace=True)\n",
"valid.reset_index(inplace=True)\n",
"test_df.reset_index(inplace=True)\n",
diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb
index 41732c89a..7835b97f8 100644
--- a/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb
+++ b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb
@@ -30,7 +30,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "For this notebook we are using a synthetic dataset portraying sales data to predict the the quantity of a vartiety of product skus across several states, stores, and product categories.\n",
+ "For this notebook we are using a synthetic dataset portraying sales data to predict the the quantity of a variety of product skus across several states, stores, and product categories.\n",
"\n",
"**NOTE: There are limits on how many runs we can do in parallel per workspace, and we currently recommend to set the parallelism to maximum of 320 runs per experiment per workspace. If users want to have more parallelism and increase this limit they might encounter Too Many Requests errors (HTTP 429).**"
]
@@ -251,7 +251,7 @@
"source": [
"### Set up training parameters\n",
"\n",
- "This dictionary defines the AutoML and hierarchy settings. For this forecasting task we need to define several settings inncluding the name of the time column, the maximum forecast horizon, the hierarchy definition, and the level of the hierarchy at which to train.\n",
+ "This dictionary defines the AutoML and hierarchy settings. For this forecasting task we need to define several settings including the name of the time column, the maximum forecast horizon, the hierarchy definition, and the level of the hierarchy at which to train.\n",
"\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
@@ -449,19 +449,19 @@
"import os\n",
"\n",
"if model_explainability:\n",
- " explanations_dirrectory = os.listdir(\n",
+ " explanations_directory = os.listdir(\n",
" os.path.join(\"training_explanations\", \"azureml\")\n",
" )\n",
- " if len(explanations_dirrectory) > 1:\n",
+ " if len(explanations_directory) > 1:\n",
" print(\n",
" \"Warning! The directory contains multiple explanations, only the first one will be displayed.\"\n",
" )\n",
- " print(\"The explanations are located at {}.\".format(explanations_dirrectory[0]))\n",
+ " print(\"The explanations are located at {}.\".format(explanations_directory[0]))\n",
" # Now we will list all the explanations.\n",
" explanation_path = os.path.join(\n",
" \"training_explanations\",\n",
" \"azureml\",\n",
- " explanations_dirrectory[0],\n",
+ " explanations_directory[0],\n",
" \"training_explanations\",\n",
" )\n",
" print(\"Available explanations\")\n",
@@ -518,7 +518,7 @@
"* **node_count:** The number of compute nodes to be used for running the user script. We recommend to start with the number of cores per node (varies by compute sku).\n",
"* **process_count_per_node:** The number of processes per node.\n",
"* **train_run_id:** \\[Optional] The run id of the hierarchy training, by default it is the latest successful training hts run in the experiment.\n",
- "* **train_experiment_name:** \\[Optional] The train experiment that contains the train pipeline. This one is only needed when the train pipeline is not in the same experiement as the inference pipeline.\n",
+ "* **train_experiment_name:** \\[Optional] The train experiment that contains the train pipeline. This one is only needed when the train pipeline is not in the same experiment as the inference pipeline.\n",
"* **process_count_per_node:** \\[Optional] The number of processes per node, by default it's 4."
]
},
@@ -589,7 +589,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Resbumit the Pipeline\n",
+ "## Resubmit the Pipeline\n",
"\n",
"The inference pipeline can be submitted with different configurations."
]
diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-many-models/auto-ml-forecasting-many-models.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/auto-ml-forecasting-many-models.ipynb
index 7ac404ae9..089e9847c 100644
--- a/how-to-use-azureml/automated-machine-learning/forecasting-many-models/auto-ml-forecasting-many-models.ipynb
+++ b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/auto-ml-forecasting-many-models.ipynb
@@ -30,7 +30,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "For this notebook we are using a synthetic dataset portraying sales data to predict the the quantity of a vartiety of product skus across several states, stores, and product categories.\n",
+ "For this notebook we are using a synthetic dataset portraying sales data to predict the the quantity of a variety of product skus across several states, stores, and product categories.\n",
"\n",
"**NOTE: There are limits on how many runs we can do in parallel per workspace, and we currently recommend to set the parallelism to maximum of 320 runs per experiment per workspace. If users want to have more parallelism and increase this limit they might encounter Too Many Requests errors (HTTP 429).**"
]
@@ -216,7 +216,7 @@
"source": [
"#### 2.3 Using tabular datasets \n",
"\n",
- "Now that the datastore is available from the Workspace, [TabularDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabular_dataset.tabulardataset?view=azure-ml-py) can be created. Datasets in Azure Machine Learning are references to specific data in a Datastore. We are using TabularDataset, so that users who have their data which can be in one or many files (*.parquet or *.csv) and have not split up data according to group columns needed for training, can do so using out of box support for 'partiion_by' feature of TabularDataset shown in section 5.0 below."
+ "Now that the datastore is available from the Workspace, [TabularDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabular_dataset.tabulardataset?view=azure-ml-py) can be created. Datasets in Azure Machine Learning are references to specific data in a Datastore. We are using TabularDataset, so that users who have their data which can be in one or many files (*.parquet or *.csv) and have not split up data according to group columns needed for training, can do so using out of box support for 'partition_by' feature of TabularDataset shown in section 5.0 below."
]
},
{
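
The `partition_by` support referenced in the hunk above can be sketched as follows; the partition columns and target path are assumptions:

```python
# Sketch: write the dataset back to a datastore partitioned by the group
# columns used for many-models training (column names assumed).
partitioned_dataset = tabular_dataset.partition_by(
    partition_keys=["State", "Store"],
    target=(dstore, "sales_partitioned"),
)
```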
@@ -309,7 +309,7 @@
"source": [
"### Set up training parameters\n",
"\n",
- "This dictionary defines the AutoML and many models settings. For this forecasting task we need to define several settings inncluding the name of the time column, the maximum forecast horizon, and the partition column name definition.\n",
+ "This dictionary defines the AutoML and many models settings. For this forecasting task we need to define several settings including the name of the time column, the maximum forecast horizon, and the partition column name definition.\n",
"\n",
"| Property | Description|\n",
"| :--------------- | :------------------- |\n",
@@ -361,7 +361,7 @@
" \"track_child_runs\": False,\n",
"}\n",
"\n",
- "mm_paramters = ManyModelsTrainParameters(\n",
+ "mm_parameters = ManyModelsTrainParameters(\n",
" automl_settings=automl_settings, partition_column_names=partition_column_names\n",
")"
]
@@ -406,7 +406,7 @@
" node_count=2,\n",
" process_count_per_node=8,\n",
" run_invocation_timeout=920,\n",
- " train_pipeline_parameters=mm_paramters,\n",
+ " train_pipeline_parameters=mm_parameters,\n",
")"
]
},
@@ -559,7 +559,7 @@
"| **node_count** | The number of compute nodes to be used for running the user script. We recommend to start with the number of cores per node (varies by compute sku). |\n",
"| **process_count_per_node** The number of processes per node.\n",
"| **train_run_id** | \\[Optional] The run id of the hierarchy training, by default it is the latest successful training many model run in the experiment. |\n",
- "| **train_experiment_name** | \\[Optional] The train experiment that contains the train pipeline. This one is only needed when the train pipeline is not in the same experiement as the inference pipeline. |\n",
+ "| **train_experiment_name** | \\[Optional] The train experiment that contains the train pipeline. This one is only needed when the train pipeline is not in the same experiment as the inference pipeline. |\n",
"| **process_count_per_node** | \\[Optional] The number of processes per node, by default it's 4. |"
]
},
diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb
index f6dc3990a..a89212ebc 100644
--- a/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb
+++ b/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb
@@ -550,7 +550,7 @@
"metadata": {},
"source": [
"### Retrieving forecasts from the model\n",
- "We have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and expecuted on the remote compute."
+ "We have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote computer."
]
},
{
diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/run_forecast.py b/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/run_forecast.py
index cb1d9d886..c141b105f 100644
--- a/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/run_forecast.py
+++ b/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/run_forecast.py
@@ -11,7 +11,7 @@ def run_remote_inference(
target_column_name,
inference_folder="./forecast",
):
- # Create local directory to copy the model.pkl and forecsting_script.py files into.
+ # Create local directory to copy the model.pkl and forecasting_script.py files into.
# These files will be uploaded to and executed on the compute instance.
os.makedirs(inference_folder, exist_ok=True)
shutil.copy("forecasting_script.py", inference_folder)
diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-pipelines/auto-ml-forecasting-pipelines.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-pipelines/auto-ml-forecasting-pipelines.ipynb
index f3e27d9df..bb7c63c8c 100644
--- a/how-to-use-azureml/automated-machine-learning/forecasting-pipelines/auto-ml-forecasting-pipelines.ipynb
+++ b/how-to-use-azureml/automated-machine-learning/forecasting-pipelines/auto-ml-forecasting-pipelines.ipynb
@@ -13,11 +13,11 @@
"source": [
"## Introduction\n",
"\n",
- "In this notebook, we demonstrate how to use piplines to train and inference on AutoML Forecasting model. Two pipelines will be created: one for training AutoML model, and the other is for inference on AutoML model. We'll also demonstrate how to schedule the inference pipeline so you can get inference results periodically (with refreshed test dataset). Make sure you have executed the configuration notebook before running this notebook. In this notebook you will learn how to:\n",
+ "In this notebook, we demonstrate how to use pipelines to train and inference on AutoML Forecasting model. Two pipelines will be created: one for training AutoML model, and the other is for inference on AutoML model. We'll also demonstrate how to schedule the inference pipeline so you can get inference results periodically (with refreshed test dataset). Make sure you have executed the configuration notebook before running this notebook. In this notebook you will learn how to:\n",
"\n",
"- Configure AutoML using AutoMLConfig for forecasting tasks using pipeline AutoMLSteps.\n",
"- Create and register an AutoML model using AzureML pipeline.\n",
- "- Inference and schdelue the pipeline using registered model."
+ "- Inference and schedule the pipeline using registered model."
]
},
{
@@ -95,7 +95,7 @@
"outputs": [],
"source": [
"ws = Workspace.from_config()\n",
- "dstor = ws.get_default_datastore()\n",
+ "dstore = ws.get_default_datastore()\n",
"\n",
"# Choose a name for the run history container in the workspace.\n",
"experiment_name = \"forecasting-pipeline\"\n",
@@ -586,7 +586,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "[Optional] The enviroment can also be assessed from the training run using `get_environment()` API."
+ "[Optional] The environment can also be assessed from the training run using `get_environment()` API."
]
},
{
@@ -639,7 +639,7 @@
" arguments=[\n",
" \"--model_name\",\n",
" model_name_str,\n",
- " \"--ouput_dataset_name\",\n",
+ " \"--output_dataset_name\",\n",
" output_ds_name,\n",
" \"--test_dataset_name\",\n",
" test_dataset.name,\n",
diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-pipelines/scripts/infer.py b/how-to-use-azureml/automated-machine-learning/forecasting-pipelines/scripts/infer.py
index 16b22f664..b549d914a 100644
--- a/how-to-use-azureml/automated-machine-learning/forecasting-pipelines/scripts/infer.py
+++ b/how-to-use-azureml/automated-machine-learning/forecasting-pipelines/scripts/infer.py
@@ -78,9 +78,9 @@ def get_args():
)
parser.add_argument(
- "--ouput_dataset_name",
+ "--output_dataset_name",
type=str,
- dest="ouput_dataset_name",
+ dest="output_dataset_name",
default="results",
help="Dataset name of the final output",
)
@@ -143,14 +143,14 @@ def get_model_filename(run, model_name, model_path):
args = get_args()
model_name = args.model_name
- ouput_dataset_name = args.ouput_dataset_name
+ output_dataset_name = args.output_dataset_name
test_dataset_name = args.test_dataset_name
target_column_name = args.target_column_name
print("args passed are: ")
print(model_name)
print(test_dataset_name)
- print(ouput_dataset_name)
+ print(output_dataset_name)
print(target_column_name)
model_path = Model.get_model_path(model_name)
@@ -166,5 +166,5 @@ def get_model_filename(run, model_name, model_path):
)
infer_forecasting_dataset_tcn(
- X_test_df, y_test, fitted_model, args.output_path, ouput_dataset_name
+ X_test_df, y_test, fitted_model, args.output_path, output_dataset_name
)
diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb
index 2e773fbdf..9da5478ad 100644
--- a/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb
+++ b/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb
@@ -20,7 +20,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "In this notebook we will explore the univaraite time-series data to determine the settings for an automated ML experiment. We will follow the thought process depicted in the following diagram:
\n",
+ "In this notebook we will explore the univariate time-series data to determine the settings for an automated ML experiment. We will follow the thought process depicted in the following diagram:
\n",
"\n",
"\n",
"The objective is to answer the following questions:\n",
@@ -32,11 +32,11 @@
" \n",
"
Is the data stationary? \n",
" \n",
- " - Importance: In the absense of features that capture trend behavior, ML models (regression and tree based) are not well equiped to predict stochastic trends. Working with stationary data solves this problem.
\n",
+ " - Importance: In the absense of features that capture trend behavior, ML models (regression and tree based) are not well equipped to predict stochastic trends. Working with stationary data solves this problem.
\n",
"
\n",
" Is there a detectable auto-regressive pattern in the stationary data? \n",
" \n",
- " - Importance: The accuracy of ML models can be improved if serial correlation is modeled by including lags of the dependent/target varaible as features. Including target lags in every experiment by default will result in a regression in accuracy scores if such setting is not warranted.
\n",
+ " - Importance: The accuracy of ML models can be improved if serial correlation is modeled by including lags of the dependent/target variable as features. Including target lags in every experiment by default will result in a regression in accuracy scores if such setting is not warranted.
\n",
"
\n",
"\n",
"\n",
@@ -109,7 +109,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "The graph plots the alcohol sales in the United States. Because the data is trending, it can be difficult to see cycles, seasonality or other interestng behaviors due to the scaling issues. For example, if there is a seasonal pattern, which we will discuss later, we cannot see them on the trending data. In such case, it is worth plotting the same data in first differences."
+ "The graph plots the alcohol sales in the United States. Because the data is trending, it can be difficult to see cycles, seasonality or other interesting behaviors due to the scaling issues. For example, if there is a seasonal pattern, which we will discuss later, we cannot see them on the trending data. In such case, it is worth plotting the same data in first differences."
]
},
{
@@ -336,7 +336,7 @@
"metadata": {},
"source": [
"# 3 Check if there is a clear autoregressive pattern\n",
- "We need to determine if we should include lags of the target variable as features in order to improve forecast accuracy. To do this, we will examine the ACF and partial ACF (PACF) plots of the stationary series. In our case, it is a series in first diffrences.\n",
+ "We need to determine if we should include lags of the target variable as features in order to improve forecast accuracy. To do this, we will examine the ACF and partial ACF (PACF) plots of the stationary series. In our case, it is a series in first differences.\n",
"\n",
"\n",
" - Question: What is an Auto-regressive pattern? What are we looking for?
\n",
@@ -418,11 +418,11 @@
" \n",
" where $\\sigma_{xzy}$ is the covariance between two random variables $X$ and $Z$; $\\sigma_x$ and $\\sigma_z$ is the variance for $X$ and $Z$, respectively. The correlation coefficient measures the strength of linear relationship between two random variables. This metric can take any value from -1 to 1. \n",
"
\n",
- " - The auto-correlation coefficient $\\rho_{Y_{t} Y_{t-k}}$ is the time series equivalent of the correlation coefficient, except instead of measuring linear association between two random variables $X$ and $Z$, it measures the strength of a linear relationship between a random variable $Y_t$ and its lag $Y_{t-k}$ for any positive interger value of $k$.
\n",
+ " - The auto-correlation coefficient $\\rho_{Y_{t} Y_{t-k}}$ is the time series equivalent of the correlation coefficient, except instead of measuring linear association between two random variables $X$ and $Z$, it measures the strength of a linear relationship between a random variable $Y_t$ and its lag $Y_{t-k}$ for any positive integer value of $k$.
\n",
"
\n",
- " - To visualize the ACF for a particular lag, say lag 2, plot the second lag of a series $y_{t-2}$ on the x-axis, and plot the series itself $y_t$ on the y-axis. The autocorrelation coefficient is the slope of the best fitted regression line and can be interpreted as follows. A one unit increase in the lag of a variable one period ago leads to a $\\rho_{Y_{t} Y_{t-2}}$ units change in the variable in the current period. This interpreation can be applied to any lag.
\n",
+ " - To visualize the ACF for a particular lag, say lag 2, plot the second lag of a series $y_{t-2}$ on the x-axis, and plot the series itself $y_t$ on the y-axis. The autocorrelation coefficient is the slope of the best fitted regression line and can be interpreted as follows. A one unit increase in the lag of a variable one period ago leads to a $\\rho_{Y_{t} Y_{t-2}}$ units change in the variable in the current period. This interpretation can be applied to any lag.
\n",
"
\n",
- " - In the interpretation posted above we need to be careful not to confuse the word \"leads\" with \"causes\" since these are not the same thing. We do not know the lagged value of the varaible causes it to change. Afterall, there are probably many other features that may explain the movement in $Y_t$. All we are trying to do in this section is to identify situations when the variable contains the strong auto-regressive components that needs to be included in the model to improve forecast accuracy.
\n",
+ " - In the interpretation posted above we need to be careful not to confuse the word \"leads\" with \"causes\" since these are not the same thing. We do not know the lagged value of the variable causes it to change. Afterall, there are probably many other features that may explain the movement in $Y_t$. All we are trying to do in this section is to identify situations when the variable contains the strong auto-regressive components that needs to be included in the model to improve forecast accuracy.
\n",
"
\n",
""
]
@@ -434,7 +434,7 @@
"\n",
" - Question: What is the PACF?
\n",
" \n",
- " - When describing the ACF we essentially running a regression between a partigular lag of a series, say, lag 4, and the series itself. What this implies is the regression coefficient for lag 4 captures the impact of everything that happens in lags 1, 2 and 3. In other words, if lag 1 is the most important lag and we exclude it from the regression, naturally, the regression model will assign the importance of the 1st lag to the 4th one. Partial auto-correlation function fixes this problem since it measures the contribution of each lag accounting for the information added by the intermediary lags. If we were to illustrate ACF and PACF for the fourth lag using the regression analogy, the difference is a follows: \n",
+ "
- When describing the ACF we essentially running a regression between a particular lag of a series, say, lag 4, and the series itself. What this implies is the regression coefficient for lag 4 captures the impact of everything that happens in lags 1, 2 and 3. In other words, if lag 1 is the most important lag and we exclude it from the regression, naturally, the regression model will assign the importance of the 1st lag to the 4th one. Partial auto-correlation function fixes this problem since it measures the contribution of each lag accounting for the information added by the intermediary lags. If we were to illustrate ACF and PACF for the fourth lag using the regression analogy, the difference is a follows: \n",
" \\begin{align}\n",
" Y_{t} &= a_{0} + a_{4} Y_{t-4} + e_{t} \\\\\n",
" Y_{t} &= b_{0} + b_{1} Y_{t-1} + b_{2} Y_{t-2} + b_{3} Y_{t-3} + b_{4} Y_{t-4} + \\varepsilon_{t} \\\\\n",
@@ -442,7 +442,7 @@
"
\n",
"
\n",
" - \n",
- " Here, you can think of $a_4$ and $b_{4}$ as the auto- and partial auto-correlation coefficients for lag 4. Notice, in the second equation we explicitely accounting for the intermediate lags by adding them as regrerssors.\n",
+ " Here, you can think of $a_4$ and $b_{4}$ as the auto- and partial auto-correlation coefficients for lag 4. Notice, in the second equation we explicitly accounting for the intermediate lags by adding them as regrerssors.\n",
"
\n",
"
\n",
"
"
@@ -457,7 +457,7 @@
" \n",
" - We are looking for a classical profiles for an AR(p) process such as an exponential decay of an ACF and a the first $p$ significant lags of the PACF. Let's examine the ACF/PACF profiles of the same simulated AR(2) shown in Section 3, and check if the ACF/PACF explanation are refelcted in these plots.
\n",
"
\n",
- " - The autocorrelation coefficient for the 3rd lag is 0.6, which can be interpreted that a one unit increase in the value of the target varaible three periods ago leads to 0.6 units increase in the current period. However, the PACF plot shows that the partial autocorrealtion coefficient is zero (from a statistical point of view since it lies within the shaded region). This is happening because the 1st and 2nd lags are good predictors of the target variable. Ommiting these two lags from the regression results in the misleading conclusion that the third lag is a good prediciton.
\n",
+ " - The autocorrelation coefficient for the 3rd lag is 0.6, which can be interpreted that a one unit increase in the value of the target variable three periods ago leads to 0.6 units increase in the current period. However, the PACF plot shows that the partial autocorrealtion coefficient is zero (from a statistical point of view since it lies within the shaded region). This is happening because the 1st and 2nd lags are good predictors of the target variable. Ommiting these two lags from the regression results in the misleading conclusion that the third lag is a good prediciton.
\n",
"
\n",
" - This is why it is important to examine both the ACF and the PACF plots when tring to determine the auto regressive order for the variable in question.
\n",
"
\n",
diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-run-experiment.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-run-experiment.ipynb
index ba2c57855..80bdce4d7 100644
--- a/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-run-experiment.ipynb
+++ b/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-run-experiment.ipynb
@@ -425,8 +425,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Retreiving forecasts from the model\n",
- "We have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and expecuted on the remote compute."
+ "## Retrieving forecasts from the model\n",
+ "We have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote computer."
]
},
{
@@ -453,7 +453,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "### Download the prediction result for metrics calcuation\n",
+ "### Download the prediction result for metrics calculation\n",
"The test data with predictions are saved in artifact `outputs/predictions.csv`. We will use it to calculate accuracy metrics and vizualize predictions versus actuals."
]
},
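
Fetching the `outputs/predictions.csv` artifact mentioned above can be sketched as follows; the run handle name is assumed:

```python
# Minimal sketch: pull the predictions artifact locally for metric computation.
import pandas as pd

remote_run.download_file(
    name="outputs/predictions.csv", output_file_path="predictions.csv"
)
fcst_df = pd.read_csv("predictions.csv")
```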
@@ -524,7 +524,7 @@
"metrics_df = compute_metrics(fcst_df=fcst_df, metric_name=None, ts_id_colnames=None)\n",
"# save output\n",
"metrics_file_name = \"{}_metrics.csv\".format(experiment_name)\n",
- "fcst_file_name = \"{}_forecst.csv\".format(experiment_name)\n",
+ "fcst_file_name = \"{}_forecast.csv\".format(experiment_name)\n",
"plot_file_name = \"{}_plot.pdf\".format(experiment_name)\n",
"\n",
"metrics_df.to_csv(os.path.join(output_dir, metrics_file_name), index=True)\n",
diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/run_forecast.py b/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/run_forecast.py
index cb1d9d886..c141b105f 100644
--- a/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/run_forecast.py
+++ b/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/run_forecast.py
@@ -11,7 +11,7 @@ def run_remote_inference(
target_column_name,
inference_folder="./forecast",
):
- # Create local directory to copy the model.pkl and forecsting_script.py files into.
+ # Create local directory to copy the model.pkl and forecasting_script.py files into.
# These files will be uploaded to and executed on the compute instance.
os.makedirs(inference_folder, exist_ok=True)
shutil.copy("forecasting_script.py", inference_folder)
diff --git a/how-to-use-azureml/automated-machine-learning/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb b/how-to-use-azureml/automated-machine-learning/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb
index ccf908a3e..ce2677ad5 100644
--- a/how-to-use-azureml/automated-machine-learning/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb
+++ b/how-to-use-azureml/automated-machine-learning/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb
@@ -774,7 +774,7 @@
"metadata": {},
"source": [
"#### Consume the web service using run method to do the scoring and explanation of scoring.\n",
- "We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated."
+ "We test the web service by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated."
]
},
{
diff --git a/how-to-use-azureml/automated-machine-learning/regression-explanation-featurization/auto-ml-regression-explanation-featurization.ipynb b/how-to-use-azureml/automated-machine-learning/regression-explanation-featurization/auto-ml-regression-explanation-featurization.ipynb
index eb343b5ca..4bd9cc754 100644
--- a/how-to-use-azureml/automated-machine-learning/regression-explanation-featurization/auto-ml-regression-explanation-featurization.ipynb
+++ b/how-to-use-azureml/automated-machine-learning/regression-explanation-featurization/auto-ml-regression-explanation-featurization.ipynb
@@ -194,7 +194,7 @@
"|**task**|classification, regression or forecasting|\n",
"|**primary_metric**|This is the metric that you want to optimize. Regression supports the following primary metrics:
spearman_correlation
normalized_root_mean_squared_error
r2_score
normalized_mean_absolute_error|\n",
"|**experiment_timeout_hours**| Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|\n",
- "|**enable_early_stopping**| Flag to enble early termination if the score is not improving in the short term.|\n",
+ "|**enable_early_stopping**| Flag to enable early termination if the score is not improving in the short term.|\n",
"|**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*. Note: If the input data is sparse, featurization cannot be turned on.|\n",
"|**n_cross_validations**|Number of cross validation splits.|\n",
"|**training_data**|(sparse) array-like, shape = [n_samples, n_features]|\n",
@@ -749,7 +749,7 @@
"metadata": {},
"source": [
"### Inference using some test data\n",
- "Inference using some test data to see the predicted value from autml model, view the engineered feature importance for the predicted value and raw feature importance for the predicted value."
+ "Inference using some test data to see the predicted value from automl model, view the engineered feature importance for the predicted value and raw feature importance for the predicted value."
]
},
{
diff --git a/how-to-use-azureml/automated-machine-learning/regression/auto-ml-regression.ipynb b/how-to-use-azureml/automated-machine-learning/regression/auto-ml-regression.ipynb
index e168f1248..68161c3c2 100644
--- a/how-to-use-azureml/automated-machine-learning/regression/auto-ml-regression.ipynb
+++ b/how-to-use-azureml/automated-machine-learning/regression/auto-ml-regression.ipynb
@@ -191,7 +191,7 @@
" \"n_cross_validations\": 3,\n",
" \"primary_metric\": \"r2_score\",\n",
" \"enable_early_stopping\": True,\n",
- " \"experiment_timeout_hours\": 0.3, # for real scenarios we reccommend a timeout of at least one hour\n",
+ " \"experiment_timeout_hours\": 0.3, # for real scenarios we recommend a timeout of at least one hour\n",
" \"max_concurrent_iterations\": 4,\n",
" \"max_cores_per_iteration\": -1,\n",
" \"verbosity\": logging.INFO,\n",
diff --git a/how-to-use-azureml/azure-databricks/automl/automl-databricks-local-01.ipynb b/how-to-use-azureml/azure-databricks/automl/automl-databricks-local-01.ipynb
index 10ccdecbb..9f1fbb462 100644
--- a/how-to-use-azureml/azure-databricks/automl/automl-databricks-local-01.ipynb
+++ b/how-to-use-azureml/azure-databricks/automl/automl-databricks-local-01.ipynb
@@ -47,7 +47,7 @@
"metadata": {},
"source": [
"## Register Machine Learning Services Resource Provider\n",
- "Microsoft.MachineLearningServices only needs to be registed once in the subscription. To register it:\n",
+ "Microsoft.MachineLearningServices only needs to be registered once in the subscription. To register it:\n",
"Start the Azure portal.\n",
"Select your All services and then Subscription.\n",
"Select the subscription that you want to use.\n",
diff --git a/how-to-use-azureml/azure-databricks/automl/automl-databricks-local-with-deployment.ipynb b/how-to-use-azureml/azure-databricks/automl/automl-databricks-local-with-deployment.ipynb
index 5865afc15..cf99b04e5 100644
--- a/how-to-use-azureml/azure-databricks/automl/automl-databricks-local-with-deployment.ipynb
+++ b/how-to-use-azureml/azure-databricks/automl/automl-databricks-local-with-deployment.ipynb
@@ -49,7 +49,7 @@
"metadata": {},
"source": [
"## Register Machine Learning Services Resource Provider\n",
- "Microsoft.MachineLearningServices only needs to be registed once in the subscription. To register it:\n",
+ "Microsoft.MachineLearningServices only needs to be registered once in the subscription. To register it:\n",
"Start the Azure portal.\n",
"Select your All services and then Subscription.\n",
"Select the subscription that you want to use.\n",
diff --git a/how-to-use-azureml/azure-synapse/spark_session_on_synapse_spark_pool.ipynb b/how-to-use-azureml/azure-synapse/spark_session_on_synapse_spark_pool.ipynb
index 1eebb8a45..ff09781d1 100644
--- a/how-to-use-azureml/azure-synapse/spark_session_on_synapse_spark_pool.ipynb
+++ b/how-to-use-azureml/azure-synapse/spark_session_on_synapse_spark_pool.ipynb
@@ -109,7 +109,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# use Synapse compute linked to the Compute Instance's workspace with an aml envrionment.\n",
+ "# use Synapse compute linked to the Compute Instance's workspace with an aml environment.\n",
"# conda dependencies specified in the environment will be installed before the spark session started.\n",
"\n",
"%synapse start -c $synapse_compute_name -e AzureML-Minimal"
diff --git a/how-to-use-azureml/deployment/deploy-multi-model/multi-model-register-and-deploy.ipynb b/how-to-use-azureml/deployment/deploy-multi-model/multi-model-register-and-deploy.ipynb
index 6a8b15146..7baf17065 100644
--- a/how-to-use-azureml/deployment/deploy-multi-model/multi-model-register-and-deploy.ipynb
+++ b/how-to-use-azureml/deployment/deploy-multi-model/multi-model-register-and-deploy.ipynb
@@ -224,7 +224,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "You can now create and/or use an Environment object when deploying a Webservice. The Environment can have been previously registered with your Workspace, or it will be registered with it as a part of the Webservice deployment. Please note that your environment must include azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service.\n",
+ "You can now create and/or use an Environment object when deploying a Webservice. The Environment can have been previously registered with your Workspace, or it will be registered with it as a part of the Webservice deployment. Please note that your environment must include azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service.\n",
"\n",
"More information can be found in our [using environments notebook](../training/using-environments/using-environments.ipynb)."
]
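As a minimal sketch (the environment name is hypothetical), an Environment that satisfies the azureml-defaults requirement can be built like this:

```python
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies

# azureml-defaults >= 1.0.45 supplies the web-service hosting functionality.
env = Environment(name="deploy-env")
env.python.conda_dependencies = CondaDependencies.create(
    pip_packages=["azureml-defaults>=1.0.45", "scikit-learn"]
)
```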
diff --git a/how-to-use-azureml/deployment/deploy-with-controlled-rollout/deploy-aks-with-controlled-rollout.ipynb b/how-to-use-azureml/deployment/deploy-with-controlled-rollout/deploy-aks-with-controlled-rollout.ipynb
index 5b1e47637..c0851833f 100644
--- a/how-to-use-azureml/deployment/deploy-with-controlled-rollout/deploy-aks-with-controlled-rollout.ipynb
+++ b/how-to-use-azureml/deployment/deploy-with-controlled-rollout/deploy-aks-with-controlled-rollout.ipynb
@@ -21,7 +21,7 @@
"metadata": {},
"source": [
"# Deploy models to Azure Kubernetes Service (AKS) using controlled roll out\n",
- "This notebook will show you how to deploy mulitple AKS webservices with the same scoring endpoint and how to roll out your models in a controlled manner by configuring % of scoring traffic going to each webservice. If you are using a Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) to install the Azure Machine Learning Python SDK and create an Azure ML Workspace."
+ "This notebook will show you how to deploy multiple AKS webservices with the same scoring endpoint and how to roll out your models in a controlled manner by configuring % of scoring traffic going to each webservice. If you are using a Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) to install the Azure Machine Learning Python SDK and create an Azure ML Workspace."
]
},
{
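The controlled roll-out itself hinges on deploying several versions behind one AKS endpoint and splitting scoring traffic between them. A sketch, assuming `ws`, `model`, `inference_config`, and `aks_target` are defined as in the notebook, with hypothetical names and percentages:

```python
from azureml.core.model import Model
from azureml.core.webservice import AksEndpoint

# Route 20% of scoring traffic to this version; the remainder goes to the
# endpoint's other versions.
deploy_config = AksEndpoint.deploy_configuration(
    cpu_cores=0.5,
    memory_gb=1,
    version_name="version-1",
    traffic_percentile=20,
)
endpoint = Model.deploy(
    ws, "my-endpoint", [model], inference_config, deploy_config, aks_target
)
endpoint.wait_for_deployment(show_output=True)
```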
@@ -271,7 +271,7 @@
"metadata": {},
"source": [
"## Test the web service using run method\n",
- "Test the web sevice by passing in data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated."
+ "Test the web service by passing in data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated."
]
},
{
diff --git a/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb b/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb
index 466da2385..301fdc035 100644
--- a/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb
+++ b/how-to-use-azureml/deployment/enable-app-insights-in-production-service/enable-app-insights-in-production-service.ipynb
@@ -161,7 +161,7 @@
"metadata": {},
"source": [
"## 5. *Create myenv.yml file*\n",
- "Please note that you must indicate azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
+ "Please note that you must indicate azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
]
},
{
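A sketch of generating such a file from the SDK rather than writing it by hand:

```python
from azureml.core.conda_dependencies import CondaDependencies

# Pin azureml-defaults so the image can host the model as a web service.
myenv = CondaDependencies.create(pip_packages=["azureml-defaults>=1.0.45"])
with open("myenv.yml", "w") as f:
    f.write(myenv.serialize_to_string())
```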
diff --git a/how-to-use-azureml/deployment/onnx/onnx-convert-aml-deploy-tinyyolo.ipynb b/how-to-use-azureml/deployment/onnx/onnx-convert-aml-deploy-tinyyolo.ipynb
index bcf507019..b4499b895 100644
--- a/how-to-use-azureml/deployment/onnx/onnx-convert-aml-deploy-tinyyolo.ipynb
+++ b/how-to-use-azureml/deployment/onnx/onnx-convert-aml-deploy-tinyyolo.ipynb
@@ -249,7 +249,7 @@
"metadata": {},
"source": [
"### Setting up inference configuration\n",
- "First we create a YAML file that specifies which dependencies we would like to see in our container. Please note that you must include azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
+ "First we create a YAML file that specifies which dependencies we would like to see in our container. Please note that you must include azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
]
},
{
diff --git a/how-to-use-azureml/deployment/onnx/onnx-inference-facial-expression-recognition-deploy.ipynb b/how-to-use-azureml/deployment/onnx/onnx-inference-facial-expression-recognition-deploy.ipynb
index 3b37c55ec..fbe2a8de6 100644
--- a/how-to-use-azureml/deployment/onnx/onnx-inference-facial-expression-recognition-deploy.ipynb
+++ b/how-to-use-azureml/deployment/onnx/onnx-inference-facial-expression-recognition-deploy.ipynb
@@ -44,7 +44,7 @@
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, please follow [Azure ML configuration notebook](../../../configuration.ipynb) to set up your environment.\n",
"\n",
"### 2. Install additional packages needed for this Notebook\n",
- "You need to install the popular plotting library `matplotlib`, the image manipulation library `opencv`, and the `onnx` library in the conda environment where Azure Maching Learning SDK is installed.\n",
+ "You need to install the popular plotting library `matplotlib`, the image manipulation library `opencv`, and the `onnx` library in the conda environment where Azure Machine Learning SDK is installed.\n",
"\n",
"```sh\n",
"(myenv) $ pip install matplotlib onnx opencv-python\n",
@@ -320,7 +320,7 @@
"metadata": {},
"source": [
"### Write Environment File\n",
- "Please note that you must indicate azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
+ "Please note that you must indicate azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
]
},
{
diff --git a/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.ipynb b/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.ipynb
index 7d481129a..b0e6472fc 100644
--- a/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.ipynb
+++ b/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.ipynb
@@ -44,7 +44,7 @@
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, please follow [Azure ML configuration notebook](../../../configuration.ipynb) to set up your environment.\n",
"\n",
"### 2. Install additional packages needed for this tutorial notebook\n",
- "You need to install the popular plotting library `matplotlib`, the image manipulation library `opencv`, and the `onnx` library in the conda environment where Azure Maching Learning SDK is installed. \n",
+ "You need to install the popular plotting library `matplotlib`, the image manipulation library `opencv`, and the `onnx` library in the conda environment where Azure Machine Learning SDK is installed. \n",
"\n",
"```sh\n",
"(myenv) $ pip install matplotlib onnx opencv-python\n",
@@ -306,7 +306,7 @@
"source": [
"### Write Environment File\n",
"\n",
- "This step creates a YAML environment file that specifies which dependencies we would like to see in our Linux Virtual Machine. Please note that you must indicate azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
+ "This step creates a YAML environment file that specifies which dependencies we would like to see in our Linux Virtual Machine. Please note that you must indicate azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
]
},
{
diff --git a/how-to-use-azureml/deployment/onnx/onnx-modelzoo-aml-deploy-resnet50.ipynb b/how-to-use-azureml/deployment/onnx/onnx-modelzoo-aml-deploy-resnet50.ipynb
index fb408032d..8cc65bc2b 100644
--- a/how-to-use-azureml/deployment/onnx/onnx-modelzoo-aml-deploy-resnet50.ipynb
+++ b/how-to-use-azureml/deployment/onnx/onnx-modelzoo-aml-deploy-resnet50.ipynb
@@ -252,7 +252,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Create the inference configuration object. Please note that you must indicate azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
+ "Create the inference configuration object. Please note that you must indicate azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
]
},
{
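In SDK v1 that object is typically one call. A sketch, with `score.py` and `env` assumed from the surrounding cells:

```python
from azureml.core.model import InferenceConfig

# Pairs the scoring script with the environment built earlier.
inference_config = InferenceConfig(entry_script="score.py", environment=env)
```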
diff --git a/how-to-use-azureml/deployment/onnx/onnx-train-pytorch-aml-deploy-mnist.ipynb b/how-to-use-azureml/deployment/onnx/onnx-train-pytorch-aml-deploy-mnist.ipynb
index 92d8ef5ef..97aab2f70 100644
--- a/how-to-use-azureml/deployment/onnx/onnx-train-pytorch-aml-deploy-mnist.ipynb
+++ b/how-to-use-azureml/deployment/onnx/onnx-train-pytorch-aml-deploy-mnist.ipynb
@@ -405,7 +405,7 @@
"metadata": {},
"source": [
"### Create inference configuration\n",
- "First we create a YAML file that specifies which dependencies we would like to see in our container. Please note that you must indicate azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
+ "First we create a YAML file that specifies which dependencies we would like to see in our container. Please note that you must indicate azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
]
},
{
diff --git a/how-to-use-azureml/deployment/production-deploy-to-aks-gpu/production-deploy-to-aks-gpu.ipynb b/how-to-use-azureml/deployment/production-deploy-to-aks-gpu/production-deploy-to-aks-gpu.ipynb
index 838aa9966..f4f6abedb 100644
--- a/how-to-use-azureml/deployment/production-deploy-to-aks-gpu/production-deploy-to-aks-gpu.ipynb
+++ b/how-to-use-azureml/deployment/production-deploy-to-aks-gpu/production-deploy-to-aks-gpu.ipynb
@@ -276,7 +276,7 @@
"metadata": {},
"source": [
"# Test the web service\n",
- "We test the web sevice by passing the test images content."
+ "We test the web service by passing the test images content."
]
},
{
diff --git a/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks-ssl.ipynb b/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks-ssl.ipynb
index baec64843..cda3049e6 100644
--- a/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks-ssl.ipynb
+++ b/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks-ssl.ipynb
@@ -289,7 +289,7 @@
"metadata": {},
"source": [
"# Test the web service using run method\n",
- "We test the web sevice by passing data.\n",
+ "We test the web service by passing data.\n",
"Run() method retrieves API keys behind the scenes to make sure that call is authenticated."
]
},
diff --git a/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb b/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb
index 59270d81a..ae93997a3 100644
--- a/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb
+++ b/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb
@@ -500,7 +500,7 @@
"metadata": {},
"source": [
"# Test the web service using run method\n",
- "We test the web sevice by passing data.\n",
+ "We test the web service by passing data.\n",
"Run() method retrieves API keys behind the scenes to make sure that call is authenticated."
]
},
diff --git a/how-to-use-azureml/explain-model/azure-integration/scoring-time/train-explain-model-locally-and-deploy.ipynb b/how-to-use-azureml/explain-model/azure-integration/scoring-time/train-explain-model-locally-and-deploy.ipynb
index aa4923088..afaa984d5 100644
--- a/how-to-use-azureml/explain-model/azure-integration/scoring-time/train-explain-model-locally-and-deploy.ipynb
+++ b/how-to-use-azureml/explain-model/azure-integration/scoring-time/train-explain-model-locally-and-deploy.ipynb
@@ -314,7 +314,7 @@
"\n",
"Deploy Model and ScoringExplainer.\n",
"\n",
- "Please note that you must indicate azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
+ "Please note that you must indicate azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
]
},
{
diff --git a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-data-transfer.ipynb b/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-data-transfer.ipynb
index 8221f22de..bb1abff30 100644
--- a/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-data-transfer.ipynb
+++ b/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-data-transfer.ipynb
@@ -577,7 +577,7 @@
"framework": [
"Azure ML"
],
- "friendly_name": "Azure Machine Learning Pipeline with DataTranferStep",
+ "friendly_name": "Azure Machine Learning Pipeline with DataTransferStep",
"kernelspec": {
"display_name": "Python 3.6",
"language": "python",
@@ -602,7 +602,7 @@
"tags": [
"None"
],
- "task": "Demonstrates the use of DataTranferStep"
+ "task": "Demonstrates the use of DataTransferStep"
},
"nbformat": 4,
"nbformat_minor": 2
diff --git a/how-to-use-azureml/ml-frameworks/chainer/train-hyperparameter-tune-deploy-with-chainer/train-hyperparameter-tune-deploy-with-chainer.ipynb b/how-to-use-azureml/ml-frameworks/chainer/train-hyperparameter-tune-deploy-with-chainer/train-hyperparameter-tune-deploy-with-chainer.ipynb
index 835d37e41..05ffe9424 100644
--- a/how-to-use-azureml/ml-frameworks/chainer/train-hyperparameter-tune-deploy-with-chainer/train-hyperparameter-tune-deploy-with-chainer.ipynb
+++ b/how-to-use-azureml/ml-frameworks/chainer/train-hyperparameter-tune-deploy-with-chainer/train-hyperparameter-tune-deploy-with-chainer.ipynb
@@ -540,7 +540,7 @@
"metadata": {},
"source": [
"### Create myenv.yml\n",
- "We also need to create an environment file so that Azure Machine Learning can install the necessary packages in the Docker image which are required by your scoring script. In this case, we need to specify conda package `numpy` and pip install `chainer`. Please note that you must indicate azureml-defaults with verion >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
+ "We also need to create an environment file so that Azure Machine Learning can install the necessary packages in the Docker image which are required by your scoring script. In this case, we need to specify conda package `numpy` and pip install `chainer`. Please note that you must indicate azureml-defaults with version >= 1.0.45 as a pip dependency, because it contains the functionality needed to host the model as a web service."
]
},
{
diff --git a/how-to-use-azureml/responsible-ai/visualize-upload-loan-decision/rai-loan-decision.ipynb b/how-to-use-azureml/responsible-ai/visualize-upload-loan-decision/rai-loan-decision.ipynb
index bc4741714..a8703b149 100644
--- a/how-to-use-azureml/responsible-ai/visualize-upload-loan-decision/rai-loan-decision.ipynb
+++ b/how-to-use-azureml/responsible-ai/visualize-upload-loan-decision/rai-loan-decision.ipynb
@@ -350,7 +350,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Looking at the disparity in accuracy, we see that males have an error rate about three times greater than the females. More interesting is the disparity in opportunitiy - males are offered loans at three times the rate of females.\n",
+ "Looking at the disparity in accuracy, we see that males have an error rate about three times greater than the females. More interesting is the disparity in opportunity - males are offered loans at three times the rate of females.\n",
"\n",
"Despite the fact that we removed the feature from the training data, our predictor still discriminates based on sex. This demonstrates that simply ignoring a protected attribute when fitting a predictor rarely eliminates unfairness. There will generally be enough other features correlated with the removed attribute to lead to disparate impact."
]
diff --git a/how-to-use-azureml/track-and-monitor-experiments/logging-api/logging-api.ipynb b/how-to-use-azureml/track-and-monitor-experiments/logging-api/logging-api.ipynb
index 7ab49ed0d..8ea99eab7 100644
--- a/how-to-use-azureml/track-and-monitor-experiments/logging-api/logging-api.ipynb
+++ b/how-to-use-azureml/track-and-monitor-experiments/logging-api/logging-api.ipynb
@@ -203,7 +203,7 @@
"metadata": {},
"source": [
"### Viewing an experiment in the portal\n",
- "You can also view an experiement similarly by typing `experiment`. The portal link will take you to the experiment's Run History page that shows all runs and allows you to analyze trends across multiple runs."
+ "You can also view an experiment similarly by typing `experiment`. The portal link will take you to the experiment's Run History page that shows all runs and allows you to analyze trends across multiple runs."
]
},
{
@@ -227,7 +227,7 @@
"* The Snapshot page contains a snapshot of the directory specified in the ''start_logging'' statement, plus the notebook at the time of the ''start_logging'' call. This snapshot and notebook can be downloaded from the Run Details page to continue or reproduce an experiment.\n",
"\n",
"### Logging string metrics\n",
- "The following cell logs a string metric. A string metric is simply a string value associated with a name. A string metric String metrics are useful for labelling runs and to organize your data. Typically you should log all string parameters as metrics for later analysis - even information such as paths can help to understand how individual experiements perform differently.\n",
+ "The following cell logs a string metric. A string metric is simply a string value associated with a name. A string metric String metrics are useful for labelling runs and to organize your data. Typically you should log all string parameters as metrics for later analysis - even information such as paths can help to understand how individual experiments perform differently.\n",
"\n",
"String metrics can be used in the following ways:\n",
"* Plot in hitograms\n",
diff --git a/how-to-use-azureml/training/train-in-spark/train-in-spark.ipynb b/how-to-use-azureml/training/train-in-spark/train-in-spark.ipynb
index a6c4ae27d..30c408d56 100644
--- a/how-to-use-azureml/training/train-in-spark/train-in-spark.ipynb
+++ b/how-to-use-azureml/training/train-in-spark/train-in-spark.ipynb
@@ -318,7 +318,7 @@
"tags": [
"None"
],
- "task": "Submiting a run on a spark cluster"
+ "task": "Submitting a run on a spark cluster"
},
"nbformat": 4,
"nbformat_minor": 2
diff --git a/how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.ipynb b/how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.ipynb
index 946b2d9c4..8c91f1c4f 100644
--- a/how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.ipynb
+++ b/how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.ipynb
@@ -93,7 +93,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Set up Configuraton and Create Azure ML Workspace\n",
+ "## Set up Configuration and Create Azure ML Workspace\n",
"\n",
"If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb) first if you haven't already to establish your connection to the Azure ML Workspace."
]
@@ -548,7 +548,7 @@
"framework": [
"Azure ML"
],
- "friendly_name": "Filtering data using Tabular Timeseiries Dataset related API",
+ "friendly_name": "Filtering data using Tabular Timeseries Dataset related API",
"index_order": 1,
"kernelspec": {
"display_name": "Python 3.6",
diff --git a/index.md b/index.md
index 9bfec903f..e66eef557 100644
--- a/index.md
+++ b/index.md
@@ -21,7 +21,7 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
| [Register a model and deploy locally](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/deployment/deploy-to-local/register-model-deploy-local.ipynb) | Deployment | None | Local | Local | None | None |
| :star:[Data drift quickdemo](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/work-with-data/datadrift-tutorial/datadrift-tutorial.ipynb) | Filtering | NOAA | Remote | None | Azure ML | Dataset, Timeseries, Drift |
| :star:[Datasets with ML Pipeline](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/work-with-data/datasets-tutorial/pipeline-with-datasets/pipeline-for-image-classification.ipynb) | Train | Fashion MNIST | Remote | None | Azure ML | Dataset, Pipeline, Estimator, ScriptRun |
-| :star:[Filtering data using Tabular Timeseiries Dataset related API](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.ipynb) | Filtering | NOAA | Local | None | Azure ML | Dataset, Tabular Timeseries |
+| :star:[Filtering data using Tabular Timeseries Dataset related API](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.ipynb) | Filtering | NOAA | Local | None | Azure ML | Dataset, Tabular Timeseries |
| :star:[Train with Datasets (Tabular and File)](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/work-with-data/datasets-tutorial/train-with-datasets/train-with-datasets.ipynb) | Train | Iris, Diabetes | Remote | None | Azure ML | Dataset, Estimator, ScriptRun |
| [Forecasting away from training data](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/forecasting-forecast-function/auto-ml-forecasting-function.ipynb) | Forecasting | None | Remote | None | Azure ML AutoML | Forecasting, Confidence Intervals |
| [Automated ML run with basic edition features.](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb) | Classification | Bankmarketing | AML | ACI | None | featurization, explainability, remote_run, AutomatedML |
@@ -29,7 +29,7 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
| [Classification of credit card fraudulent transactions using Automated ML](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/experimental/classification-credit-card-fraud-local-managed/auto-ml-classification-credit-card-fraud-local-managed.ipynb) | Classification | Creditcard | AML Compute | None | None | AutomatedML |
| [Automated ML run with featurization and model explainability.](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/regression-explanation-featurization/auto-ml-regression-explanation-featurization.ipynb) | Regression | MachineData | AML | ACI | None | featurization, explainability, remote_run, AutomatedML |
| [auto-ml-forecasting-backtest-single-model](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/automated-machine-learning/forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb) | | None | Remote | None | Azure ML AutoML | |
-| :star:[Azure Machine Learning Pipeline with DataTranferStep](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-data-transfer.ipynb) | Demonstrates the use of DataTranferStep | Custom | ADF | None | Azure ML | None |
+| :star:[Azure Machine Learning Pipeline with DataTransferStep](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-data-transfer.ipynb) | Demonstrates the use of DataTransferStep | Custom | ADF | None | Azure ML | None |
| [Getting Started with Azure Machine Learning Pipelines](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-getting-started.ipynb) | Getting Started notebook for ANML Pipelines | Custom | AML Compute | None | Azure ML | None |
| [Azure Machine Learning Pipeline with AzureBatchStep](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-how-to-use-azurebatch-to-run-a-windows-executable.ipynb) | Demonstrates the use of AzureBatchStep | Custom | Azure Batch | None | Azure ML | None |
| :star:[How to use ModuleStep with AML Pipelines](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-how-to-use-modulestep.ipynb) | Demonstrates the use of ModuleStep | Custom | AML Compute | None | Azure ML | None |
@@ -72,7 +72,7 @@ Machine Learning notebook samples and encourage efficient retrieval of topics an
| [Training and hyperparameter tuning using the TensorFlow estimator](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/ml-frameworks/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb) | Train a deep neural network | MNIST | AML Compute | Azure Container Instance | TensorFlow | None |
| [Resuming a model](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/ml-frameworks/tensorflow/train-tensorflow-resume-training/train-tensorflow-resume-training.ipynb) | Resume a model in TensorFlow from a previously submitted run | MNIST | AML Compute | None | TensorFlow | None |
| [Using Tensorboard](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/track-and-monitor-experiments/tensorboard/export-run-history-to-tensorboard/export-run-history-to-tensorboard.ipynb) | Export the run history as Tensorboard logs | None | None | None | TensorFlow | None |
-| [Training in Spark](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/training/train-in-spark/train-in-spark.ipynb) | Submiting a run on a spark cluster | None | HDI cluster | None | PySpark | None |
+| [Training in Spark](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/training/train-in-spark/train-in-spark.ipynb) | Submitting a run on a spark cluster | None | HDI cluster | None | PySpark | None |
| [Train on Azure Machine Learning Compute](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/training/train-on-amlcompute/train-on-amlcompute.ipynb) | Submit a run on Azure Machine Learning Compute. | Diabetes | AML Compute | None | None | None |
| [Train on local compute](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/training/train-on-local/train-on-local.ipynb) | Train a model locally | Diabetes | Local | None | None | None |
| [Train in a remote Linux virtual machine](https://github.com/Azure/MachineLearningNotebooks/blob/master//how-to-use-azureml/training/train-on-remote-vm/train-on-remote-vm.ipynb) | Configure and execute a run | Diabetes | Data Science Virtual Machine | None | None | None |
diff --git a/setup-environment/NBSETUP.md b/setup-environment/NBSETUP.md
index b3c683b30..3754f6da4 100644
--- a/setup-environment/NBSETUP.md
+++ b/setup-environment/NBSETUP.md
@@ -20,7 +20,7 @@ We recommend you create a Python virtual environment ([Miniconda](https://conda.
# install just the base SDK
pip install azureml-sdk
-# clone the sample repoistory
+# clone the sample repository
git clone https://github.com/Azure/MachineLearningNotebooks.git
# below steps are optional
@@ -57,10 +57,10 @@ Please make sure you start with the [Configuration](configuration.ipynb) noteboo
You need to have Docker engine installed locally and running. Open a command line window and type the following command.
-__Note:__ We use version `1.0.10` below as an exmaple, but you can replace that with any available version number you like.
+__Note:__ We use version `1.0.10` below as an example, but you can replace that with any available version number you like.
```sh
-# clone the sample repoistory
+# clone the sample repository
git clone https://github.com/Azure/MachineLearningNotebooks.git
# change current directory to the folder
diff --git a/tutorials/compute-instance-quickstarts/quickstart-azureml-in-10mins/quickstart-azureml-in-10mins.ipynb b/tutorials/compute-instance-quickstarts/quickstart-azureml-in-10mins/quickstart-azureml-in-10mins.ipynb
index 8dacd74e3..04329b372 100644
--- a/tutorials/compute-instance-quickstarts/quickstart-azureml-in-10mins/quickstart-azureml-in-10mins.ipynb
+++ b/tutorials/compute-instance-quickstarts/quickstart-azureml-in-10mins/quickstart-azureml-in-10mins.ipynb
@@ -126,7 +126,7 @@
").reshape(-1)\n",
"\n",
"\n",
- "# now let's show some randomly chosen images from the traininng set.\n",
+ "# now let's show some randomly chosen images from the training set.\n",
"count = 0\n",
"sample_size = 30\n",
"plt.figure(figsize=(16, 6))\n",
diff --git a/tutorials/image-classification-mnist-data/img-classification-part1-training.ipynb b/tutorials/image-classification-mnist-data/img-classification-part1-training.ipynb
index 62e0ec1ae..9f59f6b25 100644
--- a/tutorials/image-classification-mnist-data/img-classification-part1-training.ipynb
+++ b/tutorials/image-classification-mnist-data/img-classification-part1-training.ipynb
@@ -104,7 +104,7 @@
"source": [
"### Create experiment\n",
"\n",
- "Create an experiment to track the runs in your workspace. A workspace can have muliple experiments. "
+ "Create an experiment to track the runs in your workspace. A workspace can have multiple experiments. "
]
},
{
@@ -197,7 +197,7 @@
"\n",
"### Download the MNIST dataset\n",
"\n",
- "Use Azure Open Datasets to get the raw MNIST data files. [Azure Open Datasets](https://docs.microsoft.com/azure/open-datasets/overview-what-are-open-datasets) are curated public datasets that you can use to add scenario-specific features to machine learning solutions for more accurate models. Each dataset has a corrseponding class, `MNIST` in this case, to retrieve the data in different ways.\n",
+ "Use Azure Open Datasets to get the raw MNIST data files. [Azure Open Datasets](https://docs.microsoft.com/azure/open-datasets/overview-what-are-open-datasets) are curated public datasets that you can use to add scenario-specific features to machine learning solutions for more accurate models. Each dataset has a corresponding class, `MNIST` in this case, to retrieve the data in different ways.\n",
"\n",
"This code retrieves the data as a `FileDataset` object, which is a subclass of `Dataset`. A `FileDataset` references single or multiple files of any format in your datastores or public urls. The class provides you with the ability to download or mount the files to your compute by creating a reference to the data source location. Additionally, you register the Dataset to your workspace for easy retrieval during training.\n",
"\n",
@@ -252,7 +252,7 @@
"y_test = load_data(glob.glob(os.path.join(data_folder,\"**/t10k-labels-idx1-ubyte.gz\"), recursive=True)[0], True).reshape(-1)\n",
"\n",
"\n",
- "# now let's show some randomly chosen images from the traininng set.\n",
+ "# now let's show some randomly chosen images from the training set.\n",
"count = 0\n",
"sample_size = 30\n",
"plt.figure(figsize = (16, 6))\n",
diff --git a/tutorials/image-classification-mnist-data/img-classification-part2-deploy.ipynb b/tutorials/image-classification-mnist-data/img-classification-part2-deploy.ipynb
index 2eae3a402..52ac902ab 100644
--- a/tutorials/image-classification-mnist-data/img-classification-part2-deploy.ipynb
+++ b/tutorials/image-classification-mnist-data/img-classification-part2-deploy.ipynb
@@ -198,7 +198,7 @@
"1. Create environment object containing dependencies needed by the model using the environment file (`myenv.yml`)\n",
"1. Create inference configuration necessary to deploy the model as a web service using:\n",
" * The scoring file (`score.py`)\n",
- " * envrionment object created in previous step\n",
+ " * environment object created in previous step\n",
"1. Deploy the model to the ACI container.\n",
"1. Get the web service HTTP endpoint."
]
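The numbered steps above condense into a few SDK calls. A sketch with hypothetical names, assuming `ws` and `model` from earlier cells:

```python
from azureml.core import Environment
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

env = Environment.from_conda_specification(name="deploy-env", file_path="myenv.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(ws, "mnist-svc", [model], inference_config, aci_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)  # the web service HTTP endpoint
```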
diff --git a/tutorials/image-classification-mnist-data/img-classification-part3-deploy-encrypted.ipynb b/tutorials/image-classification-mnist-data/img-classification-part3-deploy-encrypted.ipynb
index 18c6c209a..0eeceaefe 100644
--- a/tutorials/image-classification-mnist-data/img-classification-part3-deploy-encrypted.ipynb
+++ b/tutorials/image-classification-mnist-data/img-classification-part3-deploy-encrypted.ipynb
@@ -129,7 +129,7 @@
"source": [
"#### Install Homomorphic Encryption based library for Secure Inferencing\n",
"\n",
- "Our library is based on [Microsoft SEAL](https://github.com/Microsoft/SEAL) and pubished to [PyPi.org](https://pypi.org/project/encrypted-inference) as an easy to use package "
+ "Our library is based on [Microsoft SEAL](https://github.com/Microsoft/SEAL) and published to [PyPi.org](https://pypi.org/project/encrypted-inference) as an easy to use package "
]
},
{
@@ -253,7 +253,7 @@
"1. Create environment object containing dependencies needed by the model using the environment file (`myenv.yml`)\n",
"1. Create inference configuration necessary to deploy the model as a web service using:\n",
" * The scoring file (`score_encrypted.py`)\n",
- " * envrionment object created in previous step\n",
+ " * environment object created in previous step\n",
"1. Deploy the model to the ACI container.\n",
"1. Get the web service HTTP endpoint."
]