# reco
`reco` is an R package which implements many algorithms for **sparse matrix factorization**, with a focus on **recommender systems** applications.
1. Vanilla **Maximum Margin Matrix Factorization** - a classic approach to "rating" prediction. See the `WRMF` class and constructor option `feedback = "explicit"`. The original paper which introduced MMMF can be found [here](http://ttic.uchicago.edu/~nati/Publications/MMMFnips04.pdf).
    * <img src="docs/img/MMMF.png" width="400">
1. **Weighted Regularized Matrix Factorization (WRMF)** from [Collaborative Filtering for Implicit Feedback Datasets](http://yifanhu.net/PUB/cf.pdf). See the `WRMF` class and constructor option `feedback = "implicit"`. We provide two solvers (see the sketch after this list):
    1. Exact, based on Cholesky factorization.
    1. Approximate, based on a fixed number of **Conjugate Gradient** steps.

    See details in [Applications of the Conjugate Gradient Method for Implicit Feedback Collaborative Filtering](https://pdfs.semanticscholar.org/bfdf/7af6cf7fd7bb5e6b6db5bbd91be11597eaf0.pdf) and [Faster Implicit Matrix Factorization](http://www.benfrederickson.com/fast-implicit-matrix-factorization/).
    * <img src="docs/img/WRMF.png" width="400">
1. **Linear-Flow** from [Practical Linear Models for Large-Scale One-Class Collaborative Filtering](http://www.bkveton.com/docs/ijcai2016.pdf). The algorithm looks for a factorized low-rank item-item similarity matrix (in some sense it is similar to [SLIM](http://glaros.dtc.umn.edu/gkhome/node/774)).
    * <img src="docs/img/LinearFlow.png" width="300">
1. **Soft-SVD** via fast Alternating Least Squares as described in [Matrix Completion and Low-Rank SVD via Fast Alternating Least Squares](https://arxiv.org/pdf/1410.2596.pdf).
    * <img src="docs/img/soft-svd.png" width="600">
1. **Soft-Impute** via fast Alternating Least Squares as described in [Matrix Completion and Low-Rank SVD via Fast Alternating Least Squares](https://arxiv.org/pdf/1410.2596.pdf).
    * <img src="docs/img/soft-impute.png" width="400">
    * with a solution in SVD form <img src="docs/img/soft-impute-svd-form.png" width="150">
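
The two WRMF solvers differ only in how each alternating least squares step solves its small symmetric positive-definite system. Below is a minimal base-R sketch of that single step on toy data (it is not the package's internal `RcppArmadillo` code): an exact solve via Cholesky factorization versus a fixed number of Conjugate Gradient iterations.

```r
# Toy illustration of one ALS step: solve (t(Y) %*% Y + lambda*I) w = b,
# where Y holds item factors and b is the right-hand side for one user.
set.seed(42)
rank   = 8
Y      = matrix(rnorm(50 * rank), nrow = 50, ncol = rank)  # 50 items, rank 8
b      = rnorm(rank)
lambda = 0.1
A      = crossprod(Y) + diag(lambda, rank)                 # SPD system matrix

# 1. Exact solver: Cholesky factorization A = R'R
R      = chol(A)
w_chol = backsolve(R, forwardsolve(t(R), b))

# 2. Approximate solver: fixed number of Conjugate Gradient steps
cg_solve = function(A, b, n_steps = 3L) {
  w = numeric(length(b))
  r = b - drop(A %*% w)
  p = r
  for (i in seq_len(n_steps)) {
    Ap    = drop(A %*% p)
    alpha = sum(r * r) / sum(p * Ap)
    w     = w + alpha * p
    r_new = r - alpha * Ap
    beta  = sum(r_new * r_new) / sum(r * r)
    p     = r_new + beta * p
    r     = r_new
  }
  w
}
w_cg = cg_solve(A, b, n_steps = 3L)
# w_cg approximates w_chol; more CG steps bring it closer to the exact solution
```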

Implementation notes:

* Built on top of `RcppArmadillo`.
* Extensively uses **BLAS** and is parallelized with **OpenMP**.
* Implements the **Conjugate Gradient solver** described in the WRMF references above.
* Top-k items inference is `O(n*log(k))` and uses **BLAS** + **OpenMP**.
**Note that the syntax may not be up to date, since the package is under active development.**
1. [Slides from DataFest Tbilisi (2017-11-16)](https://www.slideshare.net/DmitriySelivanov/matrix-factorizations-for-recommender-systems)
1. [Introduction to matrix factorization with Weighted-ALS algorithm](http://dsnotes.com/post/2017-05-28-matrix-factorization-for-recommender-systems/) - collaborative filtering for implicit feedback datasets.
1. [Music recommendations using LastFM-360K dataset](http://dsnotes.com/post/2017-06-28-matrix-factorization-for-recommender-systems-part-2/)
We follow [mlapi](https://github.com/dselivanov/mlapi) conventions.
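
For illustration, here is a minimal hypothetical sketch of the mlapi-style workflow for the `WRMF` class on an implicit-feedback matrix. Only the `feedback` option is documented above; the other constructor arguments (`rank`, `lambda`) and the exact method names are assumptions for the sake of the example and may differ from the actual API, so check the class documentation.

```r
library(Matrix)
library(reco)

# toy sparse user-item matrix with non-negative implicit "confidence" values
set.seed(1)
x   = rsparsematrix(nrow = 1000, ncol = 200, density = 0.01)
x@x = abs(x@x)

# hypothetical constructor arguments; only `feedback` is documented above
model = WRMF$new(rank = 8L, lambda = 1, feedback = "implicit")

# mlapi-style fit_transform(): fits the model and returns user embeddings
user_embeddings = model$fit_transform(x)
```
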
### Notes on multithreading and BLAS
**VERY IMPORTANT**: if you use a multithreaded BLAS (you generally should), such as OpenBLAS, Intel MKL, or Apple Accelerate, I **highly recommend disabling its internal multithreading**. This gives **substantial speedups** for this package (easily 10x and more), because matrix factorization is already parallelized in the package with OpenMP. Disable BLAS threading by setting the corresponding environment variables **before starting `R`**, for example `OPENBLAS_NUM_THREADS=1` for OpenBLAS or `MKL_NUM_THREADS=1` for Intel MKL.
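
If setting environment variables is inconvenient, one alternative (a suggestion, not something this README prescribes) is to limit BLAS threads from within the R session via the `RhpcBLASctl` CRAN package:

```r
# Assumption: RhpcBLASctl is installed (install.packages("RhpcBLASctl")).
# Cap BLAS at a single thread so the package's OpenMP parallelism
# does not oversubscribe the CPU.
library(RhpcBLASctl)
blas_set_num_threads(1)
```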