Classification examples with SLIM¶
[2]:
import skmine
import pandas as pd
import numpy as np
print("This tutorial was tested with the following version of skmine :", skmine.__version__)
This tutorial was tested with the following version of skmine : 1.0.0
MDL-based algorithms encode data according to a given codetable.
When calling .fit, we iteratively look for the codetable that best compresses the training data.
Once the model is trained, we can benefit from the refined codetable to make predictions.
SLIM Classifier for binary and multiclass classification (k>=2)¶
An integrated classifier is available in scikit-mine for solving binary and multiclass problems. It uses the SLIM compression algorithm.
To use it, we need a discretized dataset. Let’s take for example the discretized iris dataset with 3 classes.
[2]:
from skmine.datasets.fimi import fetch_iris
X, y = fetch_iris(return_y=True) # without return_y=True, the method would have returned the whole dataset in one variable
label_names = ['setosa', 'versicolor', 'virginica']
print("-> Data:\n", X)
print("-> Unique label :", np.unique(y))
-> Data:
0 [2, 9, 12, 15]
1 [1, 10, 11, 14]
2 [5, 10, 13, 16]
3 [2, 6, 12, 15]
4 [1, 8, 11, 14]
...
145 [3, 9, 13, 16]
146 [1, 10, 11, 14]
147 [3, 8, 12, 15]
148 [5, 9, 13, 16]
149 [5, 10, 13, 16]
Name: iris.D19.N150.C3.num, Length: 150, dtype: object
-> Unique label : [17 18 19]
Note that in the discretized iris dataset, each feature is discretized with different labels:
[3]:
import numpy as np
X_2d = np.array(X.to_list())
for k in range(X_2d.shape[-1]):
print(f"unique items in colunms {k} : {np.unique(X_2d[:,k])}")
unique items in colunms 0 : [1 2 3 4 5]
unique items in colunms 1 : [ 6 7 8 9 10]
unique items in colunms 2 : [11 12 13]
unique items in colunms 3 : [14 15 16]
The purpose of this dataset is to predict the class (the last column of the original database) from the other 4 discretized features. The possible targets are 17, 18 and 19. We can now prepare our train and test datasets.
[4]:
from sklearn.model_selection import train_test_split
(X_train, X_test, y_train, y_test) = train_test_split(X, y, random_state=1, test_size=0.2, shuffle=True)
print("X_train shape:", X_train.shape, "y_train shape:", y_train.shape)
print("X_test shape:", X_test.shape, "y_test shape:", y_test.shape)
X_train shape: (120,) y_train shape: (120,)
X_test shape: (30,) y_test shape: (30,)
Now we can use our SlimClassifier.
[5]:
from skmine.itemsets.slim_classifier import SlimClassifier
# You can pass the set of all items as a parameter to your classifier.
# This will improve its performance, especially on small datasets like iris.
items = set(item for transaction in X for item in transaction)
print("items", items)
clf = SlimClassifier(items=items) # You can also enable or disable the pruning of SLIM compressors via the `pruning` parameter
clf.fit(X_train, y_train)
items {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}
[5]:
SlimClassifier(items={1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16})
You can use many scikit-learn utilities that are compatible with classifiers: for example, building a confusion matrix, using GridSearchCV or cross-validation.
Confusion matrix
[6]:
from sklearn.metrics import confusion_matrix
y_pred = clf.predict(X_test)
print(f"-> Accuracy : {round(clf.score(X_test, y_test)*100,1)} %")
print("-> Confusion matrix :\n", pd.DataFrame(data=confusion_matrix(y_test, y_pred),columns=label_names, index=label_names))
-> Accuracy : 83.3 %
-> Confusion matrix :
setosa versicolor virginica
setosa 13 1 0
versicolor 0 8 1
virginica 0 3 4
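For a graphical view of the same matrix, scikit-learn’s ConfusionMatrixDisplay can be used directly on these predictions (a small optional sketch, reusing y_test, y_pred and label_names from above):
from sklearn.metrics import ConfusionMatrixDisplay
# plot the confusion matrix with the flower names as tick labels
ConfusionMatrixDisplay.from_predictions(y_test, y_pred, display_labels=label_names)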
GridSearchCV (this method lets us test many parameter combinations for a classifier and retain the best one)
[7]:
from sklearn.model_selection import GridSearchCV
parameters = {'pruning': [False, True], 'items': [None, items]}
grid = GridSearchCV(clf, parameters)
grid.fit(X_train,y_train)
print("-> Best params :", grid.best_params_)
print(f"-> Accuracy : {round(grid.score(X_train, y_train)*100,1)} %")
-> Best params : {'items': {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}, 'pruning': False}
-> Accuracy : 98.3 %
With the best parameters found by GridSearchCV, we reach an accuracy of more than 98% (measured here on the training set), much better than the previous score. In this combination, the item list is passed as a parameter and pruning is disabled. Since pruning does not improve the compression of the codetables produced by SLIM on iris, it does not matter whether it is enabled or not.
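Since GridSearchCV refits the best combination on the whole training set by default, the resulting best estimator can also be evaluated on the held-out test set (a small sketch, reusing grid, X_test and y_test from above):
# accuracy of the best (refitted) estimator on unseen data
print(f"-> Test accuracy : {round(grid.score(X_test, y_test) * 100, 1)} %")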
To reduce overfitting, we can use scikit-learn's cross-validation.
Cross validation
[8]:
from sklearn.model_selection import cross_val_score
cross_validation = cross_val_score(clf, X, y, cv=10)
print(f"-> 10 Cross validation: {cross_validation.round(2)}")
print(f"-> Mean Accuracy : {round(cross_validation.mean()*100,1)} %")
-> 10 Cross validation: [0.93 0.93 0.87 0.93 0.93 0.93 1. 1. 1. 0.93]
-> Mean Accuracy : 94.7 %
After cross-validation, we see that the accuracy is almost 95% on average, i.e. in about 95% of cases the right type of flower is predicted.
SLIM classifier from a numerical dataset¶
Preprocessing¶
Load the iris dataset from scikit-learn, which is not discretized:
[9]:
from sklearn.datasets import load_iris
data = load_iris()
X, y = data.data, data.target
print("-> X.shape : ", X.shape)
print("-> X, 5 rows :\n", X[0:5])
print("-> Unique labels : ", np.unique(y))
-> X.shape : (150, 4)
-> X, 5 rows :
[[5.1 3.5 1.4 0.2]
[4.9 3. 1.4 0.2]
[4.7 3.2 1.3 0.2]
[4.6 3.1 1.5 0.2]
[5. 3.6 1.4 0.2]]
-> Unique labels : [0 1 2]
Classic standardisation
[10]:
from sklearn.preprocessing import StandardScaler
Xst = StandardScaler().fit_transform(X)
print("-> Xst.shape : ", Xst.shape)
print("-> Xst, 5 rows :\n", Xst[0:5].round(3))
-> Xst.shape : (150, 4)
-> Xst, 5 rows :
[[-0.901 1.019 -1.34 -1.315]
[-1.143 -0.132 -1.34 -1.315]
[-1.385 0.328 -1.397 -1.315]
[-1.507 0.098 -1.283 -1.315]
[-1.022 1.249 -1.34 -1.315]]
KBins discretisation
[11]:
from sklearn.preprocessing import KBinsDiscretizer
Xt = KBinsDiscretizer(n_bins=3, encode='ordinal', strategy='uniform').fit_transform(Xst).astype(int)
# n_bins=3 : for each column we want to have 3 categorical values
print("-> Xt.shape : ", Xt.shape)
print("-> Xt, 5 rows :\n", Xt[50:55])
# In the output, each column is discretized into 3 values: 0, 1 and 2.
-> Xt.shape : (150, 4)
-> Xt, 5 rows :
[[2 1 1 1]
[1 1 1 1]
[2 1 1 1]
[1 0 1 1]
[1 0 1 1]]
Note that in this discretization of the iris dataset, each feature is discretized with the same labels, which is not what we want:
[12]:
for k in range(4):
print(f"unique items in colunms {k} : {np.unique(Xt[:,k])}")
unique items in colunms 0 : [0 1 2]
unique items in colunms 1 : [0 1 2]
unique items in colunms 2 : [0 1 2]
unique items in colunms 3 : [0 1 2]
We must shift values in columns in order to avoid identical labels between columns.
[13]:
# shift each column's labels so that they do not overlap with the previous columns
shift_col = np.max(Xt, axis=0)
for k in range(1, len(shift_col)):
    shift_col[k] += shift_col[k-1] + 1
shift_col += -shift_col[0]
for k in range(len(shift_col)):
    Xt[:, k] += shift_col[k]
for k in range(4):
    print(f"unique items in columns {k} : {np.unique(Xt[:,k])}")
Xt = pd.Series(Xt.tolist())  # we must transform the array into a Series of lists
print("-> Xt.shape : ", Xt.shape)
print("-> Xt, 5 rows :\n", Xt[50:55])
unique items in columns 0 : [0 1 2]
unique items in columns 1 : [3 4 5]
unique items in columns 2 : [6 7 8]
unique items in columns 3 : [ 9 10 11]
-> Xt.shape :  (150,)
-> Xt, 5 rows :
50 [2, 4, 7, 10]
51 [1, 4, 7, 10]
52 [2, 4, 7, 10]
53 [1, 3, 7, 10]
54 [1, 3, 7, 10]
dtype: object
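Since every column here shares the same number of bins, the same offsets could also be computed in a single vectorized step (a minimal sketch; Xt_raw is a hypothetical name for the unshifted integer array returned by KBinsDiscretizer above, before the conversion to a Series):
# column k gets the offset k * n_bins, i.e. 0, 3, 6, 9 for n_bins=3
Xt_shifted = Xt_raw + np.arange(Xt_raw.shape[1]) * 3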
In pipelines¶
[14]:
from skmine.itemsets.slim_classifier import SlimClassifier
from sklearn.datasets import load_iris
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, KBinsDiscretizer
class MultiLabelsKbins(KBinsDiscretizer):  # KBinsDiscretizer whose transform shifts labels so they differ between columns
    def transform(self, X):
        Xt = super().transform(X).astype(int)
        shift_col = np.max(Xt, axis=0)
        for k in range(1, len(shift_col)):
            shift_col[k] += shift_col[k-1] + 1
        shift_col += -shift_col[0]
        for k in range(len(shift_col)):
            Xt[:, k] += shift_col[k]
        return pd.Series(Xt.tolist())  # a pandas Series of lists, ready for SLIM
data = load_iris()
X, y = data.data, data.target
(X_train, X_test, y_train, y_test) = train_test_split(X, y, random_state=1, test_size=0.2, shuffle=True)
print("X_train shape:", X_train.shape, "y_train shape:", y_train.shape)
print("X_test shape:", X_test.shape, "y_test shape:", y_test.shape)
X_train shape: (120, 4) y_train shape: (120,)
X_test shape: (30, 4) y_test shape: (30,)
[15]:
preproc = Pipeline([
('StandardScaler', StandardScaler()),
('MultiLabelsKbins', MultiLabelsKbins(n_bins=3, encode='ordinal', strategy='uniform')),
])
preproc.fit(X_train)
[15]:
Pipeline(steps=[('StandardScaler', StandardScaler()), ('MultiLabelsKbins', MultiLabelsKbins(encode='ordinal', n_bins=3, strategy='uniform'))])
[16]:
Xt = preproc.transform(X_train)
print("-> Xt.shape : ", Xt.shape)
print("-> Xt, 10 rows :\n", Xt[50:55])
-> Xt.shape : (120,)
-> Xt, 10 rows :
50 [0, 5, 6, 9]
51 [0, 4, 6, 9]
52 [0, 5, 6, 9]
53 [2, 4, 8, 11]
54 [0, 4, 6, 9]
dtype: object
Now we can add the SlimClassifier to the pipeline.
[17]:
items = set(item for transaction in Xt for item in transaction)  # passed to SLIM to improve results on small datasets
# Without it, the test set may contain only a few items and the codetable would not cover all items (and vice versa), so it would be incomplete, which can affect the quality of the results.
pipe = Pipeline([
('preproc', preproc),
('SlimClassifier', SlimClassifier(items=items))
])
[18]:
pipe.fit(X_train,y_train)
[18]:
Pipeline(steps=[('preproc', Pipeline(steps=[('StandardScaler', StandardScaler()), ('MultiLabelsKbins', MultiLabelsKbins(encode='ordinal', n_bins=3, strategy='uniform'))])), ('SlimClassifier', SlimClassifier(items={0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}))])
[19]:
y_preds = pipe.predict(X_test)
print("-> Predictions : ", y_preds)
print(f"-> Pipe Accuracy : {round(pipe.score(X_test, y_test)*100,1)} %")
-> Predictions : [0 1 1 0 2 1 2 0 0 2 1 0 2 1 1 0 1 1 0 0 1 1 2 0 2 1 0 0 1 2]
-> Pipe Accuracy : 96.7 %
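The whole pipeline behaves like any scikit-learn estimator, so it can also be cross-validated end to end (a sketch, reusing the pipe and the raw X, y loaded above):
from sklearn.model_selection import cross_val_score
# each fold re-runs the standardisation, the discretization and the SLIM classifier
scores = cross_val_score(pipe, X, y, cv=5)
print(f"-> Mean accuracy : {round(scores.mean() * 100, 1)} %")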
OneVsRest classifier for more than 2 classes¶
The SLIM algorithm is also compatible with scikit-learn, so it can be used inside other classifiers such as One-vs-the-rest (OvR) (https://scikit-learn.org/stable/modules/generated/sklearn.multiclass.OneVsRestClassifier.html). The limitation of this approach is that it only works for multiclass classification problems, while the integrated classifier works for both binary and multiclass problems.
[20]:
from skmine.itemsets import SLIM
from sklearn.preprocessing import MultiLabelBinarizer
[21]:
class TransactionEncoder(MultiLabelBinarizer):  # pandas DataFrames are easier to read ;)
    def transform(self, X):
        _X = super().transform(X)
        return pd.DataFrame(data=_X, columns=self.classes_)
[22]:
transactions = [
['bananas', 'milk'],
['milk', 'bananas', 'cookies'],
['cookies', 'butter', 'tea'],
['tea'],
['milk', 'bananas', 'tea'],
]
te = TransactionEncoder()
D = te.fit(transactions).transform(transactions)
D
[22]:
|   | bananas | butter | cookies | milk | tea |
|---|---------|--------|---------|------|-----|
| 0 | 1 | 0 | 0 | 1 | 0 |
| 1 | 1 | 0 | 1 | 1 | 0 |
| 2 | 0 | 1 | 1 | 0 | 1 |
| 3 | 0 | 0 | 0 | 0 | 1 |
| 4 | 1 | 0 | 0 | 1 | 1 |
[23]:
slim = SLIM()
codetable = slim.fit(D).transform(D)
codetable
[23]:
|   | itemset | usage |
|---|---------|-------|
| 0 | [bananas, milk] | 3 |
| 1 | [tea] | 3 |
| 2 | [cookies] | 2 |
| 3 | [butter] | 1 |
We keep this codetable in mind, as we will later use it to interpret our predictions.
First “predictions”¶
We define a new transactional dataset, composed of different itemsets. Let’s denote by \(x\) an itemset such as the first one here: ['bananas', 'milk']. From the codetable fitted on dataset D, we can derive:
- the code length of \(x\), namely \(c_l(x)\), which lies in \([0, +\infty[\). It is obtained with the method .get_code_length
- a score for \(x\): if \(x\) has the shortest code length, it means that \(x\) is close to the fitted dataset. The method .decision_function is implemented to reflect this closeness; the lowest code length gives the highest score. To get probability-like values (in \([0;1]\), as with a sigmoid output), a negative exponential is applied: decision_function\((x) = \exp(-0.2 \times c_l(x))\)
[24]:
new_transactions = [
['bananas', 'milk'],
['milk', 'sirup', 'cookies'],
['butter', 'tea'],
]
new_D = te.transform(new_transactions)
new_D
/home/cregan/miniconda3/envs/test_skmine/lib/python3.8/site-packages/scikit_learn-1.2.2-py3.8-linux-x86_64.egg/sklearn/preprocessing/_label.py:895: UserWarning: unknown class(es) ['sirup'] will be ignored
warnings.warn(
[24]:
|   | bananas | butter | cookies | milk | tea |
|---|---------|--------|---------|------|-----|
| 0 | 1 | 0 | 0 | 1 | 0 |
| 1 | 0 | 0 | 1 | 1 | 0 |
| 2 | 0 | 1 | 0 | 0 | 1 |
[25]:
codes_length = slim.get_code_length(new_D).round(3)
scores = slim.decision_function(new_D).round(3)
pd.DataFrame([new_transactions, codes_length, scores], index=['transaction', 'code length' , 'score']).T
[25]:
|   | transaction | code length | score |
|---|-------------|-------------|-------|
| 0 | [bananas, milk] | 1.907 | 0.683 |
| 1 | [milk, sirup, cookies] | 6.229 | 0.288 |
| 2 | [butter, tea] | 4.814 | 0.382 |
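We can check the relation between code length and score stated above (a small sketch, reusing codes_length from the previous cell):
# score = exp(-0.2 * code length), so both columns above are consistent
print(np.exp(-0.2 * codes_length).round(3))  # approximately [0.683, 0.288, 0.382]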
Built-in interpretations¶
Now we can interpret the codes for the new data directly by looking at the codetable inferred from the training data.
First observations:
- [milk, sirup, cookies] has the longest code length, so the smallest score. It contains milk, sirup and cookies. From the codetable we see that milk and cookies are not grouped together, while sirup has never been seen.
- [bananas, milk] has the shortest code length, so the highest score. It contains bananas and milk, which are grouped together in the codetable and occur frequently in the training data.
Shortest code wins !!¶
Next, we are going to use an ensemble of SLIM encoding schemes via a OneVsRest methodology to perform multi-class classification. The methodology is very simple:
- We clone our base estimator as many times as we need (one per class)
- We fit every estimator on the entries corresponding to its class in the input data
- When calling .predict, we actually call .decision_function and get scores for every class
- The shortest code wins: we choose the class with the lowest code length, hence the highest score, for a given transaction (as sketched just below)
[26]:
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import Pipeline
[27]:
pipe = Pipeline([
('transaction_encoder', TransactionEncoder(sparse_output=False)),
('slim', SLIM()),
])
[28]:
transactions = [
['milk', 'bananas'],
['tea', 'New York Times', 'El Pais'],
['New York Times'],
['El Pais', 'The Economist'],
['milk', 'tea'],
['croissant', 'tea'],
['croissant', 'chocolatine', 'milk'],
]
target = [
'foodstore',
'newspaper',
'newspaper',
'newspaper',
'foodstore',
'bakery',
'bakery',
]
[29]:
te = TransactionEncoder()
D = te.fit(transactions).transform(transactions)
D
[29]:
|   | El Pais | New York Times | The Economist | bananas | chocolatine | croissant | milk | tea |
|---|---------|----------------|---------------|---------|-------------|-----------|------|-----|
| 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 |
| 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
| 2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| 3 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| 4 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
| 5 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
| 6 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 |
[30]:
ovr = OneVsRestClassifier(SLIM())
[31]:
ovr.fit(D, y=target)
ovr.estimators_  # 3 SLIM estimators, one per class
[31]:
[SLIM(), SLIM(), SLIM()]
[32]:
res = pd.DataFrame(ovr.decision_function(D).round(3), columns=ovr.classes_)
res['predictions'] = ovr.predict(D)
res
[32]:
|   | bakery | foodstore | newspaper | predictions |
|---|--------|-----------|-----------|-------------|
| 0 | 0.238 | 0.400 | 0.218 | foodstore |
| 1 | 0.116 | 0.142 | 0.234 | newspaper |
| 2 | 0.488 | 0.488 | 0.641 | newspaper |
| 3 | 0.238 | 0.238 | 0.366 | newspaper |
| 4 | 0.238 | 0.400 | 0.266 | foodstore |
| 5 | 0.596 | 0.291 | 0.266 | bakery |
| 6 | 0.596 | 0.160 | 0.102 | bakery |
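As described above, each prediction is simply the class with the highest score, which can be checked directly on the fitted model (a small sketch, reusing ovr and D from the cells above):
# pick, for each transaction, the class whose SLIM model gives the highest score
manual_preds = ovr.classes_[ovr.decision_function(D).argmax(axis=1)]
print(manual_preds)  # expected to match the 'predictions' column above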
Questions on binary OneVsRest classifier¶
[33]:
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import Pipeline
pipe = Pipeline([
('transaction_encoder', TransactionEncoder(sparse_output=False)),
('slim', SLIM()),
])
[34]:
transactions = [
['milk', 'bananas'],
['tea', 'New York Times', 'El Pais'],
['New York Times'],
['El Pais', 'The Economist'],
['milk', 'tea'],
]
target = [
'foodstore',
'newspaper',
'newspaper',
'newspaper',
'foodstore',
]
[35]:
te = TransactionEncoder()
D = te.fit(transactions).transform(transactions)
D
[35]:
|   | El Pais | New York Times | The Economist | bananas | milk | tea |
|---|---------|----------------|---------------|---------|------|-----|
| 0 | 0 | 0 | 0 | 1 | 1 | 0 |
| 1 | 1 | 1 | 0 | 0 | 0 | 1 |
| 2 | 0 | 1 | 0 | 0 | 0 | 0 |
| 3 | 1 | 0 | 1 | 0 | 0 | 0 |
| 4 | 0 | 0 | 0 | 0 | 1 | 1 |
[36]:
ovr = OneVsRestClassifier(SLIM())
[37]:
ovr.fit(D, y=target)
ovr.estimators_
[37]:
[SLIM()]
[38]:
res = pd.DataFrame(ovr.decision_function(D).round(3),columns=['score'])
res['predictions'] =ovr.predict(D)
res
[38]:
|   | score | predictions |
|---|-------|-------------|
| 0 | 0.238 | foodstore |
| 1 | 0.268 | foodstore |
| 2 | 0.670 | newspaper |
| 3 | 0.400 | foodstore |
| 4 | 0.291 | foodstore |
For binary classification, OneVsRest creates only one model and compares the score of an input \(x\) with a threshold. For instance, an SVM score is a signed distance to a hyperplane: if the score is positive, then \(x\) is predicted in the current class. If scores lie in \([0,1]\), the threshold is set to 0.5 (like the activation threshold of a sigmoid).
For SLIM, we have no such distance to a boundary and no reference for that threshold, so OneVsRest is not suitable for binary classification. To classify, the integrated SlimClassifier creates 2 models (one per label) and simply compares the scores they give to an itemset \(x\).
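For illustration, the thresholding described above can be checked against the previous table (a sketch, reusing the binary ovr and D from the cells above, and assuming 'newspaper' is the positive class chosen by the internal label binarizer):
import numpy as np
# with a single estimator and scores in [0, 1], predictions follow a 0.5 threshold
manual_preds = np.where(ovr.decision_function(D) > 0.5, 'newspaper', 'foodstore')
print(manual_preds)  # expected to match the 'predictions' column above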