machine learning - How to predict a continuous dependent variable that expresses target class probabilities?


My samples can be either class 0 or class 1, but for some of my samples only a probability of belonging to class 1 is available. So far I have thresholded my target variable, i.e. all probabilities above the threshold are assigned to class 1, and I have discarded all samples whose class membership was uncertain. Then I applied a linear SVM to the data using scikit-learn.
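A minimal sketch of the setup described above, assuming hypothetical data where `X` holds the features and `p1` the probability that each sample belongs to class 1 (both names and the cut-offs are assumptions, not from the question):

```python
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical data: 200 samples, 4 features, and a probability of
# belonging to class 1 for each sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
p1 = rng.uniform(size=200)

# Keep only samples whose class membership is near-certain
# (example cut-offs), and threshold them into hard labels.
confident = (p1 <= 0.1) | (p1 >= 0.9)
X_conf = X[confident]
y_conf = (p1[confident] >= 0.5).astype(int)

# A lot of training data is thrown away at this step.
print(f"kept {X_conf.shape[0]} of {X.shape[0]} samples")

clf = LinearSVC().fit(X_conf, y_conf)
```

Note how the `confident` mask is exactly where the data loss happens: everything between the two cut-offs is discarded before the SVM ever sees it.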

Because of this, I throw away quite a lot of training data right from the start. One thought was to drop classification and use regression instead, but in general it is not a good idea to approach a classification problem with regression; for example, the estimated values are not guaranteed to lie in the interval [0, 1].
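The out-of-range concern can be seen with a toy example (the data here is made up purely for illustration): ordinary least squares fit to probabilities happily extrapolates beyond 1.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up targets in [0, 1] that rise roughly linearly with x.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
p = np.array([0.05, 0.30, 0.70, 0.95])

reg = LinearRegression().fit(X, p)

# Predicting outside the training range yields a value above 1,
# which is not a valid probability.
print(reg.predict([[4.0]]))
```

This is one reason logistic-style models, which squash their output into (0, 1), are usually preferred over plain regression for probability targets.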

Given the nature of my data, some of my features are similar to each other, so I expect noise to be present in the related features as well. Interestingly, when I split my features in the same way I split the dependent variable, there was not much of a difference.

You could make use of sample weighting: assign each sample to the class for which it has the highest probability, but weight it by the probability of the sample actually belonging to that class. For example, X = [1, 2, 3, 4] with a 0.7 probability of being class 0 becomes X = [[1, 2, 3, 4]], y = [0] with a sample weight of 0.7. You could also rescale so that the sample weights span the range from 0 to 1 (because, under this scheme, your probabilities, and hence your sample weights, only range from 0.5 to 1). You could also apply a non-linear penalty to the weights.
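The weighting scheme above can be sketched with scikit-learn's `sample_weight` argument to `fit`; the data and variable names here are made up for illustration, with `p1` the probability of class 1:

```python
import numpy as np
from sklearn.svm import SVC

# Made-up samples and their class-1 membership probabilities.
X = np.array([[1, 2, 3, 4],
              [2, 1, 0, 3],
              [4, 4, 1, 0],
              [0, 1, 2, 2]], dtype=float)
p1 = np.array([0.3, 0.9, 0.55, 0.1])

# Assign each sample to its most probable class...
y = (p1 >= 0.5).astype(int)
# ...and weight it by the probability of that class.
# These weights necessarily lie in [0.5, 1].
w = np.where(y == 1, p1, 1.0 - p1)

# Optionally rescale [0.5, 1] onto [0, 1], so that a maximally
# uncertain sample (p1 = 0.5) gets almost no influence.
w_scaled = (w - 0.5) / 0.5

clf = SVC(kernel="linear")
clf.fit(X, y, sample_weight=w_scaled)
```

A non-linear penalty would simply replace the linear rescaling of `w` with, say, a power or exponential transform, so that uncertain samples are down-weighted even more aggressively.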

