This post is adapted from my Gradient Descent in C post, which goes into more detail; this is just a quick port of the same code to Java, so not much changed. It's a simple example of gradient descent in Java.
Open up Eclipse and go to File -> New -> Project and select Java.
Click Finish, then right-click the "src" folder and add a new class.
Call it "GradDescent" and paste in the code below.
This version doesn't plot anything; it only computes the values of theta0 and theta1.
There is a bug with the number of examples: the arrays hold 16 points but m is set to 14, and setting it to 16 messes up the output because the 15th y-value (6.6 for x = 8.7) doesn't follow the 2x pattern (there's a quick check after the listing that shows this).
Note that the data set mimics f(x) = 2x.
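For reference, each pass of the main loop applies the usual batch gradient descent update for the linear hypothesis h(x) = theta0 + theta1*x; h0 and h1 in the code are the two sums:

theta0 := theta0 - (alpha / m) * sum over j of (h(x[j]) - y[j])
theta1 := theta1 - (alpha / m) * sum over j of (h(x[j]) - y[j]) * x[j]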
Here is the code:
public class GradDescent {
    public static void main(String[] args) {
        float x[] = {2.4f, 1.2f, 0.1f, 1.0f, 2f, 4f, 6f, 1.2f, 2.4f, 3.1f, 4.4f, 5.3f, 6.3f, 7.6f, 8.7f, 9.7f};
        float y[] = {4.8f, 2.4f, 0.2f, 2.0f, 4f, 8f, 12f, 2.1f, 4.2f, 6.3f, 8.3f, 10.4f, 12.7f, 14.7f, 6.6f, 18.8f};

        /* number of examples used; the arrays hold 16 points, but only the
           first 14 are used because the 15th y-value (6.6 for x = 8.7)
           doesn't follow the 2x pattern */
        int m = 14;

        /* thetas and temp thetas (the temps give a simultaneous update) */
        float theta0, temp0, theta1, temp1;
        theta0 = 0.0f; temp0 = 0.0f;
        theta1 = 0.0f; temp1 = 0.0f;

        /* number of iterations and learning rate */
        int iterations = 140000;
        float alpha = 0.07f;

        int j = 0;
        float h0 = 0.0f; /* running sum of errors, (h(x) - y) */
        float h1 = 0.0f; /* running sum of errors weighted by x */
        int i = 0;

        for (i = 0; i < iterations; i++) {
            h0 = 0.0f;
            h1 = 0.0f;
            /* accumulate the two gradient sums over all m examples */
            for (j = 0; j < m; j++) {
                h0 = h0 + ((theta0 + x[j] * theta1) - y[j]);
                h1 = h1 + ((theta0 + x[j] * theta1) - y[j]) * x[j];
            }
            /* gradient descent update for both thetas */
            temp0 = theta0 - (alpha * h0) / (float) m;
            temp1 = theta1 - (alpha * h1) / (float) m;
            theta0 = temp0;
            theta1 = temp1;
        }

        System.out.println("Theta0: " + theta0 + " Theta1: " + theta1);

        /* evaluate the learned hypothesis at x = 3 */
        testGradientDescent(3f, theta0, theta1);
    }

    /* prints theta0 + theta1*n, i.e. the hypothesis evaluated at n */
    private static void testGradientDescent(float n, float theta0, float theta1) {
        float result = theta0 + (theta1 * n);
        System.out.println("Result: " + result);
    }
}
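When it runs, theta0 should come out close to 0 and theta1 close to 2, since the first 14 points roughly follow y = 2x, so testGradientDescent(3f, theta0, theta1) should print a result near 6.

About the examples bug: the snippet below is a minimal sketch (not part of the original program, and it assumes the same x and y arrays as above) that flags any point straying more than 1 from y = 2x. Only index 14 (x = 8.7, y = 6.6) gets reported, which is why bumping m to 16 throws the fit off.

/* drop this into main after the arrays to see which point is to blame */
for (int k = 0; k < x.length; k++) {
    if (Math.abs(y[k] - 2f * x[k]) > 1f) {
        System.out.println("Suspect point " + k + ": (" + x[k] + ", " + y[k] + ")");
    }
}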
Note that this won't work for something like f(x) = 3x^2, since the hypothesis only has a constant and a linear term.
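If you do want to fit a quadratic like that, the hypothesis needs a squared feature with its own theta, h(x) = theta0 + theta1*x + theta2*x^2. Here is a minimal sketch of that idea (my own extension, not from the original post); note the deliberately small x range, since with large x values the x^2 feature would need scaling or a smaller alpha to keep the updates from blowing up:

public class QuadDescent {
    public static void main(String[] args) {
        float x[] = {0f, 0.5f, 1.0f, 1.5f, 2.0f};
        float y[] = {0f, 0.75f, 3.0f, 6.75f, 12.0f}; /* y = 3x^2 */
        int m = x.length;
        float theta0 = 0f, theta1 = 0f, theta2 = 0f;
        int iterations = 140000;
        float alpha = 0.07f;
        for (int i = 0; i < iterations; i++) {
            float g0 = 0f, g1 = 0f, g2 = 0f;
            /* accumulate one gradient sum per theta */
            for (int j = 0; j < m; j++) {
                float err = (theta0 + theta1 * x[j] + theta2 * x[j] * x[j]) - y[j];
                g0 += err;
                g1 += err * x[j];
                g2 += err * x[j] * x[j];
            }
            theta0 -= (alpha * g0) / m;
            theta1 -= (alpha * g1) / m;
            theta2 -= (alpha * g2) / m;
        }
        /* theta2 should come out close to 3, with theta0 and theta1 near 0 */
        System.out.println("Theta0: " + theta0 + " Theta1: " + theta1 + " Theta2: " + theta2);
    }
}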