machine_learning.gradient_descent

Implementation of the gradient descent algorithm for minimizing the cost of a linear hypothesis function.

Attributes

LEARNING_RATE

m

parameter_vector

test_data

train_data

Functions

_error(example_no[, data_set])

_hypothesis_value(data_input_tuple)

Calculates the hypothesis function value for a given input

calculate_hypothesis_value(example_no, data_set)

Calculates the hypothesis value for a given example

get_cost_derivative(index)

output(example_no, data_set)

run_gradient_descent()

summation_of_cost_derivative(index[, end])

Calculates the sum of the cost function derivative

test_gradient_descent()

Module Contents

machine_learning.gradient_descent._error(example_no, data_set='train')
Parameters:
  • data_set – train data or test data

  • example_no – example number whose error has to be checked

Returns:

error in the example pointed to by example_no.
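The error for one example is simply the predicted value minus the actual output. A minimal sketch in terms of the helpers documented below (illustrative, not necessarily the exact source):

    def _error(example_no, data_set="train"):
        # error = hypothesis (predicted) value minus the actual output
        return calculate_hypothesis_value(example_no, data_set) - output(example_no, data_set)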

machine_learning.gradient_descent._hypothesis_value(data_input_tuple)

Calculates the hypothesis function value for a given input.

Parameters:
  • data_input_tuple – input tuple of a particular example

Returns:

Value of the hypothesis function at that point. Note that there is a ‘bias input’ whose value is fixed at 1; it is not explicitly present in the input data, but ML hypothesis functions use it, so it has to be handled separately (line 36 of the source takes care of it).
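Concretely, the bias handling amounts to prepending a constant input of 1 to every example, so parameter_vector[0] acts as the intercept. A sketch under that assumption:

    def _hypothesis_value(data_input_tuple):
        # parameter_vector[0] multiplies the implicit bias input, fixed at 1;
        # the remaining parameters multiply the explicit features.
        hyp_val = parameter_vector[0]
        for i, feature in enumerate(data_input_tuple):
            hyp_val += parameter_vector[i + 1] * feature
        return hyp_val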

machine_learning.gradient_descent.calculate_hypothesis_value(example_no, data_set)

Calculates the hypothesis value for a given example.

Parameters:
  • data_set – test data or train data

  • example_no – example whose hypothesis value is to be calculated

Returns:

hypothesis value for that example.
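Given the ((features), output) layout of train_data and test_data shown under the module attributes, this is plausibly just data-set selection followed by a call to _hypothesis_value:

    def calculate_hypothesis_value(example_no, data_set):
        # Index 0 of each example tuple holds the feature tuple.
        data = train_data if data_set == "train" else test_data
        return _hypothesis_value(data[example_no][0])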

machine_learning.gradient_descent.get_cost_derivative(index)
Parameters:
  • index – index of the parameter vector with respect to which the derivative is to be calculated

Returns:

derivative with respect to that index.

Note: If index is -1, the summation is calculated with respect to the bias parameter.
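For a mean-squared-error cost over m training examples, the partial derivative with respect to one parameter is the summed error-times-feature products averaged over m. A sketch in terms of summation_of_cost_derivative (documented below):

    def get_cost_derivative(index):
        # Average the summed derivative over all m training examples.
        return summation_of_cost_derivative(index, m) / m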

machine_learning.gradient_descent.output(example_no, data_set)
Parameters:
  • data_set – test data or train data

  • example_no – example whose output is to be fetched

Returns:

output for that example
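Each example in train_data and test_data is stored as ((features), output), so fetching the output is a matter of picking the data set and indexing:

    def output(example_no, data_set):
        # Index 1 of each example tuple holds the actual output value.
        data = train_data if data_set == "train" else test_data
        return data[example_no][1]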

machine_learning.gradient_descent.run_gradient_descent()
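This entry has no docstring, but the conventional batch update for this setup is: repeatedly move every parameter against its cost derivative, scaled by LEARNING_RATE, until the vector stops changing. A minimal sketch, assuming numpy is available for the convergence check; the tolerance is illustrative:

    import numpy as np

    def run_gradient_descent():
        global parameter_vector
        while True:
            # Simultaneous update: every new parameter is computed from the
            # old vector. Index i - 1 maps parameter 0 to the bias (index -1).
            temp = [
                parameter_vector[i] - LEARNING_RATE * get_cost_derivative(i - 1)
                for i in range(len(parameter_vector))
            ]
            # Converged once no parameter moves more than the tolerance.
            if np.allclose(parameter_vector, temp, atol=1e-6, rtol=0):
                break
            parameter_vector = temp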
machine_learning.gradient_descent.summation_of_cost_derivative(index, end=m)

Calculates the sum of the cost function derivative.

Parameters:
  • index – index with respect to which the derivative is being calculated

  • end – value where the summation ends; default is m, the number of examples

Returns:

the summation of the cost derivative.

Note: If index is -1, the summation is calculated with respect to the bias parameter.
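A sketch of the summation, assuming training examples stored as ((features), output); for index -1 the multiplying ‘feature’ is the constant bias input 1:

    def summation_of_cost_derivative(index, end=m):
        total = 0
        for i in range(end):
            if index == -1:
                # Bias term: the corresponding input is the constant 1.
                total += _error(i)
            else:
                total += _error(i) * train_data[i][0][index]
        return total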

machine_learning.gradient_descent.test_gradient_descent()
machine_learning.gradient_descent.LEARNING_RATE = 0.009
machine_learning.gradient_descent.m = 5
machine_learning.gradient_descent.parameter_vector = [2, 4, 1, 5]
machine_learning.gradient_descent.test_data = (((515, 22, 13), 555), ((61, 35, 49), 150))
machine_learning.gradient_descent.train_data = (((5, 2, 3), 15), ((6, 5, 9), 25), ((11, 12, 13), 41), ((1, 1, 1), 8), ((11, 12, 13), 41))
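Tying the attributes together: parameter_vector holds one bias weight plus three feature weights, m equals len(train_data), and each example pairs a 3-tuple of inputs with a scalar output. A plausible way to drive the module:

    if __name__ == "__main__":
        run_gradient_descent()   # fit parameter_vector to train_data
        test_gradient_descent()  # compare hypothesis vs. actual on test_data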