Shit, I've fallen behind, but thank goodness I'm still learning. Class is moving along a little too quickly for me and I'm not finding enough explanation in Amit's Clojure in Action book's first half.

I've realized that I needed to take a step back and fill in some of the gaps in my mind. I did this by reading Apress' Practical Clojure by Luke VanderHart and Stuart Sierra, which I'll refer to as the **PC** book in this post. I still need to dig into the state, parallel programming, macro, protocol, and performance chapters, but it really helped explain some things I just wasn't understanding from Amit's book.

It will be interesting to see what the second half of Clojure in Action is about, since that is where it seems to differentiate itself from the other beginner manuals. So, instead of talking about Java interop and concurrency this week, which is what we covered in class, I'll instead go over some of the material I highlighted while reading the PC book, which further illuminated points I hadn't fully understood.

**Tail Recursion**

I didn't realize the *recur* form in Clojure is what you use to explicitly get tail call optimization so you don't blow the stack. In Scheme this happens automatically whenever a recursive call is in tail position; not so with Clojure. On the other hand, you always know you are in tail recursion when you use *recur*!

And here is a great explanation of why a call in tail position is able to be optimized, from page 39 in PC: "If the return value of the “outer” function is wholly delegated to the “inner” function, the call is in tail position. If the “outer” function does anything with the value returned from the inner function except just return it, it is not tail recursive and cannot be optimized. This makes sense when the nature of the call stack is considered; if a call is in tail position, then the program can effectively “forget” that it was called recursively at all and delegate the entire program flow to the result of the inner function. If there is additional processing to do, the compiler can’t throw away the outer function. It has to keep it around in order to finish computing its result."
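
To make the quote concrete, here's a little sketch of my own (not from the PC book) contrasting a recursive call that is not in tail position with one that is:

```clojure
;; NOT in tail position: after the recursive call returns, the outer
;; call still has work to do (the multiplication by n), so each stack
;; frame must stick around, and a big enough n will blow the stack.
(defn fact [n]
  (if (zero? n)
    1
    (* n (fact (dec n)))))

;; In tail position: the recursive step is the entire return value,
;; so recur can reuse the current frame instead of growing the stack.
(defn fact-tail [n]
  (loop [i n
         acc 1]
    (if (zero? i)
      acc
      (recur (dec i) (* acc i)))))
```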

I also benefited from seeing this refactor using the *loop* special form, which shows how much more compact the code can be.

Here is Newton's algorithm to recursively calculate the square root of a number:

```clojure
(defn abs
  "Calculates the absolute value of a number"
  [n]
  (if (< n 0) (* -1 n) n))

(defn avg
  "returns the average of two arguments"
  [a b]
  (/ (+ a b) 2))

(defn good-enough?
  "Tests if a guess is close enough to the real square root"
  [number guess]
  (let [diff (- (* guess guess) number)]
    (if (< (abs diff) 0.001) true false)))

(defn sqrt
  "returns the square root of the supplied number"
  ([number] (sqrt number 1.0))
  ([number guess]
   (if (good-enough? number guess)
     guess
     (sqrt number (avg guess (/ number guess))))))
```

Now check out the refactor of the sqrt function using *loop*, making the code a little tighter:

```clojure
(defn loop-sqrt
  "returns the square root of the supplied number"
  [number]
  (loop [guess 1.0]
    (if (good-enough? number guess)
      guess
      (recur (avg guess (/ number guess))))))
```
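
And a quick REPL check of my own (assuming the helper functions above are loaded). Since good-enough? stops as soon as the squared guess is within 0.001 of the input, the result is close to, but not exactly, 5.0:

```clojure
user=> (loop-sqrt 25)
5.00002... ; approximate; the exact trailing digits depend on the convergence
```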

**Closures**

On page 45 in PC an example is given of a first-class function called rangechecker. It makes use of the fact that functions are first-class values in Clojure: it builds and returns a function, so you can think of rangechecker as a "function factory":

```clojure
(defn rangechecker
  "Returns a function that determines if a number is in a provided range."
  [min max]
  (fn [num]
    (and (<= num max)
         (<= min num))))
```

To use the rangechecker function you can assign the function it returns to a symbol in your REPL, then call it with its single parameter, which needs to be a number:

```clojure
user=> (def myrange (rangechecker 2 10))
#'user/myrange
user=> (myrange 5)
true
```

The definition for a closure given on page 46 of PC is that closures are **first-class functions that contain values as well as code**. The rangechecker function is an example of this.
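
As another illustration of "values as well as code" (my own sketch, not from PC), the function returned below closes over x and remembers it long after make-adder has returned:

```clojure
user=> (defn make-adder [x] (fn [y] (+ x y)))
#'user/make-adder
user=> (def add-five (make-adder 5)) ; the returned fn captures x = 5
#'user/add-five
user=> (add-five 3)
8
```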

**Currying**

Page 46 of PC also talks about currying and uses the partial function to illustrate the concept. Currying refers to the process of transforming a function into a function with fewer arguments by wrapping it in a closure. Since that is pretty heavy for me to understand just by reading, I was thankful for this example using *partial*, which incidentally also helped me better understand how to use *partial*:

```clojure
user=> (def times-pi (partial * 3.14159))
#'user/times-pi
user=> (times-pi 2)
6.28318
```

This could have been written by manually defining the following function, which is really just a wrapper for the multiplication function, supplying a specific value. Sounds like a closure, doesn't it?

```clojure
(defn times-pi
  "Multiplies a number by PI"
  [n]
  (* 3.14159 n))
```

So now *partial* makes sense to me. I also have a better understanding of what closures are used for, and of what it means when someone talks about currying (which, it turns out, has nothing to do with Indian food).

**Sequences**

I'm glad the PC book had its own chapter on sequences, because I had not totally understood how they were used, why, and what exactly they were. From page 73, "Sequences are simply a way to read, write and modify a data structure that is logically a collection of items."
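
To see what "logically a collection of items" buys you, here's a quick REPL demo of my own: the same sequence functions work unchanged across very different concrete data structures:

```clojure
user=> (first [1 2 3])     ; a vector
1
user=> (rest '(1 2 3))     ; a list
(2 3)
user=> (first {:a 1 :b 2}) ; a map, seen as a seq of key/value pairs
[:a 1]
user=> (rest "abc")        ; a string, seen as a seq of characters
(\b \c)
```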

Not only was this a great chapter describing sequences in detail, it was followed by a chapter on the sequence API, which further explained the benefits one can get out of using sequences. I didn't realize that a lot of the functions we were using in our exercises, such as *range*, *partition*, *map*, *first*, *second*, *last*, *rest*, *reduce*, and *apply*, are all part of the sequence API. Ah, my synapses were firing in sequence after reading these two chapters of the PC book.
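
Here's a small composition of my own using a few of the functions from that list, just to show how naturally they chain together:

```clojure
user=> (partition 2 (range 6))
((0 1) (2 3) (4 5))
user=> (reduce + (map #(* % %) (range 1 5))) ; 1 + 4 + 9 + 16
30
user=> (apply max [3 1 4 1 5])
5
```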

In looking at the anatomy of a sequence I was able to finally make the bridge to Lisp's *car* and *cdr* functions, which are the equivalents of Clojure's *first* and *rest*. They were just renamed to make more sense for modern programmers. The sequence golden rule is that all sequences share the conceptual model of a singly-linked list, implemented in terms of *first* and *rest*.
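
Since everything a sequence can do boils down to *first* and *rest*, you can write toy versions of the sequence API in just those terms. Here's a sketch of my own, a simplified sum:

```clojure
(defn my-sum
  "A toy fold written purely in terms of the first/rest model."
  [coll]
  (loop [s (seq coll)
         acc 0]
    (if (seq s)
      (recur (rest s) (+ acc (first s)))
      acc)))

;; user=> (my-sum [1 2 3 4])
;; 10
```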

To construct a sequence you can use the *cons* function, whose name stands for construct. The pair created by *cons* is known as a "cons cell", which is just a simple first/rest pair. Here is how to chain multiple cons cells together programmatically, which just so happens to be a common Lisp idiom:

```clojure
(defn make-int-seq [max]
  (loop [acc nil
         n max]
    (if (zero? n)
      acc
      (recur (cons n acc) (dec n)))))
```

Calling it at the REPL:

```clojure
user=> (make-int-seq 5)
(1 2 3 4 5)
```

The sequence chapter helped me get a firm understanding of laziness. "Briefly stated, lazy sequences are a highly efficient way to operate on data too large to fit in system memory at once." (Page 77, PC)

This handy little combination of print statements and the map function from the sequence API (which, like most of the higher-order functions in the API, returns a lazy sequence) gives great insight into the behavior of a lazy sequence.

First a simple squaring function is defined, which produces print side-effects:

```clojure
(defn square [x]
  (do
    (println (str "Processing: " x))
    (* x x)))
```

To look at the behavior in your REPL, try the following. Assign the result of a call to the *map* function using *square* to a symbol. Then use *nth* to get a specific item in the list. Call *nth* again and you will see that the result was cached. Now print the *map-result* symbol and you will see that only values that were not yet cached are realized.

```clojure
user=> (def map-result (map square '(1 2 3 4 5)))
#'user/map-result
user=> (nth map-result 2)
Processing: 1
Processing: 2
Processing: 3
9
user=> (nth map-result 2) ; already realized, so no "Processing" output
9
user=> (println map-result) ; realizes (and prints) the remaining items 4 and 5
```

When using a lazy sequence, calls are not made until they are absolutely required, which is the point at which the system needs to realize the lazy values.
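
One consequence of that deferral (my own example, not from PC): an infinite sequence is perfectly safe to define, as long as you only ever realize a finite prefix of it:

```clojure
user=> (take 5 (iterate inc 0)) ; iterate builds an infinite lazy seq
(0 1 2 3 4)
```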

Another great example, which illustrates how lazy sequences can eat memory if there are references to the realized sequence, is given on page 82 in PC.

This version eats up heap space. When *nth* is called, the sequence is realized, and the binding holds references to the realized elements:

```clojure
user=> (def integers (iterate inc 0))
#'user/integers
user=> (nth integers 1000000)
1000000
```

Here is the more efficient version using *nth*, which does not retain references because it skips the binding to the *integers* symbol:

```clojure
user=> (nth (iterate inc 0) 1000000)
1000000
```

**State and Parallel Programming**

I still have more to learn regarding state and concurrency, so I have little to say on the matter at this point in time. However, the following explanation of the coordinated state of *refs* made a lot of sense to me:

"The most common example of coordinated state is a transfer of funds between two bank accounts: money deposited into one account must also be subtracted from the others, and these two action must occur as a single, coordinated event, or not at all." (Page 97, PC)

Yay for examples! I also really appreciate the follow-up chapter on state, Chapter 11, Parallel Programming of the PC book, which talks about how to perform concurrent operations, since the discussion of state alone does not address how a program becomes parallel or the strategies involved in doing this.

So that is where I'm at so far in week three of Amit's class. I'm feeling better about my understanding, but this detour to shore up the fundamentals has left me behind. It's now time to catch up and work on the assignments, which I've found really helps to imprint the new concepts. I can only learn so much from reading; at some point I have to switch over to doing, which is a whole different way of using the brain. Time to code!!!