(Insightful) Ramblings by batasrki

No subtitle needed

Implementing a Queue Using an Array


Up tonight, I’m going to write a queue implementation using a vector in Clojure and a fixed-size array in Go. The reason for the latter is that the Data Structures course on Coursera shows an interesting way of making a queue work when the backing data structure is of fixed size.

Essentially, what I will do is implement a circular buffer. As items are popped off the front of the queue to be worked on, I’ll have two pointers that will wrap around to the beginning of the array as needed.
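
The wrap-around itself is just modular arithmetic. Here is a minimal Clojure sketch of the idea (my own illustration, not from the course):

(defn next-index
  "Advance an index by one slot, wrapping back to 0 when it runs off
  the end of a fixed-size buffer."
  [idx size]
  (mod (inc idx) size))

;; (next-index 2 4) ;=> 3
;; (next-index 3 4) ;=> 0

The Go code later in this post does the same thing with an explicit if/else instead of mod.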

Before I get ahead of myself, though, I’m going to do the easy thing first and write a simple implementation in Clojure.

Queue in Clojure with a vector

As I said in the original post, Clojure has a list data structure that is optimized for pushing items to the front of it and a vector data structure whose optimization is for pushing to the back of it. The latter one sounds perfect for a queue implementation.
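
A quick REPL check shows where conj puts things for each:

user> (conj '(1 2 3) 4)
(4 1 2 3)
user> (conj [1 2 3] 4)
[1 2 3 4]

conj adds wherever the collection grows cheapest: the front of a list, the back of a vector.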

As per usual, I’m going to store the state in an atom.

(def my-queue (atom []))

(defn enqueue [item queue]
  (swap! queue conj item))

(defn dequeue [queue]
  (let [item (first @queue)]
    ;; vec keeps the backing store a vector, so later enqueues still
    ;; conj onto the back instead of onto the front of a seq
    (reset! queue (vec (rest @queue)))
    item))

(defn empty*? [queue]
  (= [] @queue))

Here’s the usage of it.

user> (use 'tester.array-queue :reload)
nil
user> my-queue
#atom[[] 0x1b9ca292]
user> (enqueue 2 my-queue)
[2]
user> (enqueue 21 my-queue)
[2 21]
user> (enqueue 1 my-queue)
[2 21 1]
user> (empty*? my-queue)
false
user> (dequeue my-queue)
2
user> (dequeue my-queue)
21
user> (dequeue my-queue)
1
user> (empty*? my-queue)
true

The implementation is straightforward, eased by the tools built into Clojure. conj pushes to the back of a vector. Using an atom and its interface (swap! and reset!) allows me to easily pop the item off the queue and return it. Making new things in Clojure using the existing things is as simple and enjoyable as advertised.
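
As an aside, Clojure also ships a ready-made FIFO collection, clojure.lang.PersistentQueue, which would give the same behaviour without hand-rolling it:

user> (def q (into clojure.lang.PersistentQueue/EMPTY [2 21 1]))
#'user/q
user> (peek q)
2
user> (seq (pop q))
(21 1)

conj adds to the back, peek reads the front and pop drops the front, so it covers the same enqueue/dequeue behaviour as the vector version above.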

Queue in Go using a fixed-size array

Now, I’m going to up the challenge a bit. Go is one of the few languages intended to replace C. Well, at least, that’s how I look at it. It doesn’t have the niceties of other languages similar in age. There are no generics, for example, so there is no generalized data structure interface like there is in Clojure.

There is, however, a nice(-ish) implementation of an array. I’m going to try using that to create a queue.

enqueue first

Go seems to me to be a verbose language, not due to being overly ceremonious like Java, but because it seems as if it’s a DIY language. It gives you some basic stuff, but the rest is up to you. Some like that. I don’t know if I do.

Since I have nearly 0 experience in it, my implementation is likely to be circumlocutory (LOL, thesaurus). I’m going to split it into two sections: enqueue and dequeue. These are implemented as methods on a struct, so that I can encapsulate the computation of the readIdx and writeIdx. It’s kind of OOP in its approach.

type ArrayQueue struct {
  readIdx  int
  writeIdx int
  buffer   [4]int
}

func (aq *ArrayQueue) enqueue(item int) error {
  newWriteIdx := -1
  aq.buffer[aq.writeIdx] = item

  if aq.writeIdx == len(aq.buffer)-1 {
    newWriteIdx = 0
  } else {
    newWriteIdx = aq.writeIdx + 1
  }

  aq.writeIdx = newWriteIdx

  if newWriteIdx == aq.readIdx {
    return errors.New("Queue is full")
  }

  return nil
}

The interesting part here is keeping track of the writeIdx value. Since I’m using a fixed size array, I need to detect when I’ve moved the writeIdx past the end of the array and reset it to the head. I also need to detect when I’ve ran out of space in the array, so that I don’t overwrite items in the queue. Here’s how that would look.

func main() {
  queueables := [5]int{21, 11, 1, 3, 40}
  var err error

  queue := ArrayQueue{readIdx: 0, writeIdx: 0}

  for i := 0; i < len(queueables); i++ {
    err = queue.enqueue(queueables[i])

    if err != nil {
      break
    }
  }
  fmt.Println(queue.readIdx, queue.writeIdx, queue.buffer, err)
}

The queueables array is there just so that I can easily add items to the queue. One of the idioms in Go is to return an error type from a call. I am using that to print out the error when I’ve run out of space in my queue.

The fmt.Println outputs things nicely, 0 0 [21 11 1 3] Queue is full. Number 40 is not added to the queue, since there is no space left.

on to dequeue

The above test will only ever enqueue and it’ll run out of space quickly. For a queue to be useful, it needs to have items removed, as well. Removing items from a queue needs to update the readIdx pointer value. Having this value be equal to writeIdx would mean that the queue is empty.

func (aq *ArrayQueue) dequeue() int {
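  // Note: there is no empty check here; callers are expected to call
  // empty() first, otherwise this returns stale data from the buffer.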
  item := aq.buffer[aq.readIdx]

  if aq.readIdx == len(aq.buffer)-1 {
    aq.readIdx = 0
  } else {
    aq.readIdx = aq.readIdx + 1
  }
  return item
}

The item is fetched using the existing readIdx value, then calculations are made to ensure that the readIdx also wraps around the end of the array (slice, whatever). This makes implementing peek() and empty() easy.

func (aq *ArrayQueue) peek() int {
  if aq.empty() {
    return -1
  }

  return aq.buffer[aq.readIdx]
}

func (aq *ArrayQueue) empty() bool {
  return aq.readIdx == aq.writeIdx
}

The full test output then looks like this:

$ go build stack_array.go && ./stack_array

# initial queuing
0 0 [21 11 1 3] Queue is full

# first dequeue
21

# writeIdx wrapped around so 40 is now in the first array slot
# readIdx points to next item, 11
[40 11 1 3]

# dequeue
11

# dequeue
1

# dequeue
3

# readIdx wraps around and dequeues last item
40

# peek sees nothing, since queue is empty
-1 true

Challenges and conclusion

It’s fairly simple and easy to implement queues and stacks using dynamically-sized backends. Well, at least, it’s simple and easy to do the basics. The backend scales up and down as needed and the programmer is left to focus on the behaviour of each data structure. The tradeoff is that queues and stacks that use dynamically-sized backends (like a linked list) are essentially unbounded. In the worst case scenario, this could cause OoM (Out of Memory) errors on the machine running the implementation.
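
For comparison, here is a rough sketch of how the vector-backed Clojure queue from the top of this post could be made bounded, so it refuses items instead of growing without limit. This is my own variation, and the check-then-act is not atomic, so treat it as a single-threaded sketch only:

(def capacity 4)

(defn bounded-enqueue [item queue]
  ;; refuse the item once the backing vector hits capacity
  (if (>= (count @queue) capacity)
    ::queue-full
    (swap! queue conj item)))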

Writing a basic bounded queue implementation using a circular array has been more challenging. It made me appreciate not just the complexities around keeping track of read and write indexes, but also API design. As the client programmer, I wouldn’t want to keep track of that. All I want is to push items onto a queue, remove them from it and check its state. This has been an illuminating exercise.

Onto trees!

Stacks and Queues Part 2


Following on from the last post, tonight I am going to try and implement the stack using a linked list instead of an array.

Short background

As explained in the course, a linked list might be preferred over an array, because in languages like C, an array has to be of fixed size. Adding one more element to a full stack will cause an overflow error at best and overwrite random memory addresses at worst. Each element of the linked list is dynamically allocated on the heap, which means that the stack’s size is unbounded. The tradeoff of using a linked list is increased memory size. The memory structure of each stack item needs to hold a pointer to the next item in the list, as well as the stored value.

Ruby implementation first

Ruby is a language I have far more experience in, so I’m going to start there. I’m going to reuse the Stack interface from the last post.

class Stack
  def self.push(item, store)
    store.write(item)
  end

  def self.top(store)
    store.read
  end

  def self.pop(store)
    store.read(remove: true)
  end

  def self.empty?(store)
    store.empty?
  end
end

I’m going to supplant the store parameter with a linked list implementation. The linked list needs to have two things, a place to store the value and a pointer to the next item in the list.

class LinkedListStackContainer
  def initialize
    @head = nil
  end

  def write(item)
    node = Node.new(item, head)
    @head = node
  end

  def read(remove: false)
    if remove
      first_node = @head
      @head = @head.next
      first_node.value
    else
      @head.value
    end
  end

  def empty?
    head.nil?
  end

  attr_accessor :head
end

class Node
  attr_accessor :value

  def initialize(item, next_node)
    @value = item
    @next_node = next_node
  end

  def next
    next_node
  end

  private
  attr_accessor :next_node
end

The implementation of a linked list in an object-oriented language is straightforward. You need a Node that holds a value and a pointer to the next Node. Pushing onto a stack built on a linked list consists of the following steps:

  • Add a new Node
  • Set its next_node pointer to the current value of head
  • Set head to point to the new Node

With these steps completed, top and pop mean reading the value of head and setting head to the next node in the list, respectively. An empty check is just checking that head points to nothing.

Here’s how that looks:

2.3.0 :738 > store = LinkedListStackContainer.new
 => #<LinkedListStackContainer:0x007f9067453838 @head=nil>
2.3.0 :739 > Stack.push(1, store)
 => #<Node:0x007f906744b9d0 @value=1, @next_node=nil>
2.3.0 :740 > Stack.push(2, store)
 => #<Node:0x007f9067443b40 @value=2, @next_node=#<Node:0x007f906744b9d0 @value=1, @next_node=nil>>
2.3.0 :741 > Stack.top(store)
 => 2
2.3.0 :743 > Stack.empty?(store)
 => false
2.3.0 :744 > Stack.pop(store)
 => 2
2.3.0 :745 > Stack.pop(store)
 => 1
2.3.0 :746 > Stack.empty?(store)
 => true

Clojure implementation

I suspect that writing a linked list in Clojure will be just as awkward as trying to hide the array as the implementation detail in Ruby. I am going to try anyway.

Clojure has a few macros that allow programmers to write something like OO code. I’m talking about defprotocol and defrecord. I’m going to try using those for my experiment.

Firstly, I’ll define a record to hold my data. The data held is the same, a value and a next-node pointer.

(defrecord StackNode [val next-node])

Then, I’ll define functions that manipulate the stack. I need a helper function to resolve the top of the stack, as well as the expected ones.

(defn head* [stack]
  (cond
    (nil? (:val stack)) nil
    :else stack))

(defn push* [item stack]
  (cond
    (nil? @stack) (reset! stack (StackNode. item nil))
    :else (reset! stack (StackNode. item (head* @stack)))))

(defn top* [stack]
  (:val @stack))

(defn pop* [stack]
  (let [val (:val @stack)]
    (reset! stack (:next-node @stack))
    val))

(defn empty*? [stack]
  (nil? (top* stack)))

I’m storing the current state in an atom, like before, which is why there are reset! calls in the push* and pop* functions. The logic for push* is fairly simple. If the head* returns nil, it means we don’t have anything on the stack, so create a new instance of the defined record. If there is something there, create a new instance of the defined record and set its next-node value to the old instance.

pop* does nearly the opposite. It saves the value of the current instance, then resets the atom such that it points to the next instance, effectively discarding the current one.

empty*? and top* should be self-explanatory. I wanted to use defprotocol to define the read, write, empty? methods as in the Ruby version, but I still don’t fully understand how that works. Here is my desired implementation, so maybe someone can point me in the right direction.

(defprotocol MyStack
  (read* [stack & args])
  (write* [item stack])
  (empty? [stack]))

(extend-type clojure.lang.Atom
  MyStack
  (read* [stack & {:keys [remove] :or {remove false}}]
    (if remove
      (pop* stack)
      (top* stack)))
  (write* [item stack]
    (push* item stack))
  (empty? [stack]
    (empty*? stack)))

(read* my-stack)
(read* my-stack {:remove true})
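
For what it’s worth, here is a sketch of how I think this could be made to work, with two caveats I’m aware of: protocol methods can’t be variadic, so the remove flag becomes a second arity, and the dispatch argument (the stack) has to come first in every method signature. I’ve also renamed empty? to avoid clobbering clojure.core/empty?, and I’m assuming my-stack is defined as something like (def my-stack (atom nil)). Treat this as a guess rather than the definitive way to do it.

(defprotocol MyStack
  (read* [stack] [stack remove?])
  (write* [stack item])
  (stack-empty? [stack]))

(extend-type clojure.lang.Atom
  MyStack
  (read*
    ([stack] (top* stack))
    ([stack remove?] (if remove? (pop* stack) (top* stack))))
  (write* [stack item] (push* item stack))
  (stack-empty? [stack] (empty*? stack)))

;; (write* my-stack 1)
;; (read* my-stack)      ;=> 1
;; (read* my-stack true) ;=> 1, and removes it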

That’s it for tonight. Next up, I’ll try on the queues for a size.

Stacks and Queues, Part 1


I’m auditing the Data Structures course on Coursera. Auditing means I’m taking it for free and not paying them like they want me to. This also means I can’t submit solutions to quizzes. Instead, I will try to write things up here as I learn them.

First up, I’d like to attempt to implement a stack in Clojure and Ruby using an array as the base data structure.

Stack using a list in Clojure and an array in Ruby

A stack is known as a Last In, First Out data structure. The simplest visualization of a stack is dinner plates…well, stacked on top of one another. You can’t take and use the bottom plate without removing all the plates on top of it. It’s a simple data structure, really. The API for a stack is typically the following functions:

  • push (add an item to the front)
  • pop (remove and return the top item)
  • top (return the top item without removing it)
  • empty? (is the stack empty?)

Clojure implementation

My appreciation of Clojure’s coolness is well-documented on this blog. This appreciation continues here. There are a few sequence-like data structures in Clojure, such as a list, vector, and map. For a while, I was confused as to the need for both a list and a vector. Tonight, though, the difference is clear and useful. A list is optimized for pushing to the front of it. A vector is optimized for pushing to the rear of it. This makes writing code for this blog post that much easier.

I’ll use the list as the basis for my stack.

(def my-stack (atom '()))

(defn push* [item stack]
  (swap! stack conj item))

(defn top* [stack]
  (first @stack))

(defn pop* [stack]
  (let [item (top* stack)]
    (reset! stack (rest @stack))
    item))

(defn empty*? [stack]
  (= 0 (count @stack)))

I’m sticking the list in an atom in order to keep state around. I’m also adding * to the function names, so as not to clobber the built-in Clojure functions.

push*, top* and empty*? are straightforward to implement. Pushing onto the stack involves using conj on the list data structure which will add things to the front of the list. Returning the top element without removing it is also simple, as is comparing the number of items in the list to 0, which is our empty check.

pop* is a little trickier, as it’s a two step process. I need to remove the item from the stack, as well as return it. I can actually reuse top* for the first step and then reset my atom to the rest of the list as a means of removing that first one.

Here is how the API works:

user> (use 'array-stack)
nil
user> @my-stack
'()
user> (push* 1 my-stack)
(1)
user> (push* 2 my-stack)
(2 1)
user> (top* my-stack)
2
user> (pop* my-stack)
2
user> @my-stack
(1)
user> (empty*? my-stack)
false
user> (pop* my-stack)
1
user> (empty*? my-stack)
true

Nice! As I keep saying, well-designed languages make things so easy to use. I wonder if I’ll be able to do this as easily in Ruby.

Ruby implementation

Since everything in Ruby is an object, that’s what I’ll start with. Now, Ruby only has a vector implementation, which might make writing a stack implementation a bit more challenging. Luckily, the one thing I don’t need to worry about is the size of the stack.

class MyStack
  attr_accessor :store

  def initialize
    @store = []
  end

  def push(item)
    store.insert(0, item)
  end

  def top
    store.first
  end

  def pop
    store.shift
  end

  def empty?
    store.count == 0
  end
end

Huh. It’s actually pretty simple, as well. Well, after I dug into Ruby’s Array docs, it got simple. The insert method is pretty nice, though I suspect it is not nice at all from a performance point of view, since inserting at index 0 has to shift every existing element.

Here’s the API as implemented in Ruby:

2.3.0 :065 > stack = MyStack.new
 => #<MyStack:0x007fd2fd0e87b0 @store=[]>
2.3.0 :066 > stack.push 1
 => [1]
2.3.0 :067 > stack.push 2
 => [2, 1]
2.3.0 :068 > stack.push 3
 => [3, 2, 1]
2.3.0 :069 > stack.top
 => 3
2.3.0 :070 > stack.store
 => [3, 2, 1]
2.3.0 :071 > stack.pop
 => 3
2.3.0 :072 > stack.store
 => [2, 1]
2.3.0 :073 > stack.empty?
 => false
2.3.0 :074 > stack.pop
 => 2
2.3.0 :075 > stack.pop
 => 1
2.3.0 :076 > stack.empty?
 => true

Reflect and conclude

I think this is a good stopping point. Soon (I was going to say tomorrow night, but let’s face it, these posts aren’t that regular), I will implement a queue using the list/vector data structures. I suspect that it will be equally as easy.

The differences between language philosophies are laid bare, I hope.

In Clojure, not only did I separate the functions that operate on the data structure from itself, I also managed to reuse a function I just created in order to implement another. To me, this is one of the core principles of functional programming and of Clojure. Each function is a self-contained unit of work that operates on the parameters passed into it. I could have just assumed that the atom holding my list is just available, but I am already in the habit of not making that assumption.

On the other hand, my habit in Ruby is to create a class that contains the data that I need to perform computations on and the functions that do those computations. This is the most straightforward way to do things in Ruby. It falls squarely on the language’s golden path. I could, with greater effort, do what I did in Clojure. I could have made two classes, one as the container of the underlying data and another that holds the functions to operate on it.

class StackContainer
  def initialize
    @store = []
  end

  def write(item)
    store.insert(0, item)
    store
  end

  def read(remove: false)
    if remove
      store.shift
    else
      store.first
    end
  end

  def empty?
    store.count == 0
  end

  private
  attr_accessor :store
end

class Stack
  def self.push(item, store)
    store.write(item)
  end

  def self.top(store)
    store.read
  end

  def self.pop(store)
    store.read(remove: true)
  end

  def self.empty?(store)
    store.empty?
  end
end

Usage:

2.3.0 :271 > store=StackContainer.new
 => #<StackContainer:0x007f9068550c08 @store=[]>
2.3.0 :272 > Stack.push(1, store)
 => [1]
2.3.0 :273 > Stack.push(10, store)
 => [10, 1]
2.3.0 :274 > Stack.empty?(store)
 => false
2.3.0 :275 > Stack.top(store)
 => 10
2.3.0 :276 > Stack.pop(store)
 => 10
2.3.0 :277 > Stack.pop(store)
 => 1
2.3.0 :278 > Stack.empty?(store)
 => true
2.3.0 :279 >

I am likely over-complicating things, but that is because this style of writing code in Ruby is totally unfamiliar to me. Also, the underlying implementation is leaking through. I probably need to abstract further, but I honestly don’t know how. On further thought, the initial design looks better to me, as I only expose a very limited set of functions and I can change the underlying data structure and the implementations of the API functions. However, it’s also possible that a user of my stack API will realize that it’s really an array and start to depend on that.

EDIT: With help from my peeps at the [Practicing Developer’s Slack channel](https://practicingdeveloper.com/), I actually came up with a nicer implementation that doesn’t necessarily leak.

I would love to hear people’s thoughts on this. Has anyone had to implement their own stack data structure? What did you use as a basis? How did it work out for you?

Posting Links to Facebook


Tonight, I’ll be doing something slightly different and actually, possibly relevant to my day job. I’m going to take the links I have saved and post them on my Facebook account. Using Clojure and ReactJS! It may be an interesting exploration into interfacing with Facebook or it may not. I have no clue, and isn’t that lack of knowledge what makes life worth living?

Let’s go

I’m guessing I can post straight from Javascript, but I’d like to track how links I share will be received. I can have a core.async process somewhere poll the Facebook API and get data. I’ll have to associate post IDs with the links I share.
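
To make that idea a bit more concrete, here is a rough core.async sketch of what such a poller could look like. fetch-post-stats and handle-stats are hypothetical placeholders; the real versions would call the Graph API and store the results against the saved links.

(require '[clojure.core.async :as async])

;; hypothetical placeholders
(defn fetch-post-stats [post-id] {:post-id post-id :likes 0})
(defn handle-stats [stats] (println stats))

(defn poll-post-stats
  "Poll stats for each shared post every interval-ms milliseconds."
  [post-ids interval-ms]
  (async/go-loop []
    (doseq [id post-ids]
      (handle-stats (fetch-post-stats id)))
    (async/<! (async/timeout interval-ms))
    (recur)))

;; (poll-post-stats ["123_456"] 60000)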

I’ve created a Facebook App from my account. The next thing to do is copy the given JS into a file somewhere. I’m choosing to create a new file js/fb-integration.js and require it in the views.clj file at the end of the body declaration.

(include-js "/js/fb-integration.js")

After loading the app in the browser and checking there are no errors, I’m ready to do the next step. I want to log in and get permissions to post on a Facebook Page. I don’t need to do this unless there’s a desire to post a link to Facebook. So, I’m going to add a button into the Link component much like the delete button already added. The markup for the Link component looks like this now.

<div className={"card card-block"}>
  <h4 className="card-title">
    <a target="_blank" href={this.props.url}>{this.props.title}</a>
    <form onSubmit={this.handleDelete}>
      <input type="submit" value="X"/>
    </form>
    {/* THE NEW THING! */}
    <form onSubmit={this.handleFBShare}>
      <input type="submit" value="Share!" />
    </form>
    {/* END */}
  </h4>
  <p className="card-text">
    Created by {this.props.client} at {this.props.created_at}
  </p>
</div>

I now need to figure out how to invoke the log in modal and have it ask for permissions. Additionally, I need to find out what permissions I need. The documentation seems to suggest that I need the manage_pages and publish_pages permissions, so I’ll add those to the scope of permissions requested on log in. A few experiments in the browser yield the results I want to see.

FB.login(function(resp) { console.log(resp) },
         {scope: ['manage_pages', 'publish_pages']});

//returns
{authResponse: Object, status: "connected"}

FB.api('/me/accounts', function(resp) { console.log(resp) });

//returns
{data: Array[2], paging: Object}

The FB.api call returns a list of pages with their IDs and access tokens that I can use to publish links to. I’m going to send that info to the Clojure backend and see what happens.

Initial front-end implementation

Since this is an experiment, I’m not concerned with the code quality. I just want it to work. Propagating the event from the Link component to the LinkListContainer component works much the same way as it did in the link deletion. I’m going to chain callbacks, which is widely accepted as “the wrong thing to do”, so don’t do this in a real application. The event handler will log in, ask for all the pages using the /me/accounts endpoint, grab the JS object representing the page I’d like to post to, and then send that page’s access token and ID to the server. The server will use that information to actually make the post.

// handlers above
handleFBShare: function(data) {
  var parent = this;
  FB.login(function(response) {
    if(response.status === "connected") {
      FB.api("/me/accounts", function(response) {
        var link_experiments_page = response.data.filter(function(page) {
          if(page.name === "Link Experiments") {
            return page;
          }
        })[0];
        $.ajax({
          url: parent.props.url + "/" + data.id + "/share",
          dataType: "json",
          contentType: "application/json",
          type: "POST",
          data: JSON.stringify({access_token: link_experiments_page.access_token, page_id: link_experiments_page.id}),
          success: function(data) {
            console.log(data);
          }.bind(parent),
          error: function(xhr, status, err) {
            console.error(parent.props.url +"/" + data.id + "/share", status, err.toString());
          }.bind(parent)
        });
      });
    }
  });
}
// render below

Clicking on the Share button elicits a 404, since the server has no endpoint set up. That’s up next.

Initial server-side implementation

This should be as easy as adding a route and a function that responds to the request.

(defn post-link-to-facebook
  [request]
  (println request)
  (let [id (Integer/parseInt (get-in request [:path-params :id]))]
    (ring-resp/response (json/write-str @links))))

All I want to do is inspect the request and, as expected, this is simple. I really like well-designed languages and libraries. The data I sent from the client shows up in the json-params map, so I can extract the access token and the page ID as easily as the link ID.

I’m going to create a function that will take in the above data and make a post. I need to pull in the great http-kit library, as well, so that I can make HTTP requests in a simple manner. Firstly, I’m going to fill out the rest of the handler function.

(defn post-link-to-facebook
  [request]
  (let [link-id (Integer/parseInt (get-in request [:path-params :id]))
        link (first (filter #(= link-id (:id %)) @links))
        page-id (get-in request [:json-params :page_id])
        access-token (get-in request [:json-params :access_token])]
    (create-a-post page-id access-token link)
    (ring-resp/response (json/write-str @links))))

With that done, I’m ready to actually make a post. Firstly, I’m adding http-kit to the list of required libraries.

(:require [org.httpkit.client :as http]
          ;; all the other things)

Now, I’m going to write a quick-and-dirty function to do the posting. I need to hit the graph.facebook.com domain with the ID of the page and its access token. The message field is the actual thing shown. I’ll keep this field really simple right now.

(defn create-a-post [fb-page-id fb-page-token link]
  (let [post-url (str "https://graph.facebook.com/" fb-page-id "/feed?message=Look%20at%20this%20link%20" (:url link) "&access_token=" fb-page-token)
        {:keys [status headers body error] :as resp} @(http/post post-url)]
    (if error
      (println "Failed, exception: " error)
      (println "Success, status: " status))
  ))

If I now reload the app and click Share, I get a post on the desired Facebook Page!

That is great. As much as I dislike Facebook’s privacy nonsense, I have to admit that it is really easy to do things such as this. Of course, I also always enjoy Clojure’s ability to make these things so, so simple, as well.

OK, so I have a few cleanup items left, but I’m happy that it only took a couple of hours to do this feature from scratch. The time taken includes researching everything from how to actually connect to Facebook using its JS SDK, finding the relevant Graph API docs, integrating that into the existing ReactJS app and writing the backend functionality.

TODO:

  1. Clean up that ugly URL creation (see the sketch below)
  2. Figure out how to post sane content
  3. Send a response to the UI so that a shared link doesn’t get shared again
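
For the first item, a sketch of what I have in mind, if http-kit’s :query-params option behaves as I expect (untested here, so treat it as a sketch): let the client build and encode the query string instead of mashing strings together.

(defn create-a-post [fb-page-id fb-page-token link]
  (let [post-url (str "https://graph.facebook.com/" fb-page-id "/feed")
        {:keys [status error]} @(http/post post-url
                                           {:query-params {"message" (str "Look at this link " (:url link))
                                                           "access_token" fb-page-token}})]
    (if error
      (println "Failed, exception: " error)
      (println "Success, status: " status))))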

Deletion on Server-side


Previously, I wrote about how to delete a link from the list of links client-side in the new ReactJS version of my bookmarking app. Tonight, I will finish off that feature by enabling the deletion on the server.

First, a confession

It took me nearly a month to write the follow-up post for a few reasons, but the chief problem was this piece of code:

(defn destroy-link [request]
   (let [id (get-in request [:path-params :id])]
    (reset! links (remove #(= id (:id %)) @links))
    ;; one more line below
)

This looks straightforward, and it should be. Get an ID from the path parameters, match it to the ID key of each map in the list of maps, and remove the map whose value under the ID key matches the passed in parameter. However, this piece of code doesn’t work. Worse yet, it’s not reproducible in the REPL, if you try to substitute the parameter with its value.

Dumb mistakes

The reason the above code doesn’t work is that the path parameter is a string, not an integer. (= "1" 1) returns false and I don’t know how I feel about that. It was a dumb assumption on my part that either Clojure or Pedestal would do the conversion for me, or maybe it’s my lack of knowledge about the framework. In either case, the fix is simple.

(defn destroy-link [request]
  (let [id (get-in request [:path-params :id])]
    (reset! links (remove #(= (Integer/parseInt id) (:id %)) @links))
    (ring-resp/response (json/write-str @links))))

Casting the string to an integer makes the remove function work correctly, which then allows me to set a new state for the links atom.

Moving on

Now that the first item on the previous post’s TODO is done, I can move on with my life.

TODO

  1. Find a cleaner way to propagate the event to the root component
  2. Edit the title field of each link

Deletion Using ReactJS


Tonight, I will start on enabling deletion of links in my ReactJS single-page app. It should be straightforward, but with new tech, you just never know.

Client-side first

On the client-side, I need to add a small form that will submit the DELETE request to the server using the given ID. The most logical place for the markup is on the component, but I suspect that I’ll need to pass the event up to the root component in order to keep the one-way data flow and easy re-rendering.

Markup

Firstly, I am going to amend the Link component to add the delete button.

  <div className={"card card-block"}>
    <h4 className="card-title">
      <a target="_blank" href={this.props.url}>{this.props.title}</a>
      <form onSubmit={this.handleDelete}>
        <input type="submit" value="X"/>
      </form>
    </h4>
    <p className="card-text">
      Created by {this.props.client} at {this.props.created_at}
    </p>
  </div>

Following the pattern laid out for submitting new links, I added the simplest of forms to the title part of the link markup. The onSubmit handler is a local function.

The function, much like the one for link creation, will just pass its specific local state up to the root component, which will take care of the actual request to the backend.

handleDelete: function(e) {
  e.preventDefault();
  var id = this.props.id;
  this.props.onLinkDelete({id: id});
}

Event propagation…the ugly way

I am certain I’m doing things the wrong way, because in order for me to have the root component do the actual communication and state rendering, I have to pass the callback function all the way from the root component down to the lowest level.

var LinkGroup = React.createClass({
// some code
  return (<Link onLinkDelete={callback} key={link.id}
                id={link.id} title={link.title}
                url={link.url} client={link.client}
                created_at={link.created_at} />);
// more code
});

var LinkList = React.createClass({
// some code
  return(<LinkGroup data={link_group}
                    onLinkDelete={callback} />);
// more code
});

var LinkListContainer = React.createClass({
// some code
  <LinkList data = {this.state.data}
            onLinkDelete={this.handleLinkDelete} />
// more code

This works, but I am sure it is not the way. I’ll have to research more.

The actual event handler

Now that I’m passing the handler into the correct spot and returning correct data, I’m going to write the simplest handler that can possibly work.

handleLinkDelete: function(data) {
  $.ajax({
    url: this.props.url + "/" + data.id,
    dataType: "json",
    contentType: "application/json",
    type: "DELETE",
    success: function(data) {
      this.setState({ data: data });
    }.bind(this),
    error: function(xhr, status, err) {
      console.error(this.props.url + "/" + data.id, status, err.toString());
    }.bind(this)
  });
}

This is just the simple, standard jQuery-based AJAX call. The server will return the updated atom that the ReactJS will use to re-render the UI.

That’s it for tonight. Tomorrow, I’m going to hook up the backend.

TODO

  1. Hook up the backend
  2. Find a cleaner way to propagate the event to the root component

Experiments Part 9


Previously in Experiments, part 8,…

I got SSE working on the server-side and was happy about that.

Now…

Tonight, I’m going to hook that up to ReactJS. I am questioning how to do that, though. I know that I want to get, and then set the state of the root component. I wonder what ReactDOM.render returns. Does it return the actual component?

Time for an experiment

var component = ReactDOM.render(<LinkListContainer url="/api/links" />, document.getElementById("content"));

console.log(component);

Very simply, I am assigning the return value of ReactDOM.render to a variable and outputting that to the console. Thank $DEITY for browser consoles. Reloading the page and inspecting the Object returned shows me that it is the root component. Great, I love when software behaves how I think it should.

This makes things quite simple. I will pass the component into the EventSource callback function and do things.

evtSource.addEventListener("link-update", function(e) {
  var currentData = component.state.data;
  var replacement = JSON.parse(e.data);

  var toReplace = currentData.filter(function(obj) {
    if(obj.id === replacement.id) {
      return obj;
    }
  })[0];


  if(toReplace !== undefined) {
    var idx = currentData.indexOf(toReplace);
    currentData.splice(idx, 1, replacement);
    component.setState({data: currentData});
  }
});

The component gives me access to the data attribute holding the collection of links. The link-update event will send me a whole link record. Having those two things enables me to:

  • Find the record that needs replacing, which is accomplished by matching ID fields.
  • Splice the new record into the position where the old one was.
  • Update the component with the new data using the standard setState call.

Simple and easy. I like that. Here is how that looks. Keep watching the top-left corner. You will see the initial value for the title being the URL, then see it change to the actual title.

What next

In some respects, this is now better than what I already have running. In others, it’s not there yet. There are a few other things I’d like to add, though. Firstly, the title fetching code can only handle HTML pages, but I like to save links to PDFs, as well. I have a bit of working code somewhere that I’ll look to integrate. Secondly, I’d like to archive or delete the links I don’t want anymore. Thirdly, I will want to persist the in-memory collection to some data store and be able to retrieve it without modifying a lot.

Well, until the next blog post, good night.

Experiments Part 8


Previously in Experiments, part 7,…

I switched from Compojure to Pedestal in order to take advantage of its support for Server-sent Events. I want to use that to send background updates to the client on completion of a task.

This took a while

I expected this feature to be done very quickly. Up to now, every library I had to integrate into my app did so nearly seamlessly. It was a matter of reading the appropriate section of the library’s documentation, doing a few REPL experiments, integrating the code and doing manual testing.

SSE integration did not go like that. Not at all. It’s mostly my fault, because I did not fully understand how the whole feature works. However, I do have to also blame Pedestal’s SSE guides. Firstly, the guides section does not align with the examples section. Secondly, their example of an SSE usage is the epitome of useless blog post code. I don’t know if I have the permission to copy the code here, so I’ll link to it.

The problem with this example is that this isn’t how I expect anyone to use the SSE support. This code generates 20 events on the server and sends them one by one down to the client, one second apart. It then closes the channel. It’s concise, but it doesn’t help at all. Some of the questions I had when I looked at this code were:

  1. Uh, can I have the implementation in a function in another namespace?
  2. Can I use data from a core.async channel?
  3. If so, how?

My first crack at this nearly worked. By nearly, I mean, I got the first event as expected, which was great! Submitting the second link gave me no event back in the browser. It took me the better part of 2 weeks to figure out just what the hell the problem was.

In the beginning…

I want to try and re-construct this painful process, because I truly believe that others will get snagged on this rough edge in Pedestal. Maybe, just maybe, providing a failure case and a solution in this widely read blog series will help someone somewhere.

Adapting the code from the guide I linked above, I end up with something roughly like this:

;; Add another channel to the set there already.
;; This channel will receive the link with the title applied
;; It will then forward that onto the SSE channel completing the pipeline
(def update-chan (chan))

;; Put the record on the channel after processing it
(defn update-atom []
  (go
   (let [record (<! title-chan)]
     (log/trace "Got title" (:title record))
     (log/trace "Updating the atom")
     (swap! links update-title record)
     (>! update-chan record))))

;; SSE functions as I understand them being needed

;; This one does the actual work
(defn stream-link-updates [channel]
  (go
   (let [record (<! update-chan)]
     (>! channel {:name "link-update" :data (json/write-str record)}))))

;; This one is called by the route setup below and is the initial entry point into the whole thing
(defn stream-ready [channel ctx]
  (stream-link-updates channel))

;; This is added to `defroutes` to set up a SSE streaming
["/stream" {:get [::stream (sse/start-event-stream stream-ready)]}]

This is the server-side implementation as I understand it and how I would like it to fit into my app. Since core.async allows me to set up a pipeline, I would like the SSE event to be at the end of that pipeline. I’ve used go blocks for all pipeline parts, so it makes sense to me to do so here, too.

I need to name the event, so that I can attach a JS listener to it and convert the record into JSON in order to send it to the client.

var evtSource = new EventSource("/stream");
evtSource.onmessage = function(e) {
    console.log("in onmessage");
    console.log(e);
}

evtSource.onerror = function(e) {
    console.log("in onerror");
    console.log(e);
}

evtSource.addEventListener("link-update", function(e) {
    console.log("in event listener");
    console.log(e);
    console.log(JSON.parse(e.data))
});

I’m keeping the JS side extremely simple for now. All I want to see is an event in the console log every time I submit a link.

With all that set up, I fire up the REPL and start the app server.

lein repl
blog-post-app.server=>(run-dev)

I visit http://localhost:8080, post a test link and I get a result! Whoop, there it is. I post another, nothing.

What is going on? How can one work, but not another? I reload the browser and in the console, I see the expected event. This tells me that the record was put to update-chan, but it never got picked up on the other side and sent to the browser. In the REPL log output, I notice this exception:

ERROR i.p.http.impl.servlet-interceptor - {:line 105, :msg "An error occured when async writing to the client", :throwable #error {
 :cause "Broken pipe"
 :via
 [{:type org.eclipse.jetty.io.EofException
   :message nil
   :at [org.eclipse.jetty.io.ChannelEndPoint flush "ChannelEndPoint.java" 200]}
  {:type java.io.IOException
   :message "Broken pipe"
   :at [sun.nio.ch.FileDispatcherImpl write0 "FileDispatcherImpl.java" -2]}]

Broken pipe means that a connection got shut down uncleanly on one side of it. That does not match my understanding of things. Pedestal is supposed to be keeping that connection open for me. It says in that guide that it sets up a heartbeat to ensure that. Why does it shut down uncleanly then?

The quest…

I then went off to find the reason why this happened and how I can fix it. I first opened an issue in Pedestal’s Github repo, even though I didn’t ever believe that it was a bug. I guess this is the accepted norm in the Ruby community and I got used to it. One of the maintainers was polite, helped me out a bit with the code and pointed me to the mailing list. I would like to note, though, that the first thing he says I should be aware of isn’t supported by the guides. In the guide, the stream-ready-fn clearly sets up a function that sends events. I have not seen any counter-examples, but maybe my searching skills aren’t up to par.

The crucial thing in that explanation is that I need a go-loop function rather than a go one. OK, that’s easy to fix.

(defn stream-link-updates [channel]
  (go-loop []
   (let [record (<! update-chan)]
     (>! channel {:name "link-update" :data (json/write-str record)}))))

I add go-loop to the list of functions referred to from core.async, reload the server, run the same test process, and I get the same result. This time, though, I get the broken pipe error almost immediately after the successful SSE message.

I post to the mailing list next, as suggested by the maintainer. I’ve yet to receive a response there. I then turn to the Clojurians Slack channel. People here are awesome and I am grateful for their help and understanding.

Jonah Benton (I can’t find his Twitter nor Github profiles) helped me out tremendously. He provided me with a real clue as to what was going on.

The solution…

This goes back to me not fully understanding nor reading through Pedestal documentation. Jonah said off-handedly that Pedestal’s interceptors always take a parameter when they’re being called. SSE was just another interceptor and I should have my functions return either the event channel or context all the way back to the original stream-ready function. This is exactly what I ended up doing.

;; Loop, but always return the event channel that was passed in
(defn stream-link-updates [channel]
  (loop []
    (let [record (<!! update-chan)]
      (>!! channel {:name "link-update" :data (json/write-str record)})
      (recur)))
  channel)

Actually, I ended up doing a hybrid of the two pieces of advice. I return the channel as advised by Jonah, but I also make sure that I set up an infinite loop, because I don’t know when the next update will come. As I expected initially, this makes the whole thing work.

Reflection…

It took me quite a few nights of frustration to get to this point. I should have read through Pedestal’s docs. In my defence, though the word “interceptor” is mentioned, there were no links to the Interceptors documentation. Was I expected to read all the documentation provided? I guess so.

The mailing list seems dead, to be honest. I don’t know how many views there are on my post right now, but 0 responses in 3 weeks does not bode well for that particular medium. If you are interested in Clojure, though, the Clojurians Slack channel is the place to be.

I am happy that I got the implementation working as I originally envisioned it. I still need to handle errors and different content types (like, what happens if I give it a PDF link?), but next, I will get ReactJS to…well, react to the incoming SSE message. I hope that this will be less frustrating than what I had on the server.

Until the next blog post.

Experiments Part 7


Previously in Experiments, part 6,…

I used core.async to background a slow task, namely fetching the HTML of the saved URL and parsing out the <title> tag.

Up next…

I need to do a bit of yak-shaving. As I alluded to before, I want to use Server-sent Events to send the above background update to the client on completion. To do that, I have to switch libraries. Up to now, I’ve used Compojure, but after reading up on SSE and Clojure, I have been convinced that I need to use either Pedestal or yada.

After doing a little bit of research, I feel that Pedestal will suit me better as it uses core.async to do all of its async things and hey, I’m using that, too! With that settled, I am now realizing that I need to port all of my currently written code to this new way of doing things. There doesn’t seem to be a magical “take this Compojure app and make it a Pedestal app” command in Leiningen, so I am left with a few options. I think what I’ll do is generate a Pedestal app, then copy over the generated bits into the current app and smush them together. I hope it works!

Go, go, go, lein generator!

OK, in a directory above the current app directory, I need to run the Leiningen generator.

lein new pedestal-app prototype

This goes and does things and now I have a new directory. I now need to copy over bits from project.clj, the whole server.clj and service.clj, the config, log, and target directories. Then, in the new service.clj file, I need to copy over the code from handler.clj. All in all, this is how the project and service files look.

project.clj
(defproject blog-post-app "0.0.1"
  :description "ReactJS bookmarker backed by experimental Clojure stuff"
  :url "http://particletransporter.io"
  :min-lein-version "2.0.0"
  :dependencies [[org.clojure/clojure "1.7.0"]
                 [org.clojure/data.json "0.2.6"]
                 [org.clojure/core.async "0.2.374"]
                 ;;[compojure "1.4.0"]

                 [io.pedestal/pedestal.service "0.4.1"]
                 [io.pedestal/pedestal.jetty "0.4.1"]
                 [ch.qos.logback/logback-classic "1.1.3" :exclusions [org.slf4j/slf4j-api]]
                 [org.slf4j/jul-to-slf4j "1.7.12"]
                 [org.slf4j/jcl-over-slf4j "1.7.12"]
                 [org.slf4j/log4j-over-slf4j "1.7.12"]

                 [enlive "1.1.5"]
                 [hiccup "1.0.5"]]
  :resource-paths ["config" "resources"]
  :uberjar-name "blog-post-app.jar"
  :profiles
  {:dev {:dependencies [[io.pedestal/pedestal.service-tools "0.4.1"]
                        [cider/cider-nrepl "0.10.0"]]}
   :uberjar {:aot [blog-post-app.server]}}
  :main ^{:skip-aot true} blog-post-app.server)
service.clj
(ns blog-post-app.service
  (:require [io.pedestal.http :as bootstrap]
            [io.pedestal.http.route :as route]
            [io.pedestal.http.body-params :as body-params]
            [io.pedestal.http.route.definition :refer [defroutes]]
            [io.pedestal.log :as log]
            [ring.util.response :as ring-resp]
            [clojure.data.json :as json]
            [clojure.core.async :as async :refer [>! <! go chan]]
            [net.cgrand.enlive-html :as html]
            [blog-post-app.views :as views]))

(def links (atom '({:id 1 :url "https://google.ca" :title "Google" :client "static" :created_at "2015-12-01"}
                   {:id 2 :url "https://twitter.com" :title "Twitter" :client "static" :created_at "2015-12-01"}
                   {:id 3 :url "https://github.com" :title "Github" :client "static" :created_at "2015-12-08"}
                   {:id 4 :url "https://www.shopify.ca" :title "Shopify" :client "static" :created_at "2015-12-08"}
                   {:id 5 :url "https://www.youtube.com" :title "YouTube" :client "static" :created_at "2015-12-08"})))

(defn fetch-url [url]
  (with-open [inputstream (-> (java.net.URL. url)
                              .openConnection
                              (doto (.setRequestProperty "User-Agent"
                                                         "ReadLaterCrawler/1.0 ..."))
                              .getContent)]
    (html/html-resource inputstream)))

(defn get-title [url]
  (first (map html/text (html/select (fetch-url url) [:title]))))

(def request-chan (chan))
(def title-chan (chan))

(defn update-title [data record]
  (map #(if (= (:id record) (:id %))
          (assoc % :title (:title record))
          %) data))
(defn update-atom []
  (go
   (let [record (<! title-chan)]
     (log/trace "Got title" (:title record))
     (log/trace "Updating the atom")
     (swap! links update-title record))))

(defn async-get-title []
  (go
   (let [record (<! request-chan)]
     (log/trace "Got the request, processing....")
     (>! title-chan (assoc record :title (get-title (:url record)))))))

(defn async-request-title [new-link]
  (go
   (log/trace "Putting the request on channel")
   (>! request-chan new-link)))

(defn home-page
  [request]
  (async-get-title)
  (update-atom)
  (ring-resp/response (views/index)))

(defn list-links
  [request]
  (ring-resp/response (json/write-str @links)))

(defn create-link
  [request]
  (let [next-id (inc (apply max (map #(:id %) @links)))
        new-link (:json-params request)
        new-link-with-fields (assoc new-link :id next-id :title (:url new-link) :created_at "2015-12-21")]
    (async-request-title new-link-with-fields)
    (swap! links conj new-link-with-fields)
    (ring-resp/response (json/write-str @links))))

(defroutes routes
  ;; Defines "/" and "/about" routes with their associated :get
  ;; handlers.
  ;; The interceptors defined after the verb map (e.g., {:get
  ;; home-page}
  ;; apply to / and its children (/about).
  [[["/" {:get home-page}
     ^:interceptors [(body-params/body-params) bootstrap/html-body]
     ["/about" {:get about-page}]
     ["/api/links" {:get list-links
                    :post create-link}
      ^:interceptors [bootstrap/json-body]]]]])
(def service {:env :prod
              ::bootstrap/routes routes
              ::bootstrap/resource "/public"
              ::bootstrap/type :jetty
              ::bootstrap/port 8080})

With that, I start the REPL and run the server from it.

lein repl
REPL
(server/start runnable-service)

A proof in the form of a GIF

That’s it for now. The stage is set up for SSE, which will be the next thing I tackle. Until then.

P.S.

I deployed the app to Heroku, here. The code can be seen here.

Experiments Part 6


Previously in Experiments, part 5,…

I added a synchronous way of fetching a title from an HTML page, using Enlive.

Let’s async this thing

OK, I want to get the quick response back, but still keep this new functionality. This is where I introduce core.async.

;; in project.clj
[org.clojure/core.async "0.2.374"]

;; in handler.clj's require
(:require [clojure.core.async :as async :refer [>! <! chan go]])

This library is based on concepts presented in a book called Communicating Sequential Processes. The core premise is that certain types of tasks can be handed off to a different process/thread, much like how web developers use background queues to do tasks like sending e-mail, etc. The beautiful thing about Clojure’s implementation is that it’s just a set of macros around core language features. That book is really worth reading. I am saying that more to myself than anyone else reading this.

Using this library, I am going to set up a pipeline that does the synchronous fetch-and-parse in an asynchronous way. For that, I need to set up 2 channels and a few functions that put values and take values from the channel and update things in the background.

(def request-chan (chan))
(def title-chan (chan))

(defn update-title [data record]
  (map #(if (= (:id record) (:id %))
          (assoc % :title (:title record))
          %) data))

(defn update-atom []
  (go
   (let [record (<! title-chan)]
     (println "Got title" (:title record))
     (println "Updating the atom")
     (swap! links update-title record))))

(defn async-get-title []
  (go
   (let [record (<! request-chan)]
     (println "Got the request, processing....")
     (>! title-chan (assoc record :title (get-title (:url record)))))))

(defn async-request-title [new-link]
  (go
   (println "Putting the request on channel")
   (>! request-chan new-link)))

Reading from the bottom up, the code does the following steps.

  1. Put the newly created link record to the request-chan channel
  2. Take the record off the request-chan channel, fetch the title from the HTML page, put the update onto title-chan channel
  3. Take the updated record off the title-chan channel and update the links atom.

Because the update is done in the background, I won’t see the effect until a server reload, but that’s fine for now. I have plans for that (did someone say Server-sent Events?). Programming can be so much fun, sometimes.

I want to explain the update-title function before moving on, though. Here’s the source for it, again.

(defn update-title [data record]
  (map #(if (= (:id record) (:id %))
          (assoc % :title (:title record))
          %) data))

;; called with
;; (swap! links update-title record)

The function loops over the list of links, which are represented as hashmaps. If the id of the current item is equal to the one we want to update, we update the title value. If not, we just copy the link map into the new data set. I admit, this is a lot of rigmarole, but again, it’s fun to try not to use a database as a crutch. Once I do commit to having a database, this code will likely go away.
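
A quick REPL-style check of what that does (example values are mine):

user> (update-title [{:id 1 :title "A"} {:id 2 :title "B"}] {:id 1 :title "Google"})
({:id 1, :title "Google"} {:id 2, :title "B"})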

Integrating the new way

Now that there’s a way for me to update the atom, I need to integrate that into the flow of a web request.

(GET "/" []
  (async-get-title)
  (update-atom)
  (views/index))

(POST "/api/links" request
  (let [next-id (inc (apply max (map #(:id %) @links)))
        new-link (:body request)
        new-link-with-fields (assoc new-link :id next-id :title (:url new-link) :created_at "2015-12-15")]
    (async-request-title new-link-with-fields)
    (swap! links conj new-link-with-fields)
    (json/write-str @links)))

When I load the view, I call the 2 background functions, so that the go blocks are set up. The go blocks are where the puts and takes on the channels are executed. These functions will wait for new input to come in, but the go blocks return immediately, so the request isn’t held up waiting on them.
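
As a tiny illustration of that last point (mine, not from the app), a go block hands back a channel right away and its body just parks until something arrives:

user> (require '[clojure.core.async :refer [chan go <! >!!]])
nil
user> (def c (chan))
#'user/c
user> (def result (go (println "got" (<! c)))) ;; returns immediately, nothing printed yet
#'user/result
user> (>!! c :hello)
got :hello
true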

The code that creates a link is changed, as well. Firstly, the synchronous fetch is reverted to what I had initially, where the title is just the URL. Then, just before adding the new link, a request is submitted for the title attribute of the URL. This kicks off the asynchronous pipeline.

Incoming GIF!

This is how this looks now.

The speed of update is back, since we’re just pre-pending a record to an in-memory data structure. After a reload (or 2), the title is changed, demonstrating the background work.

That’s about it for now. Until the next post.