Saturday, August 15, 2015

Dear Sir Cedric and classmates (81515 lecture):

I am maximizing the use of this blog for my thoughts, not only on the assigned readings, but also on the lecture discussion =]

During the lecture, the idea of specific functionalities that are implemented on the network layer versus those that are implemented on the application layer was given a little limelight (I remember we even had a slide dedicated to a practice exercise for this =] ).

I just want to ask Sir, and fellow classmates, for a good example of a functionality that should be implemented on the network layer, a functionality that should be implemented on the application layer, and a functionality that can be implemented on either layer.

Thanks =]

P.S.
Will there really be a significant difference if we implement a functionality in a host when it is actually intended for the network?

Pancake of the Day: The Design Philosophy of the DARPA Internet Protocols

Ping:
    If the paper of Cerf and Kahn focuses on the "now" (during their time) of the internet, David Clark's paper gives emphasis to the "tomorrow". He refers to the internet as the "Internet" (with the big "I"), which I think means that by that time, it was already established that there is a global interconnection of networks. Also, if the motivation for the former's paper was connecting the existing networks, this paper aims to explore the possibility of having a multi-media network.

   Since this paper was written circa 15 years after the birth of the internet, it depicts some improvements over the original work. It's like the chiffon cake that evolved from the flat wheat cake our ancestors ate.

   First, the author mentioned the fundamental goal for the Internet - to "develop an effective technique for multiplexed utilization of existing interconnected networks." He then gave the metrics that define what an "effective technique" is.

   Second, he explained the goals in detail. For the objective of survivability, the author gave birth to the term "fate-sharing". It suggests gathering the state information at the endpoints of the communication, so that the state is lost only if the endpoint itself is lost. This is the opposite of replication, wherein state information must be protected inside the network. He also highlighted "datagrams", which let the packet switches stay stateless, in connection with "fate-sharing".

   The next goal explained was the support for different types of services. Here he explained that, to achieve this goal, the Internet should not rely only on TCP but should also offer UDP (User Datagram Protocol).
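
   To picture the difference between the two services, here is a minimal Python sketch I made (the loopback address and port 9999 are just placeholders, not anything from the paper):

        import socket

        # UDP: connectionless datagram service -- no handshake, no delivery guarantee.
        udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # This send succeeds even if nobody is listening on port 9999:
        # the network makes no promise that the datagram arrives.
        udp.sendto(b"a single, self-contained datagram", ("127.0.0.1", 9999))
        udp.close()

        # TCP: connection-oriented byte stream -- handshake first, then ordered,
        # reliable delivery with retransmission of lost segments.
        tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            tcp.connect(("127.0.0.1", 9999))  # fails unless a server is listening
            tcp.sendall(b"bytes delivered in order, or an error is raised")
        except OSError:
            print("no server listening -- TCP will not pretend the data was sent")
        finally:
            tcp.close()

   The contrast is exactly the point of this goal: some applications (like voice) would rather lose a datagram than wait for a retransmission, while others (like file transfer) need TCP's guarantees.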

   Moreover, another objective is to connect not only the existing networks but also other military and commercial facilities.

   Detailed descriptions of other miscellaneous objectives were stated in the paper.



Pong:
   This is just my personal view of the paper. I found it somewhat aggressive, questioning the original design of the internet. The author used words like "should be like this", "should be like that", and "would have been like this", which I think gave it a negative tone for me.

   On the other hand, I like how he saw the "future" of the internet. The goals he enumerated were in line with that.
  
   Some questions that I had while reading the paper:

   1. On the discussion about survivability, the author said "the entities communicating should be able to continue without having to reestablish or reset the high level state of their conversation." What is a high level state of a communication?

   2. On the topic of fate-sharing, the paper says to "take this information and gather it at the endpoint of the net, at the entity which is utilizing the service of the network." Where is the endpoint? In the host? Where in the host? In the packet or segment? Will it be included in the ES, EM, or REL fields in the header?

   3. On the explanation of TCP, it is indicated that "this function was moved to the IP layer when IP was split from TCP, and IP was forced to invent a different method of fragmentation." I am curious about the new method of fragmentation that IP implemented.
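
   From what I've read elsewhere, IPv4 does this with header fields: an Identification number, a More-Fragments flag, and a Fragment Offset counted in 8-byte units, with reassembly happening only at the destination host. Here is a toy Python sketch of my understanding (the 20-byte header and the sizes are simplifications, not real packet code):

        def fragment(payload: bytes, mtu: int, header_len: int = 20):
            """Split a payload the way IPv4 does: each fragment carries an
            offset (in 8-byte units) and a More-Fragments flag; only the
            destination host reassembles."""
            chunk = (mtu - header_len) // 8 * 8  # fragment data must be a multiple of 8 bytes
            frags = []
            for off in range(0, len(payload), chunk):
                frags.append({
                    "offset": off // 8,                            # in 8-byte units
                    "more_fragments": off + chunk < len(payload),  # False only on the last piece
                    "data": payload[off:off + chunk],
                })
            return frags

        for f in fragment(b"x" * 4000, mtu=1500):
            print(f["offset"], f["more_fragments"], len(f["data"]))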


Pancakes!
   I ran out of flour :(

Pancake of the Day: A Protocol for Packet Network Intercommunication

Ping:
    This paper by Vinton G. Cerf and Robert E. Kahn describes the internal structure of a packet switching network and explains the protocol they used to make individual networks communicate with one another, thus giving birth to the internet.

    First, they identified the characteristics of an individual packet switching network - ways of addressing, data sizes, time delays, procedures for restoration from failures, status information, routing, and fault detection and isolation. Keep in mind that these features could be implemented in different ways in different networks. If that's the case, how would these networks be able to communicate with each other? Imagine a Chinese tourist asking a French baker about the price of his eclair while the baker is selling his baguette to a German passer-by who is asking a Finnish girl for directions, and all of them are using their native languages!

    To address the differences between networks, the authors introduced the gateway. What does it do? A gateway receives a packet of data from one network and reformats the packet according to the requirements of the network where the packet will be sent next. It's as if the gateway were a translator who knows how to speak Chinese, French, German, and Finnish.
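
    Here is how I imagine that translation step in Python (the two header formats below are completely made up for illustration; the paper does not define them this way):

        def gateway_forward(packet_from_a: dict) -> dict:
            """Re-wrap a (hypothetical) Network-A packet into a (hypothetical)
            Network-B packet; the end-to-end payload passes through untouched."""
            return {
                "b_dst": packet_from_a["dest_addr"],  # re-encode the address field
                "b_len": len(packet_from_a["body"]),  # suppose B wants an explicit length
                "b_payload": packet_from_a["body"],   # internetwork data is unchanged
            }

        pkt = {"dest_addr": "host-42", "body": b"hello across networks"}
        print(gateway_forward(pkt))

    Only the local wrapping changes from hop to hop; the internetwork data rides through, just like the meaning of a translated sentence.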

    Furthermore, the paper also discusses how the transmission control program (TCP) handles the sending and receiving of messages of processes within a network and within an internetwork. The protocol maximizes the use of TCP - network and port addressing, chopping messages into segments, reassembling and sequencing the segments, retransmitting lost segments, flow control, and connections and associations. The algorithms for these processes were discussed in detail by the authors. TCP is like flour to pastries and bread!
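
    To convince myself I understood the segmentation and resequencing part, I wrote this small Python sketch (the segment size and names are my own, not the paper's):

        import random

        def segment(message: bytes, seg_size: int):
            """Chop a message into (sequence number, data) segments."""
            return [(seq, message[seq:seq + seg_size])
                    for seq in range(0, len(message), seg_size)]

        def reassemble(segments):
            """Rebuild the message even if segments arrived out of order."""
            return b"".join(data for _, data in sorted(segments))

        segs = segment(b"the quick brown fox jumps over the lazy dog", seg_size=8)
        random.shuffle(segs)      # simulate the network reordering the segments
        print(reassemble(segs))   # the sequence numbers put everything back

    Retransmission is the same idea one step further: if a sequence number never shows up, the receiver withholds the acknowledgement and the sender sends that segment again.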



Pong:
   I really enjoyed the paper. I was in awe while reading about the algorithms for the TCP processes, the creation of gateways, the headers in segments, etc. The details about the information header are complete; the fields answer the needs for segmentation, retransmission, and so on. I just have a question for the people behind the packet network intercommunication project: how did you come up with these solutions? Honestly, I want to see the source code. I want to know how they implemented the protocol.


   Also, some parts of the paper weren't clear to me. I must have missed some details and didn't understand the explanation. I have two questions.

   1. In the part about gateways fragmenting packets, what if an intermediate network has a smaller packet size than the destination network? Imagine the diagram given in the paper.

   
    Network A will send packets to Network C. The packets will traverse Network B. Assume Network B has a smaller packet size, so Gateway M fragments each packet into smaller pieces. Then, going to Network C, if Network C has a bigger packet size than Network B, how will Gateway N send the packets? Gateways don't reassemble packets. Does that mean the packets will be sent to Network C as the smaller pieces produced by Gateway M? So the packet size in Network C will not be maximized? Will TCP in Network B fix the issue? (I tried to sketch the scenario below.)
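
    Here is the back-of-the-envelope version of my scenario in Python (all the sizes are invented):

        SIZE_B, SIZE_C = 256, 1500      # hypothetical maximum packet sizes
        packet = 1000                   # bytes in a packet leaving Network A

        n_frags = -(-packet // SIZE_B)  # ceiling division: Gateway M cuts it into 4
        print(f"Gateway M emits {n_frags} fragments of at most {SIZE_B} bytes")
        print(f"Gateway N forwards the same {n_frags} small fragments into Network C,")
        print(f"even though Network C could carry {SIZE_C}-byte packets")

    If gateways really never reassemble, then the fragments stay small all the way to the destination host, which is exactly what I am asking about.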

   2. In the part about flow control, the paper says that "the flow control mechanism can reduce the window even while packets are en route from the sender whose window is presently larger." Why? Acknowledgements in the process header from the receiver to the sender carry a "suggested window". If the sender receives this suggested window and then sends packets according to the suggestion, why would the receiver reduce the window? Or did I just mix up the information from the paper, and the "suggested window" is actually how reducing the window is implemented? Nonetheless, why would there be a need to reduce the window? When do you reduce the window, and when do you enlarge it?
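
   My current guess, sketched in Python (the buffer sizes are invented), is that the window advertises how much buffer the receiver has left, so it shrinks whenever data arrives faster than the receiving process consumes it:

        buffer_capacity = 8   # receiver buffer, in segments
        buffered = 0          # segments waiting for the receiving process to read

        def advertised_window():
            """The 'suggested window': how many more segments the receiver can hold."""
            return buffer_capacity - buffered

        print("window before any data:", advertised_window())             # 8

        buffered += 5         # five segments arrive; the process hasn't read them yet
        print("window while the process is busy:", advertised_window())   # 3 -- reduced!

        buffered -= 4         # the process finally consumes four segments
        print("window after it catches up:", advertised_window())         # 7 -- enlarged

   If that guess is right, the sender can still have packets in flight from the old, larger window, which would explain the sentence I quoted. Please correct me if I'm wrong!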


Pancakes! (with extra toppings)
    What if the internet was built in a different way? What if the researchers had prioritized a different feature of the internet? What if, instead of having a gateway that chops the packets, we had some sort of program that shrinks them (like goo.gl, which shortens URLs)?