After botching the disassembly of an iPhone 6 several months ago, I screwed up again last night trying to assemble a PC from components. I brazenly refuse to see it as a shame, because nowadays electronics are becoming so tiny and sophisticated. As manufacturers cram more and more stuff onto ever-shrinking floor plans, hardware is also becoming more and more DIY-hostile. You can stretch your imagination freely at the software level, but tweaking things at the hardware level is hard, as proved by those scary moments when I accidentally broke a pin on a PCB and pulled a piece of plastic off my Samsung laptop case.
Covariance and Contravariance in Scala
One great feature brought by Scala is that you can do generics in very succinct ways. Also convenient is declaring covariance and contravariance on generic classes, by writing something like class A[+T] or class A[-T]. The former is termed "covariance", meaning that if U is a subclass of V then A[U] is a subclass of A[V], while the latter is termed "contravariance" and has the opposite meaning: if V is a subclass of U then A[U] is a subclass of A[V].
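To make the two annotations concrete, here is a minimal sketch, assuming two hypothetical classes Animal and Dog that are not part of the original discussion:

class Animal
class Dog extends Animal

// +T: Box[Dog] is a subtype of Box[Animal]
class Box[+T]

// -T: Printer[Animal] is a subtype of Printer[Dog]
class Printer[-T]

object VarianceSketch extends App {
  val animals: Box[Animal] = new Box[Dog]            // allowed by covariance
  val dogPrinter: Printer[Dog] = new Printer[Animal] // allowed by contravariance
  println("both assignments type-check")
}

Flip either assignment (for example, try assigning a Box[Animal] to a Box[Dog]) and the compiler refuses.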
The wording of the above principles is easy to memorize and seemingly straightforward to comprehend, until you come across this example listed at https://docs.scala-lang.org/tutorials/tour/variances.html
class Stack[+T] {
  def push[S >: T](elem: S): Stack[S] = new Stack[S] {
    override def top: S = elem
    override def pop: Stack[S] = Stack.this
    override def toString: String = elem.toString + " " + Stack.this.toString
  }
  def top: T = sys.error("no element on stack")
  def pop: Stack[T] = sys.error("no element on stack")
  override def toString: String = ""
}

object VariancesTest extends App {
  var s: Stack[Any] = new Stack().push("hello")
  s = s.push(new Object())
  s = s.push(7)
  println(s)
}
This is an immutable stack implementation where every time you push onto or pop from the stack, you get a new Stack instance. Obviously, this example is about covariance, but it appeared quite insane to me, because the line def push[S >: T](elem: S) seems to flatly contradict the line var s: Stack[Any] = new Stack().push("hello"). Why? Because the method definition says you must pass something that is a supertype of T to push, so how dare you push a String onto a stack that is supposed to hold type Any? Apparently, I was not alone in being dubious: in the comment area you can see people raising this same question and even harvesting a few upvotes.
This question perplexed me for quite a while, and after figuring out the rationale behind the scenes I couldn't help laughing. In fact, there are two things tangled together here. First, every time you invoke push, you are essentially creating an anonymous subclass of Stack, and since Stack is covariant in T, you cannot let T appear in the argument list of Stack's member methods; this restriction is what upholds the Liskov substitution principle. Here is an example to explain the Liskov substitution principle. Suppose you have a class
abstract class Automaker[-T, +R] {
  // build a vehicle from the material provided
  def build(material: T): R
}
and you create a class like this
class GermanAutomaker[Steel, Volkswagon] extends Automaker[Steel, Volkswagon] {
  def build(material: Steel): Volkswagon = ???
}
That's perfectly fine! Now you want to define a subclass of GermanAutomaker, say FancyGermanAutomaker. Before you do that, take a moment to think about what people would expect from FancyGermanAutomaker. Yes, complete substitution: wherever people need a GermanAutomaker, a FancyGermanAutomaker will work just as well. There are two implications here. First, if people ask you to build something, they are at the very least expecting a Volkswagon as output, so you must deliver a Volkswagon, e.g. a Jetta or a Passat, but never a BMW. Second, if people feed you steel as the building material, you must be able to accept it. Of course, if you are not picky you may accept a more general material such as metal; that's fine, but never complain "I need stainless steel!" That being said, your FancyGermanAutomaker could look something like this
class FancyGermanAutomaker[Metal, Passat] extends GermanAutomaker[Steel, Volkswagon] {
  def build(material: Metal): Passat = ???
}
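Taken literally, the two snippets above use Steel, Volkswagon, Metal and Passat as type-parameter names rather than real classes, so the FancyGermanAutomaker one will not compile as written (and Scala would not let an override change a parameter type anyway). What actually enforces the principle is the [-T, +R] annotations on Automaker. Here is a compilable sketch of the same idea, with hypothetical concrete classes standing in for the materials and the cars:

class Metal
class Steel extends Metal
class Volkswagon
class Passat extends Volkswagon

// Same Automaker as above, repeated so the sketch is self-contained.
abstract class Automaker[-T, +R] {
  def build(material: T): R
}

class GermanAutomaker extends Automaker[Steel, Volkswagon] {
  def build(material: Steel): Volkswagon = new Volkswagon
}

// Accepts a broader input (Metal) and promises a narrower output (Passat),
// so by -T and +R it is a subtype of Automaker[Steel, Volkswagon].
class FancyGermanAutomaker extends Automaker[Metal, Passat] {
  def build(material: Metal): Passat = new Passat
}

object AutomakerDemo extends App {
  // Substitution in action: wherever an Automaker[Steel, Volkswagon] is
  // expected, a FancyGermanAutomaker works just as well.
  val maker: Automaker[Steel, Volkswagon] = new FancyGermanAutomaker
  println(maker.build(new Steel))
}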
Translating the Liskov substitution principle into plain English would be
In order to substitute for another class, you must be able to consume a broader range of inputs and produce a stricter range of outputs.
Like back in those miserable slavery days, a slave owner would only be interested in replacing his current slave with a new one if the new one were less picky about food and more deft at making gadgets. Going back to the aforementioned example, this explains why we need to write def push[S >: T](elem: S): Stack[S]: you are defining a subclass of Stack, and Stack is covariant in T, so you cannot use T in the method argument list. You can only use another type S and explicitly point out that S is a supertype of T.
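To see the restriction itself, here is a hypothetical illustration (not from the tour page) of the version the compiler rejects next to the version it accepts:

// Rejected: using the covariant T directly as a parameter type triggers
// the error "covariant type T occurs in contravariant position".
// class BadStack[+T] {
//   def push(elem: T): BadStack[T] = ???
// }

// Accepted: introduce a fresh type S with a lower bound, as the tour example does.
class GoodStack[+T] {
  def push[S >: T](elem: S): GoodStack[S] = new GoodStack[S]
}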
Finally, we come to the question of why we can do s = s.push(7). Is Int a superclass of Any? Surely it is not, but it does not have to be, because there is no implicit conversion involved here, only type inference. The compiler has to pick a type argument S that satisfies the lower bound S >: T, where T is Any for our Stack[Any], and to which the argument 7 conforms. The only candidate is S = Any, and since Int is a subtype of Any, the literal 7 is simply passed along as an Any and the push completes, returning another Stack[Any].
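To see the inference concretely, here is a small sketch reusing the Stack class from the tour example above (the object name PushInferenceDemo is mine):

object PushInferenceDemo extends App {
  var s: Stack[Any] = new Stack().push("hello")
  // The compiler infers S = Any for push(7); writing the type argument out
  // by hand shows there is no conversion involved: an Int is already an Any.
  s = s.push[Any](7)
  println(s)
}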
Principle #1 in coding
The following has been proven over and over again in software engineering, a.k.a. Principle #1 in coding
The peculiarity of a bug is inversely proportional to its stupidity
Just bumped into two stupid things this afternoon. Let’s start with the first one. Pay attention to the following snippet of Scala code
val r: Runnable = new Runnable {
  override def run(): Unit = {
    Thread.sleep(1000)
    throw new Exception("I choose to die!")
  }
}
new Thread(r).start()
while (true) {
  Thread.sleep(1000)
  Console.println("Seems everything is OK!")
}
Well, the purpose of the above code is quite obvious, since it is made to stand out: to show that the parent thread will not be affected by an exception thrown from its child thread. But this is very easily overlooked in a large, complex program (yes, you guessed it, I mean services) where threads are spawned behind the scenes and crash without leaving any trace, and you are left wondering why your program is running but not doing the things it is supposed to do. Even more annoying: when you submit a task to an executor pool for periodic execution and at some point the task throws an exception, the executor pool stops all further executions without even a warning.
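The executor-pool behavior is easy to reproduce; here is a small sketch (the name SilentDeathDemo is mine, not from any real service) using java.util.concurrent's ScheduledExecutorService:

import java.util.concurrent.{Executors, TimeUnit}

object SilentDeathDemo extends App {
  val pool = Executors.newScheduledThreadPool(1)
  var count = 0
  val task = new Runnable {
    override def run(): Unit = {
      count += 1
      Console.println(s"run #$count")
      // After this throw, scheduleAtFixedRate suppresses all further runs;
      // nothing is printed unless you inspect the returned ScheduledFuture.
      if (count == 3) throw new RuntimeException("I choose to die!")
    }
  }
  pool.scheduleAtFixedRate(task, 0, 500, TimeUnit.MILLISECONDS)
  Thread.sleep(3000) // "run #4" never appears
  pool.shutdown()
}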
The second thing is about the following iterator interface
trait Iterator[A] {
  def hasNext: Boolean
  def next: A
}
Can't be more classic. We all know that hasNext is there to guard the internal boundary, making sure the caller does not get null objects by blindly calling next. The caveat is: are you always required to call hasNext before calling next? I initially did not think so, but my colleague had a different idea, meaning that even if you get a freshly created iterator with lots of meaningful objects underneath, it hands you a null if you don't call hasNext first.
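I cannot speak for every library, but the behavior my colleague described usually comes from implementations where hasNext does the actual fetching. Here is a hypothetical sketch against the trait above (BufferedLineIterator is my own name, not any particular library's class):

// hasNext performs the real work of pulling the next element into a buffer;
// skipping it means next finds nothing buffered and hands back null.
class BufferedLineIterator(private var lines: List[String]) extends Iterator[String] {
  private var buffered: String = null

  def hasNext: Boolean = {
    if (buffered == null && lines.nonEmpty) {
      buffered = lines.head
      lines = lines.tail
    }
    buffered != null
  }

  def next: String = {
    val result = buffered // still null if hasNext was never called
    buffered = null
    result
  }
}

object IteratorDemo extends App {
  println(new BufferedLineIterator(List("a", "b")).next) // null: skipped hasNext
  val it = new BufferedLineIterator(List("a", "b"))
  if (it.hasNext) println(it.next)                       // prints "a"
}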
Programming
Programming, I guess, is just like many other kinds of craftsmanship: it needs abundant guidance and tutoring. In today's world there is an enormous number of excellent open source projects out there, and you can download them in just a few clicks. Theoretically, there are no hidden secret recipes and you can learn everything by studying the code, but very few people become experts this way, probably just as no one becomes a surgeon by doing post-mortems on his/her own, no matter how many corpses are available to him/her.
Programming, given its engineering nature, is about finding the quickest, simplest and most efficient way to solve your problem. It is difficult to master some techniques if you don't make mistakes and learn from failures, but more often than not, you would want somebody to tell you the correct way to do things, simply because life is too short to repeat mistakes. When some engineering catastrophe happens, it is so much better to instantly recall that somebody once told you about a possible cause than to guess and troubleshoot without any clue for days. This is why people always emphasize "platform", "team" and "projects" when giving advice on choosing a software engineering career. Work hours do not translate equally into work experience in different environments. In a reputable corporation, you see projects that have proved themselves by sustaining market pressure, so you know the architecture and implementation, especially after years of iterative development, are well thought out and well designed. You see systems that handle millions of queries per second and learn what special treatment and care such a system needs. When you are asked to improve it, you first need to figure out all the internal wiring, how components interact with one another, and what functionality that snippet of fancy-looking code brings. Achieving this level of proficiency is impossible by sitting at home and studying alone. If you don't have smart, expert-level team members to answer your questions, you would be stuck forever; for real practical problems, chances are very slim that you could find answers on Stack Overflow.
Hardware Engineer vs. Software Engineer
Having worked in the software industry for quite some time, I still occasionally miss those days spent working as a hardware engineer, or more specifically, an integrated circuit designer. In case you wonder what the difference between the two kinds of work is, I hereby explain in visual form how they differ substantially, even though to a layman's eyes they look quite similar, because both require you to interact with software and both are categorized as the IT industry.
First off, here is the graphical interface of the software you use on a daily basis as a circuit designer. Old-fashioned, if not ugly, because the manufacturers of such software don't give a damn. You have to use it no matter how user-hostile it is, because you simply don't have any choice.
You also do programming as a digital circuit designer, but in a very low-level language called Verilog.
Your code will get "compiled", but not compiled in the sense software engineers understand. Literally, your code is translated into a circuit schematic like the following.
Then comes the real "compilation": turning your abstract design into something physical. This is an offsite process that can only happen in a factory called a fab. It is typically hugely expensive and one-shot (loosely speaking). Any bugs you let slip into this phase are engineering catastrophes.
The "compilation" process usually takes quite some time, because there are lots of procedures that need to be applied. The products are tiny little black chips, as shown below (note that the chips are already mounted on green PCBs).
Or they could look a little bit fancier if they are large and used in high-end electronics.
It is a great honor if chips designed by you end up here.
An annoying thing is that although those chips are destined to be sold to downstream electronics manufacturers as parts of their products, samples will be returned to you first for testing, to make sure that they work properly and reliably. Testing can be a very daunting process. You will need something like this. Needless to say, you will be chained to a test room with no windows, assisting the test engineers in locating bugs.
What if there is a design flaw or bug? Can you fix it on the physical chip? More often than not that's not possible: you need to fix it in code and have the factory produce a new physical version. So the company loses money and the managers get mad at you.
Now let's see what I interact with as an (internet industry) software engineer on a daily basis.
First, you use very fancy-looking IDE software to write code in fancy-looking programming languages. IDEs need to be fancy-looking; otherwise they get replaced by competitors very quickly, and so do programming languages.
And then you compile, test and upload your code over the network to "servers", i.e. physically remote computers. They may sit thousands of miles away in places called data centers, like the one below.
You don't give a damn about them, because most of the time they appear to you like the following: you control them by typing commands.
Then, ta-da! Your work is recognized. People see it in the following form, and they go "wow" when you mention that you (partially, of course) made it.
As you can imagine, compared to software development, hardware development is rather "dirty". For hardware design, too many physical procedures are involved; the feedback loop is long; the cost of mistakes is prohibitively high; and people working in the hardware industry tend to be extremely conservative: they do not trust any code written by a newbie, no matter how correct it might look. A semiconductor company needs to spread its workforce widely: sales engineers to boast to customers about how wonderful your chips are, field application engineers to help customers build the end product in case they lack the expertise (almost a sure thing), in-house circuit designers to design the chips day and night, and test engineers to hunt down bugs in physical chips. Though all of them are labeled "engineers", they know very, very little about the work done by engineers of the other types.
With all this being said, I don't mean that I regret ever being a hardware engineer; that work experience has lent me many unique perspectives in my current work. The two types of jobs touch the two opposite poles of the IT industry, which kind of makes me proud.