Linux Sysadmin Evolution

Constructive criticism of the failed Micro$oft academy



Oh well, you know, after actually thinking about it for a minute or two it became obvious to me that this is more of a silly rant than anything constructive. But maybe one day, if I remove some profanity from this page, I will make something constructive out of it.

I have spent quite a lot of evenings over the last few years educating myself about my profession, information technology. So I went to this failed Micro$oft academy (some local community college), and here is what I learned there. Basically, the quality of that 'factory'-like education (it is about quantity, not quality) really sucks.

There are roughly 5 main aspects of IT that you can learn there, but they are all bastardized. The stuff being taught is such Micro$oft-biased bullshit and nothing of what you really should be learning. (In the end I decided it makes more sense to finish the Linux Foundation System Administration online course than to spend even more of my time where they don't even know what they are talking about.) For example, I have already taken Introduction to Linux and it was actually fun :)

These are the 5 main aspects of IT that I think are important and at least worth mentioning in this context:
1 - Computer Architecture and history of its development / evolution. (32bit PC only vs 16bit era)
2 - Operating Systems (Scheduling vs FHS)
3 - Databases (Relations as arrows between tables vs relations as sets of tuples over data types, not some other nonsense)
4 - Networking (OSI vs TCP/IP)
5 - Programming/Scripting (OOP fanatical madness vs The Bash/Shell and Open Source Scripting revolution)

Now let's look at each of them separately:

1 - Computer Architecture and history of its development / evolution. (32bit PC only vs 16bit era)

The first step in learning something about computers usually starts with computer architecture. But apparently there exist alternative histories for different people. I have already seen it a few times: people start with punch cards and then jump directly to the ISA bus, the PCI bus and the different generations of Intel and AMD processors. As if the IBM PC were the first computer ever created. As if the legendary 8-bit to 16-bit era switch never happened. As if the Commodore Amiga were not one of a kind. As if it had not used the legendary Motorola 68k microprocessor, whose design owes a lot to DEC's PDP minicomputers. As if Linux had never been ported to the 68k, the port that helped establish it as one of the most portable operating systems ever.

History of Computers/The Rise of the Microcomputer

How Computers Work - Code.org

Steve and Bill bow before C64

2 - Operating Systems (Scheduling vs FHS)

You see, in another lesson we started studying Operating Systems. My favorite subject. And there we go: we started drawing round-robin-like scheduling schemas on a sheet of paper with a pencil. Really? Why the freaking hell would you start telling me about scheduling algorithms and not even mention the FHS, the very first thing you should be learning about operating systems? Read Andrew Tanenbaum's Modern Operating Systems, which even Linus Torvalds endorses. Here is what he writes:

The amount of space devoted to some of these topics is different than in some other books, however, reflecting my belief that students should learn about concepts that are of practical value in real systems, rather than those that are just of theoretical interest. For example, CPU scheduling is worth a section, not a whole chapter. Many complicated scheduling algorithms have been proposed and analyzed in the literature, but most real systems just use some kind of simple priority or round robin scheme.
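To see how little there is to a basic round-robin scheme, here is a tiny sketch of it (my own example in TypeScript, with made-up task names and time quantum):

// Minimal round-robin sketch: each task gets at most `quantum` units of CPU,
// then goes to the back of the queue until it has nothing left to do.
interface Task { name: string; remaining: number; }

function roundRobin(tasks: Task[], quantum: number): string[] {
  const queue = [...tasks];
  const trace: string[] = [];
  while (queue.length > 0) {
    const task = queue.shift()!;
    const slice = Math.min(quantum, task.remaining);
    task.remaining -= slice;
    trace.push(task.name + " runs for " + slice);
    if (task.remaining > 0) queue.push(task); // not finished: back of the line
  }
  return trace;
}

console.log(roundRobin([{ name: "A", remaining: 5 }, { name: "B", remaining: 2 }], 3));
// [ 'A runs for 3', 'B runs for 2', 'A runs for 2' ]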

Happily, there are some local Linux gurus who write introductory books on Linux, and it's even available in HTML form online.


And don't even get me started on how one 'security specialist' started telling us that we should first back up our datacenter in case we had decided to defragment it. I'm sorry, but that is such bullshit; it is wrong on so many levels. Maybe you will defragment ZFS too?.. How about copy-on-write filesystem snapshots, ZFS, Btrfs?
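For the idea behind copy-on-write snapshots, here is a toy model (my own sketch in TypeScript; real ZFS/Btrfs snapshots work on disk blocks and metadata trees, not JavaScript maps, but the sharing principle is the same):

// A snapshot only copies the file-to-block map. The blocks themselves are shared
// until the live filesystem overwrites a file, at which point a new block is
// written and only the live map is updated; the snapshot keeps seeing the old block.
type BlockId = number;

class ToyCowFs {
  private blocks = new Map<BlockId, string>();
  private live = new Map<string, BlockId>();
  private nextId: BlockId = 0;

  write(file: string, data: string): void {
    const id = this.nextId++;          // always allocate a fresh block
    this.blocks.set(id, data);
    this.live.set(file, id);           // the old block stays around for snapshots
  }

  snapshot(): Map<string, BlockId> {
    return new Map(this.live);         // cheap: copies block references only
  }

  read(file: string, view: Map<string, BlockId> = this.live): string | undefined {
    const id = view.get(file);
    return id === undefined ? undefined : this.blocks.get(id);
  }
}

const fs = new ToyCowFs();
fs.write("report.txt", "version 1");
const snap = fs.snapshot();
fs.write("report.txt", "version 2");
console.log(fs.read("report.txt"));        // "version 2" (live view)
console.log(fs.read("report.txt", snap));  // "version 1" (snapshot still intact)

Taking the snapshot is nearly free because nothing is copied up front.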

On a slightly more serious note, I've heard this should be a good book on operating systems: Operating Systems: Three Easy Pieces


3 - Databases (Relations as arrows between tables vs relations as sets of tuples over data types, not some other nonsense)

Here we go again. When you start reading the right books instead of listening to fools, you discover that a relation in the database world has nothing to do with arrows between tables and everything to do with types of data. Just go and take a look at C.J. Date's Database In Depth, page 46, where he explains what a relation is:

I'll leave it as an exercise to interpret the suppliers relation in terms of the foregoing definition. However, I will at least explain why we call such things relations. Basically, each tuple in a relation represents an n-ary relationship, in the ordinary natural-language sense, among a set of n values (one value for each tuple attribute), and the full set of tuples in a given relation represents the full set of such relationships that happen to exist at some given time and, mathematically speaking, that's a relation. Thus, the "explanation" often heard, to the effect that the relational model is so called because it lets us "relate one table to another," though accurate in a kind of secondary sense, really misses the basic point. The relational model is so called because it deals with certain abstractions that we can think of as "tables" but are known, formally, as relations in mathematics.
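To make that concrete, here is a rough sketch of the suppliers relation in that spirit (my own illustration in TypeScript; the attribute names and sample values follow the classic suppliers example):

// A tuple assigns one value to each typed attribute...
interface SupplierTuple {
  sno: string;     // supplier number
  name: string;
  status: number;
  city: string;
}

// ...and the relation is simply the set of such tuples that hold at a given time.
// Each tuple states one 4-ary relationship: supplier `sno` is named `name`,
// has this `status`, and is located in `city`.
const suppliers: Set<SupplierTuple> = new Set([
  { sno: "S1", name: "Smith", status: 20, city: "London" },
  { sno: "S2", name: "Jones", status: 10, city: "Paris" },
]);

// No "arrows between tables" anywhere: foreign keys are a separate, secondary idea
// layered on top of relations like this one.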
Discovering JavaScript Object Notation with Douglas Crockford

(Diagram: PageViews and UniqueVisitors, each tracking a url and a count of visits.)


4 - Networking (OSI vs TCP/IP)
OSI vs TCP/IP

Comparison with TCP/IP model

The design of protocols in the TCP/IP model of the Internet does not concern itself with strict hierarchical encapsulation and layering.[26] RFC 3439 contains a section entitled "Layering considered harmful".[27] TCP/IP does recognize four broad layers of functionality which are derived from the operating scope of their contained protocols: the scope of the software application; the end-to-end transport connection; the internetworking range; and the scope of the direct links to other nodes on the local network.[28]

Despite using a different concept for layering than the OSI model, these layers are often compared with the OSI layering scheme in the following way:

- The TCP/IP application layer corresponds roughly to the OSI application, presentation and session layers (layers 7-5).
- The TCP/IP transport layer corresponds to the OSI transport layer (layer 4).
- The TCP/IP internet layer corresponds to the OSI network layer (layer 3).
- The TCP/IP link layer covers the OSI data link and physical layers (layers 2 and 1).

These comparisons are based on the original seven-layer protocol model as defined in ISO 7498, rather than refinements in such things as the internal organization of the network layer document.

The presumably strict layering of the OSI model as it is usually described does not present contradictions in TCP/IP, as it is permissible that protocol usage does not follow the hierarchy implied in a layered model. Such examples exist in some routing protocols (e.g., OSPF), or in the description of tunneling protocols, which provide a link layer for an application, although the tunnel host protocol might well be a transport or even an application-layer protocol in its own right.

Some Red Hat girl has been saying that the Top-Down networking book is supposed to be good, but I seem to think that this one should be it: High Performance Browser Networking

5 - Programming/Scripting (OOP fanatical madness vs The Bash/Shell and Open Source Scripting revolution)
OOP is to writing a program, what going through airport security is to flying

OOP Criticism

The OOP paradigm has been criticised for a number of reasons, including not meeting its stated goals of reusability and modularity,[37] [38] and for overemphasizing one aspect of software design and modeling (data/objects) at the expense of other important aspects (computation/algorithms).[39] [40]

Luca Cardelli has claimed that OOP code is "intrinsically less efficient" than procedural code, that OOP can take longer to compile, and that OOP languages have "extremely poor modularity properties with respect to class extension and modification", and tend to be extremely complex.[37] The latter point is reiterated by Joe Armstrong, the principal inventor of Erlang, who is quoted as saying: [38]

The problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.

A study by Potok et al. has shown no significant difference in productivity between OOP and procedural approaches. [41]

Christopher J. Date stated that critical comparison of OOP to other technologies, relational in particular, is difficult because of lack of an agreed-upon and rigorous definition of OOP;[42] however, Date and Darwen have proposed a theoretical foundation on OOP that uses OOP as a kind of customizable type system to support RDBMS. [43]

In an article Lawrence Krubner claimed that compared to other languages (LISP dialects, functional languages, etc.) OOP languages have no unique strengths, and inflict a heavy burden of unneeded complexity.[44]

Alexander Stepanov compares object orientation unfavourably to generic programming: [39]

I find OOP technically unsound. It attempts to decompose the world in terms of interfaces that vary on a single type. To deal with the real problems you need multisorted algebras — families of interfaces that span multiple types. I find OOP philosophically unsound. It claims that everything is an object. Even if it is true it is not very interesting — saying that everything is an object is saying nothing at all.

Paul Graham has suggested that OOP's popularity within large companies is due to "large (and frequently changing) groups of mediocre programmers". According to Graham, the discipline imposed by OOP prevents any one programmer from "doing too much damage". [45]

Steve Yegge noted that, as opposed to functional programming: [46]

Object Oriented Programming puts the Nouns first and foremost. Why would you go to such lengths to put one part of speech on a pedestal? Why should one kind of concept take precedence over another? It's not as if OOP has suddenly made verbs less important in the way we actually think. It's a strangely skewed perspective.

Rich Hickey, creator of Clojure, described object systems as overly simplistic models of the real world. He emphasized the inability of OOP to model time properly, which is getting increasingly problematic as software systems become more concurrent. [40]

Eric S. Raymond, a Unix programmer and open-source software advocate, has been critical of claims that present object-oriented programming as the "One True Solution", and has written that object-oriented programming languages tend to encourage thickly layered programs that destroy transparency.[47] Raymond compares this unfavourably to the approach taken with Unix and the C programming language.
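To see concretely what Yegge and Stepanov are complaining about, compare a noun-heavy design with a plain function (my own example in TypeScript; the names are invented):

// The "kingdom of nouns" version: a class whose only job is to wrap one verb.
class FileCopier {
  constructor(private readonly source: string, private readonly target: string) {}

  execute(files: Map<string, string>): Map<string, string> {
    const result = new Map(files);
    const data = result.get(this.source);
    if (data !== undefined) result.set(this.target, data);
    return result;
  }
}

// The verb-first version: just a function of its inputs.
function copyFile(files: Map<string, string>, source: string, target: string): Map<string, string> {
  const result = new Map(files);
  const data = result.get(source);
  if (data !== undefined) result.set(target, data);
  return result;
}

// Same behavior, but the second form carries no ceremony around the single action.
const after = copyFile(new Map([["a.txt", "hello"]]), "a.txt", "b.txt");
console.log(after.get("b.txt")); // "hello"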

Functional Thinking: Paradigm Over Syntax

Computer languages history

Computer language history, the prominent books of the time, and other circus. :)

1) The other day I was chatting with a guy of around twenty who is studying computer security, and I wanted to say that maybe using a programming language from the eighties is not the best idea for building a modern application. I wanted to make my point by comparing the hardware that eighties technology was invented and used on with today's hardware. That is how the following timeline came up.
Here you can see, for example, how old the hardware was when these languages appeared: even Java predates most of the machines we use today, and I'm not even talking about C++, which is an entire decade older than that. And yes, only demoscene hackers scream Amiga once in a while. :D

2) Then I wanted to add some books in between, because, you know, I bought maybe a hundred programming books in the last several years. Of course I only half-read half of them, but anyway. Here you can see The UNIX-HATERS Handbook, which can tell you everything you need to know about antiquated C++. A Byte of Python is the best beginner scripting book, and if you want to move on further after that, The Book of Ruby is interesting too.

JavaScript:
The World's Most Misunderstood Programming Language

Douglas Crockford
www.crockford.com

JavaScript, aka Mocha, aka LiveScript, aka JScript, aka ECMAScript, is one of the world's most popular programming languages. Virtually every personal computer in the world has at least one JavaScript interpreter installed on it and in active use. JavaScript's popularity is due entirely to its role as the scripting language of the WWW.

Despite its popularity, few know that JavaScript is a very nice dynamic object-oriented general-purpose programming language. How can this be a secret? Why is this language so misunderstood?

The Name

The Java- prefix suggests that JavaScript is somehow related to Java, that it is a subset or less capable version of Java. It seems that the name was intentionally selected to create confusion, and from confusion comes misunderstanding. JavaScript is not interpreted Java. Java is interpreted Java. JavaScript is a different language.

JavaScript has a syntactic similarity to Java, much as Java has to C. But it is no more a subset of Java than Java is a subset of C. It is better than Java in the applications that Java (fka Oak) was originally intended for.

JavaScript was not developed at Sun Microsystems, the home of Java. JavaScript was developed at Netscape. It was originally called LiveScript, but that name wasn't confusing enough.

The -Script suffix suggests that it is not a real programming language, that a scripting language is less than a programming language. But it is really a matter of specialization. Compared to C, JavaScript trades performance for expressive power and dynamism.

Lisp in C's Clothing

JavaScript's C-like syntax, including curly braces and the clunky for statement, makes it appear to be an ordinary procedural language. This is misleading because JavaScript has more in common with functional languages like Lisp or Scheme than with C or Java. It has arrays instead of lists and objects instead of property lists. Functions are first class. It has closures. You get lambdas without having to balance all those parens.
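A quick aside to illustrate the first-class-functions and closures point (my example, not Crockford's, written in TypeScript, which inherits these traits from JavaScript):

// Functions are values: they can be stored, passed as arguments and returned.
const twice = (f: (x: number) => number) => (x: number) => f(f(x));

// A closure: the returned function keeps access to `count` after makeCounter returns.
function makeCounter() {
  let count = 0;
  return () => ++count;
}

const addTwo = twice((x) => x + 1);
console.log(addTwo(5));          // 7

const next = makeCounter();
console.log(next(), next());     // 1 2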

Typecasting

JavaScript was designed to run in Netscape Navigator. Its success there led to it becoming standard equipment in virtually all web browsers. This has resulted in typecasting. JavaScript is the George Reeves of programming languages. JavaScript is well suited to a large class of non-Web-related applications.

Moving Target

The first versions of JavaScript were quite weak. They lacked exception handling, inner functions, and inheritance. In its present form, it is now a complete object-oriented programming language. But many opinions of the language are based on its immature forms.

The ECMA committee that has stewardship over the language is developing extensions which, while well intentioned, will aggravate one of the language's biggest problems: There are already too many versions. This creates confusion.

Design Errors

No programming language is perfect. JavaScript has its share of design errors, such as the overloading of + to mean both addition and concatenation with type coercion, and the error-prone with statement, which should be avoided. The reserved word policies are much too strict. Semicolon insertion was a huge mistake, as was the notation for literal regular expressions. These mistakes have led to programming errors and have called the design of the language as a whole into question. Fortunately, many of these problems can be mitigated with a good lint program.
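For instance, the + overloading looks like this (my example, not from the article; TypeScript behaves the same as plain JavaScript here):

console.log(1 + 2);        // 3
console.log("1" + 2);      // "12" -- the number is coerced to a string
console.log(1 + 2 + "3");  // "33" -- 1 + 2 is 3 first, then 3 + "3" concatenates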

The design of the language on the whole is quite sound. Surprisingly, the ECMAScript committee does not appear to be interested in correcting these problems. Perhaps they are more interested in making new ones.

Lousy Implementations

Some of the earlier implementations of JavaScript were quite buggy. This reflected badly on the language. Compounding that, those implementations were embedded in horribly buggy web browsers.

Bad Books

Nearly all of the books about JavaScript are quite awful. They contain errors, poor examples, and promote bad practices. Important features of the language are often explained poorly, or left out entirely. I have reviewed dozens of JavaScript books, and I can only recommend one: JavaScript: The Definitive Guide (5th Edition) by David Flanagan. (Attention authors: If you have written a good one, please send me a review copy.)

Substandard Standard

The official specification for the language is published by ECMA. The specification is of extremely poor quality. It is difficult to read and very difficult to understand. This has been a contributor to the Bad Book problem because authors have been unable to use the standard document to improve their own understanding of the language. ECMA and the TC39 committee should be deeply embarrassed.

Amateurs

Most of the people writing in JavaScript are not programmers. They lack the training and discipline to write good programs. JavaScript has so much expressive power that they are able to do useful things in it, anyway. This has given JavaScript a reputation of being strictly for the amateurs, that it is not suitable for professional programming. This is simply not the case.

Object-Oriented

Is JavaScript object-oriented? It has objects which can contain data and methods that act upon that data. Objects can contain other objects. It does not have classes, but it does have constructors which do what classes do, including acting as containers for class variables and methods. It does not have class-oriented inheritance, but it does have prototype-oriented inheritance.

The two main ways of building up object systems are by inheritance (is-a) and by aggregation (has-a). JavaScript does both, but its dynamic nature allows it to excel at aggregation.

Some argue that JavaScript is not truly object oriented because it does not provide information hiding. That is, objects cannot have private variables and private methods: All members are public.

But it turns out that JavaScript objects can have private variables and private methods. (Click here now to find out how.) Of course, few understand this because JavaScript is the world's most misunderstood programming language.

Some argue that JavaScript is not truly object oriented because it does not provide inheritance. But it turns out that JavaScript supports not only classical inheritance, but other code reuse patterns as well.
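The usual trick behind those private members is a closure; a minimal sketch of the general idea (my example, not Crockford's original code):

// `balance` is reachable only through the two methods returned below, so it
// behaves like a private member even though there is no class keyword in sight.
function makeAccount(initial: number) {
  let balance = initial;
  return {
    deposit(amount: number) { balance += amount; },
    getBalance() { return balance; },
  };
}

const account = makeAccount(100);
account.deposit(25);
console.log(account.getBalance()); // 125
// account.balance does not exist: the variable itself is hidden in the closure.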

Copyright 2001 Douglas Crockford. All Rights Reserved Wrrrldwide.

 

 Revenge of the Nerds


May 2002

"We were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp."

- Guy Steele, co-author of the Java spec

What Made Lisp Different

When it was first developed, Lisp embodied nine new ideas. Some of these we now take for granted, others are only seen in more advanced languages, and two are still unique to Lisp. The nine ideas are, in order of their adoption by the mainstream,

  1. Conditionals. A conditional is an if-then-else construct. We take these for granted now, but Fortran I didn't have them. It had only a conditional goto closely based on the underlying machine instruction.

  2. A function type. In Lisp, functions are a data type just like integers or strings. They have a literal representation, can be stored in variables, can be passed as arguments, and so on.

  3. Recursion. Lisp was the first programming language to support it.

  4. Dynamic typing. In Lisp, all variables are effectively pointers. Values are what have types, not variables, and assigning or binding variables means copying pointers, not what they point to.

  5. Garbage-collection.

  6. Programs composed of expressions. Lisp programs are trees of expressions, each of which returns a value. This is in contrast to Fortran and most succeeding languages, which distinguish between expressions and statements.

    It was natural to have this distinction in Fortran I because you could not nest statements. And so while you needed expressions for math to work, there was no point in making anything else return a value, because there could not be anything waiting for it.

    This limitation went away with the arrival of block-structured languages, but by then it was too late. The distinction between expressions and statements was entrenched. It spread from Fortran into Algol and then to both their descendants.

  7. A symbol type. Symbols are effectively pointers to strings stored in a hash table. So you can test equality by comparing a pointer, instead of comparing each character.

  8. A notation for code using trees of symbols and constants.

  9. The whole language there all the time. There is no real distinction between read-time, compile-time, and runtime. You can compile or run code while reading, read or run code while compiling, and read or compile code at runtime.

    Running code at read-time lets users reprogram Lisp's syntax; running code at compile-time is the basis of macros; compiling at runtime is the basis of Lisp's use as an extension language in programs like Emacs; and reading at runtime enables programs to communicate using s-expressions, an idea recently reinvented as XML.
When Lisp first appeared, these ideas were far removed from ordinary programming practice, which was dictated largely by the hardware available in the late 1950s. Over time, the default language, embodied in a succession of popular languages, has gradually evolved toward Lisp. Ideas 1-5 are now widespread. Number 6 is starting to appear in the mainstream. Python has a form of 7, though there doesn't seem to be any syntax for it.

As for number 8, this may be the most interesting of the lot. Ideas 8 and 9 only became part of Lisp by accident, because Steve Russell implemented something McCarthy had never intended to be implemented. And yet these ideas turn out to be responsible for both Lisp's strange appearance and its most distinctive features. Lisp looks strange not so much because it has a strange syntax as because it has no syntax; you express programs directly in the parse trees that get built behind the scenes when other languages are parsed, and these trees are made of lists, which are Lisp data structures.

Expressing the language in its own data structures turns out to be a very powerful feature. Ideas 8 and 9 together mean that you can write programs that write programs. That may sound like a bizarre idea, but it's an everyday thing in Lisp. The most common way to do it is with something called a macro.

The term "macro" does not mean in Lisp what it means in other languages. A Lisp macro can be anything from an abbreviation to a compiler for a new language. If you want to really understand Lisp, or just expand your programming horizons, I would learn more about macros.

Macros (in the Lisp sense) are still, as far as I know, unique to Lisp. This is partly because in order to have macros you probably have to make your language look as strange as Lisp. It may also be because if you do add that final increment of power, you can no longer claim to have invented a new language, but only a new dialect of Lisp.

I mention this mostly as a joke, but it is quite true. If you define a language that has car, cdr, cons, quote, cond, atom, eq, and a notation for functions expressed as lists, then you can build all the rest of Lisp out of it. That is in fact the defining quality of Lisp: it was in order to make this so that McCarthy gave Lisp the shape it has.
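To give a small taste of the "programs are trees of expressions, and those trees are ordinary data" idea outside Lisp, here is a toy expression evaluator (my sketch, not Graham's, in TypeScript; real Lisp macros go much further because the same trees can be transformed before evaluation):

type Expr = number | [string, Expr, Expr];

function evalExpr(e: Expr): number {
  if (typeof e === "number") return e;  // a leaf is just a constant
  const [op, left, right] = e;
  const a = evalExpr(left);
  const b = evalExpr(right);
  switch (op) {
    case "+": return a + b;
    case "*": return a * b;
    default: throw new Error("unknown operator: " + op);
  }
}

// The "program" below is literally a tree built out of plain data:
console.log(evalExpr(["+", 1, ["*", 2, 3]])); // 7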
 
http://www.tbray.org/ongoing/When/200x/2009/09/30/C-dot-next-laundry-list

Tim Bray: "Functional programming .. Erlang is a decent gateway drug and Haskell is definitely the hard stuff 

There are a lot of ingredients that might or might not go into the winning formula that brings concurrent programming to the mainstream. This is a very brief run-through of as many as I can think of.

[This is part of the Concur.next series. At the moment, I think the next few pieces are going to be discussions of some or maybe all of the items in the list. If you’ve published something particularly gripping about one of them or another, shoot me a link.]

I’ll try to update this piece lots; I’m sure people will write in to disagree with my characterization and to argue for the addition or removal of the items from/to the list. Since this is an enumeration rather than an opinion piece, I have quite a bit of hope that it might come to represent community consensus.

In this discussion, I frequently refer to the HECS (Haskell, Erlang, Clojure, Scala) languages. I am not claiming that one of these is the winner or that there aren’t other worthwhile contenders. It’s just that they keep floating to the top and (I think) represent an instructive range of ways to aggregate the features in this laundry list.

Functional Programming · Hereinafter FP, please. The Wikipedia explanation is fairly impenetrable. Erlang is a decent gateway drug and Haskell is definitely the hard stuff. ¶

The proportion of a program which is composed of entirely side-effect-free function calls should by definition be arbitrarily parallelizable. That’s the theory, anyhow. Essentially every modern programming system that claims to address concurrency provides some FP capabilities.

Immutable Data · If your data objects are immutable, you can operate upon them freely and concurrently without fear of corruption or the need for locking. Plus you can use them as interprocess messages without actually copying them, or ship them between physical machines and know that the version you left behind is still current, whatever they do over there. It seems pretty obvious that this is a powerful tool for use in addressing concurrency problems. ¶

There are gradations of immutability. You can have immutable variables and do what feels like changing them if you auto-magically make a timestamped new version while preserving the validity of pointers to the old one. There are all sorts of data-structure tricks you can do, for example an “immutable” array where you append an element, logically producing an entirely new object but in fact re-using the pre-append part of the array.

This is a simple notion but very deep subject.
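A toy version of that append-without-copying trick (my sketch, not Bray's; a real persistent vector is cleverer, this only shows the structural sharing):

interface Node<T> { value: T; prev: Node<T> | null; length: number; }

// The new version holds the new element plus a pointer to the old version;
// nothing is copied, and the old version stays valid and unchanged.
function append<T>(list: Node<T> | null, value: T): Node<T> {
  return { value, prev: list, length: (list ? list.length : 0) + 1 };
}

function toArray<T>(list: Node<T> | null): T[] {
  const out: T[] = [];
  for (let n = list; n !== null; n = n.prev) out.unshift(n.value);
  return out;
}

const v1 = append(append(null, "a"), "b");
const v2 = append(v1, "c");   // shares every node of v1
console.log(toArray(v1));     // [ 'a', 'b' ]
console.log(toArray(v2));     // [ 'a', 'b', 'c' ]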

Processes and Actors · It’s axiomatic that we’re not going to expose threads in the operating-system sense. But it seems that programmers ought to be allowed to see some notion of a sequence of operations that proceeds in parallel with other sequences of operations. ¶

Erlang calls them processes, Scala calls them actors (there’s a big chunk of Computer-science theory at work there), Haskell doesn’t call them anything but the documentation bandies the term “Thread” about.

The way in which this is exposed to the programmer seems to me like a crucial defining characteristic in building a strategy to attack the concurrency problem.

In the context of this series, when I say “process” I’m using it more or less in the Erlang sense, as a sequence of execution with its own (logical) call stack and heap and so on, without asserting that its implementation is based on an OS process or thread, or that it is or isn’t an Actor in the formal sense.

Message Passing · If you’re going to expose processes and avoid global data, you need a way for them to communicate. There is a lot of semantic variation in the way different platforms do interprocess communication, but a superficial look suggests that some shared patterns are emerging; Scala, for example, consciously echoes Erlang in its approach. ¶

Typing · Here we have the single greatest religious divide among programmers, and it cuts right across this space. Thank goodness none of the candidates are weakly typed a la Perl. Erlang has unsurprising dynamic typing but no objects. Scala has inferred static typing applied to Java-flavored classes/objects/methods. ¶

Haskell has stronger and more elaborate (static) typing than anything I’ve ever been near; in fact they push a lot of what would normally be considered an application programmer’s work down into the type system. They have things called classes (OK, typeclasses), but they’re a horse of a different color.

It’s not obvious to me that the choice of type system is that important in building the Java of concurrency, but I could easily be wrong on that.

Virtual Machine · Is it an advantage to have your own virtual machine, like Erlang, to be based on another like Clojure and Scala, or just to compile to native code like Haskell? And the JVM in particular brings along a mind-bogglingly huge amount of existing software and libraries you can call out to. Well, except for a large proportion either isn’t optimized for concurrency or just won’t work that way at all. ¶

The right answer here isn’t obvious at all.

Transactional Memory · Since nobody’s ever shipped hardware transactional memory, we’re talking STM here; Clojure in particular makes a big deal of this. ¶

The core idea is sort of like applying ACID database semantics in accessing program variables. The effect is that you can mutate things concurrently where you need to in the context of a transaction; if a collision occurs, the whole transaction is rolled back and inconsistency doesn’t arise.
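Reduced to a toy, single-threaded sketch, the roll-back-and-retry idea looks something like this (my illustration, not Clojure's actual STM, which is far more involved):

interface Ref<T> { value: T; version: number; }

function atomically<T>(ref: Ref<T>, update: (v: T) => T): void {
  for (;;) {
    const seen = ref.version;         // remember which version we started from
    const next = update(ref.value);   // compute against that snapshot
    if (ref.version === seen) {       // nobody committed in the meantime
      ref.value = next;
      ref.version += 1;
      return;
    }
    // conflict: discard `next` and retry the whole transaction
  }
}

const counter: Ref<number> = { value: 0, version: 0 };
atomically(counter, (n) => n + 1);
console.log(counter.value); // 1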

Tuple Space · The Wikipedia Tuple space article is perfectly OK. I can remember like yesterday back in the Eighties, reading about Linda and thinking this had to be the future. ¶

Maybe so, but it’s sure not the present; I’ve never actually had my hands on a deployed piece of software that relied on a tuple space. And at the moment, I don’t hear anyone claiming this is the way to build the Java of concurrency.

Dataflow · But, like tuple spaces, Dataflow is an idea that looks like it ought to be real useful in building concurrent systems. Concretely, you ought to be able to farm out the recalculation of a big complex spreadsheet to a bunch of processors until you get to the last-step sum computations. ¶

Like tuple spaces, I don’t see anyone trying to build the concurrent future with dataflow techniques.

Reliability · This is only here because of Erlang, which claims to have two design goals: First, to enable concurrent computation, and second, to make such computation reliable in the face of software and hardware errors. In order to accomplish this in a process-based paradigm, it wants to handle all errors simply by blowing up the process where they happened; then there’s a really slick system of local and remote process monitors that should enable you to keep your system on the air in the face of a whole lot of different classes of problems. ¶

The thing is, once you’ve worked with Erlang a little bit, the notion of trying to deliver any concurrent system without those sorts of monitors and failovers and so on begins to be seriously worrying. My personal bet is that whatever we end up with is going to have to have a good story to tell in this space.

Language or Library? · This is a big one. Do we need a whole new platform? A new language on an existing VM? Or maybe even just a set of libraries you can use from existing languages? ¶

You can do massively parallel computing right now today, from FORTRAN forsooth, using MPI. There are more modern approaches including MapReduce and Hadoop.

There are Actor libraries that I know of for Java, Ruby, and probably essentially every other modern language. And nobody’s forcing you to make your variables mutable or to share data between threads. Or even to use threads; in my own Wide Finder project (I, II), there was no real evidence that thread-level concurrency outperformed processes.

These days, given the choice, I prefer to code in either Ruby or Python. I’d love to be able to keep as much of that lightweight dynamic enabling goodness as possible and still drink from the fountain of concurrency.

Distribution · Which is to say, can your concurrent application code run across multiple physically separated computer systems, as opposed to just the cores on one system? If so, is it automatic or under the programmer’s control? Erlang makes this explicit, as a reliability mechanism, but for problems that get really large in scale, this could easily become a gating limit on performance. ¶

What’s Missing? · From this laundry list, I mean. Or what’s really wrong and misleading. Or should any of these be cast into the outer darkness? ¶

 

https://fsharpforfunandprofit.com/posts/fsharp-is-the-best-enterprise-language/

Why F# is the best enterprise language

This post is part of the 2018 F# Advent Calendar. Check out all the other great posts there! And special thanks to Sergey Tihon for organizing this.

“Why F# is the best enterprise language” is not meant to be a clickbait title – it is my sincere opinion, and in this post I will attempt to justify it. If you stick around to the end, I hope that you will agree, or at least be a little bit persuaded. Read on!

Just to be clear, I’m only going to be talking about so-called “enterprise development”. I’m not claiming that F# is the best for systems programming, or games development, or hobby projects. What’s appropriate for one kind of development objective may well be inappropriate for another. Enterprise development has its own constraints and demands, which I think F# is particularly well suited for.

I’ll start with an important caveat, which is that I don’t think that the success or failure of a project depends on using a particular programming language. Much more critical are things like good communication, clear requirements, caring about the user experience, realistic expectations, and so on. If the programming language really was that important, then there would be no successful companies using PHP, Visual Basic, or JavaScript, and all jobs would require Haskell and Lisp skills!

Nevertheless, having said that, I do think that the choice of programming language has an effect on productivity, maintainability, and stability, and that’s what I’m going to talk about in this post.

Of course, it’s easy to prove an assertion like “F# is the best enterprise language” – all I need to do is choose from one of the numerous longitudinal studies on enterprise software projects; or failing that, one of the many controlled experiments which involve large numbers of experienced developers.

Hahaha. Of course, there is no such thing. For a trillion dollar industry, it’s shocking that we generally make our decisions using not much more than anecdotes, outdated myths and gut feelings.**

** Yes I know about neverworkintheory.org and evidencebasedse.com but I stand by my point.

So I don’t have any hard evidence, alas, but I will at least try to present a well reasoned argument! I’ll present my premises and then my conclusion. If you agree with my premises, I hope that you will at least take my conclusion seriously.

The characteristics of Enterprise Development

So what are some the key characteristics of “enterprise” development?

Software development is not the focus of the business

In an “enterprise”, software is generally treated as a tool; a cost center rather than a profit center. There is no pressure to have the latest technology, or to hire the best developers, or (sadly) to invest in training.

This means that being “enterprise” has nothing to do with the size of the business. By my definition, Google does not do enterprise development, while a 50-person B2B company probably does.

This also means that companies that develop in-house software to gain a competitive advantage, like FinTech companies, don’t count as “enterprise” either.

Projects are business-centric rather than technology-centric

The goal of enterprise development is generally to support business workflows rather than to implement a specific set of technical requirements. At the most basic level, typical enterprise software just moves data around and transforms it. This sounds trivial and is often looked down on as not “real programming”.

But business workflows involve humans, and any time humans are involved you will always have complexity. Implementing an efficient map/reduce algorithm or optimizing a graphics shader might be tricky, but possibly not as tricky as some business workflows! This 30-year old quote about COBOL sums it up well:

The bias against the problem domain is stated explicitly in [a] programming language textbook, which says that COBOL has “an orientation toward business data processing . . . in which the problems are . . . relatively simple algorithms coupled with high-volume input-output (e.g. computing the payroll for a large organization).”

Anyone who has written a serious payroll program would hardly characterize it as “relatively simple.” I believe that computer scientists have simply not been exposed to the complexity of many business data processing tasks. Computer scientists may also find it difficult to provide elegant theories for the annoying and pervasive complexities of many realistic data processing applications and therefore reject them.

Ben Shneiderman, 1985

Sadly, enterprise development has never been sexy.

Enterprise projects often have a long life

It’s not unique to enterprise development, of course, but it’s common that enterprise software projects live a long time (if they survive childhood). Many projects last five years or more – I am personally familiar with one that started in the 1970’s – and over the lifetime of a project, many developers will be involved. This has a couple of corollaries:

There is a very interesting talk by Robert Smallshire in which he simulates code generation for different size teams over different time periods. So, for example, after five years, the current team will generally only have contributed 37% of the code.

For a bigger team over a longer period, the contribution % can drop even lower.

Yes, these are simulations, but they ring true in my experience.

Enterprise project managers have a low tolerance for risk

As a result of all these factors, project managers tend to be risk averse and are rarely early adopters – why break something that’s already working?

As the saying goes “process is the scar tissue of organizations”. Stability is more important than efficiency.

However, new environmental conditions occasionally arise which force change on even the most conservative businesses. For example, the newfangled “intranet” and “internet” in the 1990’s scared a lot of people and had a lot to do with the rise of Java and VisualBasic/ActiveX. Here’s what the hype looked like back then:

Less than 10 years after those articles were published, the dominant enterprise programming languages had indeed changed to Java and C#.

Thanks to mobile apps and the rise of the cloud, I think we’re in the middle of another era like this, where enterprises are willing to risk new technologies so as not to get left behind. The challenge of course, is how to adopt new technologies without major disruption.

What is important when choosing an enterprise language?

So how does all this affect choosing a programming language and its associated ecosystem, from a project manager’s point of view?

It should be enterprise-friendly

A project manager is not just choosing a programming language, they’re also committing to the ecosystem around the language, and the future support for that ecosystem. As noted above, enterprise development is not about being on the bleeding edge. Rather, if the ecosystem has support from an enterprise-friendly company like Microsoft, Oracle or Google, that is a big plus.

Also, from the enterprise manager’s point of view, it’s critical that the language and its ecosystem have deep support for enterprise databases (Oracle, Sql Server), enterprise web servers, enterprise authentication (AD, LDAP), enterprise data formats (XML) etc. It’s unlikely that support for the latest hotness will be their primary concern.

It should be future-proof

Given the longevity of enterprise projects, we want to make sure that the ecosystem and tooling will still be around and supported in, say, 10 years. If and when new platforms come along, you shouldn’t have to throw away all your code.

It should be flexible

And if you’re going to commit to an ecosystem, you’d ideally want to use it in as many different situations as possible (e.g. desktop apps, server apps, web apps) and different target platforms (Windows, Mac, Linux, mobile, etc).

It should make maintenance easy

Since the members of the team will probably rotate over the lifetime of the project, and most code will not be written by the current team, the dominant concerns are things like:

Choosing an enterprise language, part 1

With these requirements in place, we can use them to reduce our language choices.

So far, no surprises. We have come up with the usual suspects, Java and C#.

If this was 2008, we’d be done. But it isn’t, and we’re not. In the last decade, there has been an explosion of new languages which are strong contenders to be better enterprise languages than C# and Java. Let’s look at why.

The rise of functional programming

Functional programming is the new hotness right now, but regardless of the hype, most modern programming languages are introducing FP-friendly features that make a big difference to software quality:

If we look at languages which support these features, we end up with the mainstream statically-typed FP languages (Haskell, F#, OCaml) and the more modern FP-influenced languages: Swift, Scala, Kotlin, Rust, TypeScript, etc.

As I said above, the rise of new technologies such as serverless means that enterprises will be willing to switch to these FP-influenced languages if they can provide a competitive advantage (which I think they do) and if the switch can be made with minimal disruption (which depends on the choice of language).

The danger of too much abstraction

Some FP languages (Haskell and Scala in particular) support some features that allow high levels of abstraction. Some people like to quote Dijkstra here:

“The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise” – E.W. Dijkstra

That’s great, but I believe that in the specific context of enterprise development, too much abstraction can cause problems. If used too freely, it requires that all developers working on a project need to have the same understanding of the “new semantic level”, which is a burden on training and employability. All it takes is for one person to have too much fun with category theory in the code, and the code is rendered unmaintainable for everyone else.

That is, just as you can shoot yourself in the foot with low-level features, you can also shoot yourself in the foot with high-level features as well. For an enterprise language, we need to trim the top-end of the language capabilities as well as the bottom-end, and encourage an “only one way to do it” approach as much as possible.**

So I’m going to penalize Haskell and Scala at this point for being too easy to abuse.

** One of the reasons people like Go or Elm as languages is because they are restrictive. There is a standard way of doing things, which in turn means that reading and maintaining someone else’s code is straightforward.

But how much abstraction is too much?

Are generics too advanced? 15 years ago, perhaps. But today it’s clear that it’s a mainstream feature. (The golang designers disagree!)

But how about lambdas? How about monads? I think that most FP concepts are on the verge of being mainstream now, and in ten years time will be commonly accepted, so it’s not unreasonable to have a language that supports them.

For me, in 2018, the “just-right” level of abstraction is that found in ML languages like OCaml and F#. In 10 years time things may be different, and we may be able to adjust the acceptable level of abstraction upwards.

However, I’m not convinced that more abstract, mathematical style programming (a la Idris, Coq) will ever be commonplace in the enterprise, due to the variation in employee skills. Yes, this could be solved with better training, or a certified-software-engineer qualification, but I’m not holding my breath.

Choosing an enterprise language, part 2

If we then filter these newer languages by the “enterprise” criteria above we end up with the FP-influenced languages that support .NET and the JVM, namely:

To summarize the “why not language X” objections again:

What about higher-kinded types? What about type classes? What about GADTs?

Oh dear, none of the three finalists support them right now. I’ll let you judge whether this is a deal-breaker for enterprise development.

Picking a favorite

The three languages left (F#, Kotlin and TypeScript) are all good choices, they’re all open-source, cross platform, and enterprise friendly.

If you’re already using the JVM, then obviously Kotlin provides the best migration path. Similarly, if you’re using Node on the backend, then TypeScript is good (although trusting npm packages might be a problem).

But if you’re doing greenfield development (or if you are already on .NET) I believe that F# has the edge (and this is where I might be a bit biased!)

Of course, Kotlin can do some of these things and TypeScript some of the others, but I think that F# has the most breadth overall.

So there you go, that’s my conclusion! Feel free to disagree in the comments!

By the way, if you’re interested in learning more about F#, check out the rest of the 2018 F# Advent Calendar, or if you like videos, here are some good ones that demonstrate its versatility:


And if you are interested in the functional approach to domain modeling and design, here's my "Domain Modeling Made Functional" book! It's a beginner-friendly introduction that covers Domain Driven Design, modeling with types, and functional programming.



 https://www.christianfindlay.com/blog/immutability-dart-vs-fsharp

Immutability: Dart vs. F#

05 Nov 2022 By Christian Findlay


Immutability is a very important part of Functional Programming. Dart and F# are two excellent modern languages that support immutability and functional programming constructs. However, Don Syme and the team designed F# explicitly for functional programming constructs. It is a “functional-first” language. Immutability is also an important part of Flutter, and there are now F# bindings for Flutter via the Fable compiler. So, this article compares immutability in the two languages and explores the two different approaches. 

Why Immutability?

Immutability facilitates pure functions by disallowing mutable parameters to functions. Pure functions are simpler to test and maintain because they provide certain guarantees. We know that a pure function cannot modify anything about the inputs, there are no side effects, and the result will always be identical given that the inputs are identical. 

Structural equality allows a language to compare two immutable objects based on their contents instead of their object references. For example, these two objects are structurally equal but not referentially equal.

[Screenshot: two objects with identical field values are structurally equal but not referentially equal]

The Flutter community has embraced immutability. Some patterns and libraries make immutability and structural equality a requirement. According to the flutter_bloc documentation:

The selected value must be immutable in order for BlocSelector to accurately determine whether builder should be called again.

So, we need to ask how well the Dart language supports immutability and whether or not there are any caveats or pitfalls we should consider when using immutability. 

Requirements of Immutability

Firstly, we must discuss what a language needs to support immutability properly. This list is not exhaustive but gives you an idea of how the language should behave. The important thing to understand is that immutable types come with a contract. They should not allow you to change anything about the object unless you go out of your way to use a backdoor like reflection.

Compile Time and Runtime Safety

The language should stop changing anything about the object at runtime, but it should also stop you at compile time. The compiler should give you an error if you attempt to modify anything about the object. Immutable types should not have members that mutate the object. 

Immutable Collections

This includes modifying collections. See this article on Dart Immutable Collections. The language needs specialized types for immutable lists. Interfaces are not enough because they do not specify behavior. The type must prevent mutation. Again, runtime safety is not enough. Collections must have compile time safety. An immutable collection with no compile-time safety may be more dangerous than a mutable collection because the compiler won’t stop errors before they appear in your app.

Structural Equality

Immutability and structural equality go hand in hand. If a type is truly immutable, it is possible to accurately compare structural equality between two objects. While this is not a strict requirement for immutability, it comes with immutability. 

However, structural equality requires automation. If the onus is on the programmer to compare all fields on the object, they will make mistakes. The language must automate the comparison somehow.

Recursive Contract

Objects are graphs. They are not flat. For a type to be truly immutable, the field types need to be immutable, and their fields need to be immutable. If a type has a mutable field collection, this breaks the contract. 

How Do F# and Dart Deal With These?

There are many ways for a language to provide these characteristics. The language doesn’t have to bake these things in. We can create tools and frameworks on top, but F# has a concept called record. Records are immutable by default. At this time, Dart has a specification for record types, but we currently handle immutability differently in Dart.

This is an example of a record in F#

type Person =
    { FirstName: string
      LastName: string
      Age: int
      Numbers: List<int> }

This is a similar class in Dart that has some immutability features out of the box.

class Person {
  Person(
    this.firstName,
    this.lastName,
    this.age,
    this.numbers,
  );

  final String firstName;
  final String lastName;
  final int age;
  final List<int> numbers;
}

But the results are very different. The first thing you should notice about the F# version is that the Numbers list is immutable at compile time. We don’t even have add or remove methods to modify the list. F# satisfies the safety requirements for records and collections already.

[Screenshot: the F# list offers no add or remove methods]

The list on the Dart version is completely mutable. Nothing stops us from modifying it, either at compile time or at runtime. This code compiles and runs:

[Screenshot: adding to the Dart list succeeds]

How about structural equality? Well, F# does this out of the box. This comparison returns true because all the values in the record match.

[Screenshot: F# structural equality comparison returning true]

The Dart version does not have structural equality by default. If we use the tool dnSpy, we can see what the F# code looks like as C# code. 

[Screenshot: the F# record decompiled in dnSpy, showing the generated equality members]

And by default, all records are immutable in F#, so no matter how many fields we add to the type graph, we have recursive immutability by default. Flutter uses the immutable annotation to specify the immutable type contract. Incidentally, Dart doesn’t have this annotation out of the box. But this only forces us to mark the class’s fields as final. The analyzer does not recursively check that all fields are immutable and all fields of fields are immutable. It doesn’t check that collections are immutable.

There is one gotcha with F#, though. F# does allow mutable fields, so it is possible to break the immutability contract. This code runs, and it’s still considered a record. But, these object instances are unequal because the F# type system knows that classes are mutable.

[Screenshot: an F# record with a mutable field]

So, we see that Dart does not have the same automatic immutability qualities that F# does, but not even F# is perfect. We can still use tools to fill the gaps in Dart.

Dart: Filling The Gaps

As mentioned, Dart doesn’t have compile-time immutable lists by default. The fixed_collections package offers a good solution by deprecating members that mutate the list. You can use the unmodifiable constructor of List<> to create an immutable list, but this does not provide compile-time safety.

We can add structural equality with the equatable package. But this package requires us to specify the properties for structural comparison. There is no automation, so we can easily make mistakes and introduce very subtle bugs. If we add a field to a type and forget to add it to the props, the comparison will not work correctly. This can be a horrendously difficult problem to pinpoint.

The freezed package may offer a better solution. If we define our type like this, it generates code useful for structural equality.

// Manually entered code
@freezed
class Person with _$Person {
  const factory Person({
    required String firstName,
    required String lastName,
    required List<int> numbers,
    required int age,
  }) = _Person;

  factory Person.fromJson(Map<String, Object?> json) => _$PersonFromJson(json);
}

// Some of the generated code from the tool
class _$_Person implements _Person {
  const _$_Person(
      {required this.firstName,
      required this.lastName,
      required final List<int> numbers,
      required this.age})
      : _numbers = numbers;

  factory _$_Person.fromJson(Map<String, dynamic> json) =>
      _$$_PersonFromJson(json);

  @override
  final String firstName;
  @override
  final String lastName;
  final List<int> _numbers;
  @override
  List<int> get numbers {
    // ignore: implicit_dynamic_type
    return EqualUnmodifiableListView(_numbers);
  }

  @override
  final int age;

  @override
  String toString() {
    return 'Person(firstName: $firstName, lastName: $lastName, numbers: $numbers, age: $age)';
  }

  @override
  bool operator ==(dynamic other) {
    return identical(this, other) ||
        (other.runtimeType == runtimeType &&
            other is _$_Person &&
            (identical(other.firstName, firstName) ||
                other.firstName == firstName) &&
            (identical(other.lastName, lastName) ||
                other.lastName == lastName) &&
            const DeepCollectionEquality().equals(other._numbers, _numbers) &&
            (identical(other.age, age) || other.age == age));
  }

  @JsonKey(ignore: true)
  @override
  int get hashCode => Object.hash(runtimeType, firstName, lastName,
      const DeepCollectionEquality().hash(_numbers), age);

  @JsonKey(ignore: true)
  @override
  @pragma('vm:prefer-inline')
  _$$_PersonCopyWith<_$_Person> get copyWith =>
      __$$_PersonCopyWithImpl<_$_Person>(this, _$identity);

  @override
  Map<String, dynamic> toJson() {
    return _$$_PersonToJson(
      this,
    );
  }
}

Unfortunately, there is still an issue. It’s possible to change the collection from outside the freezed type. We can run this and the list gets modified. Thanks to Alessio Salvadorini for pointing this one out.

[Screenshot: the list is modified from outside the freezed type]

The other issue is that by default freezed does not give us compile-time collection safety. This example causes a runtime error, but the compiler doesn’t catch the error.

[Screenshot: the mutation fails with a runtime exception only]

You can add compile-time safety to your Lists, Sets and Maps with the fixed_collections package.

[Screenshot: using the fixed_collections package]

In order to see compilation errors, you must add the deprecated_member_use code analysis option.

[Screenshots: enabling the deprecated_member_use analysis option, and the resulting warning on add]

One caveat here is that, at the time of writing, I was not able to get this working with json_serializable; I got code-generation errors that I could not fix. If you know how to do this, please reach out to me on Twitter.

There are also other custom immutable collection libraries that you can use such as kt_dart, built_collection and fast_immutable_collections. Just be aware that these collections don’t implement the List<>, Set<> and Map<> interfaces, so you may need to do conversion in some parts of your code.

Lastly, if you use any tool that generates source code, you need to configure the pipelines to regenerate the code on every build. Otherwise, you may forget to generate the code.

Wrap Up

It’s not surprising that F# has first-class support for immutability, while Dart lacks some features. Dart is a pragmatic language that aims at broad uptake and doesn’t take a purist approach to functional programming. Still, we can use tooling to fill the gaps in Dart, and Dart records are a promising addition to the language. Static metaprogramming will probably make the automation of things like structural equality easier.

The takeaway from this article is that immutability is not simple, and we shouldn’t treat it as such. We shouldn’t use Dart/Flutter constructs that require structural equality and immutability unless we are willing and have time to implement immutability properly in our projects. Even then, there are some easy ways to break immutability.

For this reason, I suggest a rethink of the need for immutable state in all scenarios. The Flutter documentation explicitly uses mutable state with StatefulWidgets and in the Simple app state management example. While immutability is preferable, you don’t have to implement it in every part of every app. You need to weigh up the pros and cons of your scenario. 

F# is the clear winner for immutability, which may make it a great option for building Flutter apps in the future. Still, Dart records are on the way, and I totally expect that immutability will become a first-class citizen in Dart before long.

At FOSDEM, all they talk about is containers...

Containers

Who Am I?

- Me -
First name: Nikolai
Surname: Domaev
Address: Wilrijk
Nationality: Russian / Belgian
Contact: nik dot kel at proximus dot be

- Education, recent -
2015 - LPI Linux Essentials
2014 - LFS101x Introduction to Linux (edX Linux Foundation course)

- Education, past -
Graduaat Informatica (50%)
Netwerk-Assistent, 2008-9
Computer Technicus, 2008-10
A2 diploma Carrosserie (auto body work), 2004-7
Toegepaste Informatica, 2002-4

- Professional -
Universiteit Antwerpen, since 2015 (SCCM)
Artesis, 2010-2015 (Altiris + FOG + OCS + GLPI)

- Other work -
Xylos, 2008-9
Carrosserie Cryns, 2007-8
Martinique, 2006-7

- Skills -
Open source at work: FOG (Free Open Ghost), OCS (Open Computer Inventory), iTALC (intelligent teaching and learning with computers)
Deployment: SCCM (System Center), Altiris (Deployment Solution)

- Other interests -
DRP.su / DISM
PlayOnLinux / CrossOver
RFRemix / ReactOS
Translation tools: SDL Trados Studio, WinCAPS, Swift subtitling

- Languages -
Russian, English, Dutch

http://home.scarlet.be/spb30297/index.html
Linux sites I read:
http://lwn.net/
http://distrowatch.com/
http://slashdot.org/
http://www.xakep.ru/
http://hackaday.com/
http://liliputing.com/
https://www.linux.org.ru/
http://www.phoronix.com
http://www.linuxtoday.com/
http://www.osnews.com/


LiFo
LPIc


