In the past month, I have been learning and writing in Haskell. My journey with Haskell began back in my junior year of high school in 2020. Recently, I decided to pick it up again, and I want to reflect on the core concepts that I learned.
Note that this article glosses over many basic functional programming concepts and syntax. Instead, I chose to focus on the ideas that interested me most.
Lambda Calculus
A crucial idea to notice is that Haskell is largely inspired by lambda calculus, an alternative model of computation to Turing machines. A lambda function is written like λx. x + 1, where the lambda denotes the declaration of a function and the period separates its parameter from its body.
Here is an example of how lambda calculus shares parallels with Haskell. In lambda calculus, in order to define functions with multiple parameters, we nest lambdas together: λx. λy. x + y.
Now, here is how we write the signature of a function that adds two integers:
add :: Int -> (Int -> Int)
In practice, the parentheses are omitted. As we can see, functional programmers treat functions with multiple parameters as syntactic sugar for nested single-parameter functions, a technique known as currying. Concepts like partial application may seem alien on first impression, but feel natural to someone with a background in lambda calculus.
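Here is a small sketch of what this looks like in practice (`add` and `add3` are names I made up for illustration):

```haskell
-- add takes its two arguments one at a time: Int -> (Int -> Int).
add :: Int -> Int -> Int
add x y = x + y

-- Partial application: supplying only the first argument
-- yields a new function waiting for the second.
add3 :: Int -> Int
add3 = add 3

main :: IO ()
main = print (add3 4)  -- 7
```

Because every function really takes one argument, `add 3 4` is just `(add 3) 4`.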
Types
The type system is one of the most interesting parts of Haskell. First, Haskell uses algebraic data types, which are extremely expressive. Here is how you define a linked list in Haskell:
data List a = Nil | Cons a (List a)
Here is how you define a binary tree (constructor names must be unique within a module, so we use Leaf rather than reusing Nil):
data BinaryTree a = Leaf | Node (BinaryTree a) a (BinaryTree a)
You can see it is elegantly succinct. Algebraic data types are especially useful in programs that have complex, recursive structures, such as compilers.
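To show why pattern matching pairs so well with these types, here is a sketch that consumes the List type from above (`total` is an illustrative name):

```haskell
-- The linked list from earlier.
data List a = Nil | Cons a (List a)

-- Each constructor gets its own equation; the recursion
-- mirrors the recursive shape of the type itself.
total :: Num a => List a -> a
total Nil         = 0
total (Cons x xs) = x + total xs

main :: IO ()
main = print (total (Cons 1 (Cons 2 (Cons 3 Nil))))  -- 6
```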
Type classes represent another cornerstone of Haskell's design, offering an elegant approach to polymorphism. They are similar to interfaces in traditional object-oriented languages.
class Eq a where
    (==) :: a -> a -> Bool
In this example, any datatype that supplies an instance defining the (==) operator is an Eq type; the instance is declared separately from the datatype itself. In addition, functions can specify constraints that force their parameters to be instances of particular type classes.
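Here is a sketch using the built-in Eq class (the `Color` type and `contains` function are my own illustrative names):

```haskell
data Color = Red | Green | Blue

-- Declaring the instance makes Color a member of Eq.
instance Eq Color where
  Red   == Red   = True
  Green == Green = True
  Blue  == Blue  = True
  _     == _     = False

-- The (Eq a) constraint restricts a to types that support (==).
contains :: Eq a => a -> [a] -> Bool
contains _ []     = False
contains x (y:ys) = x == y || contains x ys

main :: IO ()
main = print (contains Green [Red, Blue, Green])  -- True
```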
Monads
Monads are actually a straightforward concept once category theory is left out of the picture: they are really just the pipeline design pattern. The most essential function that monads define is the bind operator (>>=), which behaves much like `Promise.then` in JavaScript.
readFile "in.txt" >>= print
`readFile` reads the contents of a file and produces it as a string. However, since reading a file is a side effect, the result is wrapped in an IO monadic context: the type is IO String rather than String. The bind operator runs `print` on the output of the `readFile` call once it completes; if the read fails, an IO exception is thrown instead. This pipeline design pattern makes it easier to deal with real-world data systems where operations may fail.
Monads are also the abstraction behind Haskell's syntactic sugar for imperative-style code: `do` notation.
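The pipeline above can be written imperatively with a `do` block. This sketch writes a file first so it is self-contained (`out.txt` is an assumed filename):

```haskell
main :: IO ()
main = do
  writeFile "out.txt" "hello"      -- an IO action, run in sequence
  contents <- readFile "out.txt"   -- <- binds the result, like >>=
  putStrLn contents
```

Each `<-` desugars into an application of (>>=): the last two lines are equivalent to `readFile "out.txt" >>= putStrLn`.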
Lazy evaluation
fibs = 0 : 1 : [ a + b | (a, b) <- zip fibs (tail fibs)]
Perhaps Haskell's most distinctive feature is its lazy evaluation. The code above defines the Fibonacci sequence. Notice, first, that `fibs` uses itself in its own definition; this is only possible because of lazy evaluation. Second, the list is infinite: computation is only done on `fibs` when something forces evaluation, like indexing.
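For instance, forcing a finite prefix or a single index makes Haskell compute exactly as much of the infinite list as needed:

```haskell
fibs :: [Integer]
fibs = 0 : 1 : [ a + b | (a, b) <- zip fibs (tail fibs) ]

main :: IO ()
main = do
  print (take 10 fibs)  -- [0,1,1,2,3,5,8,13,21,34]
  print (fibs !! 30)    -- 832040
```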
Practicality
Although Haskell is an amazing language that expanded my horizons, using it forced me to question its practicality. One benefit of Haskell is that functions can be easily reused and composed because of partial application.
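As a sketch of this reuse, composition (.) and partial application combine small functions into new ones without any boilerplate (`doubleEvens` is an illustrative name):

```haskell
-- filter even and map (* 2) are both partially applied,
-- then glued together with function composition.
doubleEvens :: [Int] -> [Int]
doubleEvens = map (* 2) . filter even

main :: IO ()
main = print (doubleEvens [1 .. 6])  -- [4,8,12]
```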
Haskell programs are divided into two layers. The outer layer, which contains network requests, user input, and IO, is unpredictable and error-prone, and requires error handling. The inner layer is composed of pure functions that have no side effects, and contains the code for business logic and data manipulation.
In the real world, however, many aspects of the language make it impractical.
- Complexity: Despite what Haskellers say, excessive abstraction can impede productivity. Haskell encourages separating functions into many small composable parts and recombining them. Although this strategy can occasionally be beneficial, Haskell takes it to an extreme that can make a codebase difficult to navigate and harder to maintain.
- Design: In practice, business logic naturally gravitates toward imperative patterns, and embracing mutable state can often eliminate convoluted recursive structures. Haskell supports imperative style and mutable state via monads, but it is unnecessarily complicated compared to a language like Python.
- Efficiency: A declarative language is double-edged. The declarative style is widely used in query languages because it hides complexity and gives the compiler a chance to speed up commonly used operations. For a general-purpose language, however, there are two significant drawbacks. First, users are forced to trust that the compiler can optimize their code. Second, runtime complexity is ambiguous, which is something most software engineers cannot compromise on.
- Debugging: It is difficult to debug a Haskell program because the language lacks support for a conventional error system.
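To illustrate the design point above, mutable state is possible in Haskell, but it must be threaded through a monad. A minimal sketch using `IORef` from the standard library:

```haskell
import Data.IORef

-- A mutable counter, incremented five times inside IO.
main :: IO ()
main = do
  counter <- newIORef (0 :: Int)
  mapM_ (\_ -> modifyIORef counter (+ 1)) [1 .. 5]
  final <- readIORef counter
  print final  -- 5
```

The equivalent Python is a two-line loop over a plain variable, which is the author's point about ceremony.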
As much as I loved learning and writing Haskell, I have to admit that its place in industry is limited by its uncompromising commitment to purity. Nonetheless, it has served as an inspiration for many of the languages we use today and solidified its place in academia.