Is Common Lisp suitable for simple numerical computations?

Mainstream imperative programming languages like C/C++ and Fortran are usually the preferred choice for number-crunching applications. Having moved to higher-level languages in recent years, I want to evaluate how well they handle that task. This is not an exhaustive comparison but rather a pretext to practice my Common Lisp and OCaml skills.

In his article, Didier Verna compares Common Lisp and C on basic image processing tasks and concludes that Lisp can be faster than C. I think the choice of tasks for that benchmark could lead to no other conclusion: a memory-bound computation cannot distinguish language performance. I decided to reproduce the same benchmark, adding OCaml, and see what I could learn in the process.


The benchmark compares the performance of C++, Common Lisp and OCaml on the DAXPY BLAS Level 1 operation:

z = a*x + y

where x, y and z are double precision float arrays and a is a scalar double precision float.

While I usually prefer single precision floating point numbers for benchmarks, OCaml only supports double precision floats. This computation is memory bound, so I expect no real difference in performance among the three languages.
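Here is my own back-of-envelope estimate of why DAXPY is memory bound (not from the original benchmark), assuming 8-byte doubles and that every element travels to or from main memory:

```latex
\text{per element: } 2 \text{ flops (one multiply, one add)}, \qquad
3 \times 8 = 24 \text{ bytes of traffic (read } x_i, y_i \text{, write } z_i)
\]
\[
\text{arithmetic intensity} = \frac{2}{24} \approx 0.083 \ \text{flops/byte}
```

At roughly 0.08 flops per byte, a modern CPU saturates its memory bus long before its floating-point units, so the quality of the generated arithmetic code barely matters.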

The test is run 10 times for each size from 10 to 100,000,000 elements to reduce the influence of other tasks running on the same computer.


The full code is located on GitHub.


C++
#include <cstddef>
#include <vector>

template<typename T>
void saxpy(std::vector<T> &z, const T a, const std::vector<T> &x, const std::vector<T> &y) {
    for (std::size_t i = 0; i < x.size(); ++i) {
        z[i] = a*x[i] + y[i];
    }
}

Clang does a good job at auto vectorization. Here is the relevant part of the assembly code:

LBB10_6:                                ## %vector.body
                                        ## =>This Inner Loop Header: Depth=1
	movupd	-16(%rsi), %xmm2
	movupd	(%rsi), %xmm3
	mulpd	%xmm1, %xmm2
	mulpd	%xmm1, %xmm3
	movupd	-16(%rdx), %xmm4
	movupd	(%rdx), %xmm5
	addpd	%xmm2, %xmm4
	addpd	%xmm3, %xmm5
	movupd	%xmm4, -16(%rdi)
	movupd	%xmm5, (%rdi)
	addq	$32, %rdi
	addq	$32, %rdx
	addq	$32, %rsi
	addq	$-4, %rbx
	jne	LBB10_6


OCaml
let daxpy z a x y =
  for i = 0 to Array.length x - 1 do
    z.(i) <- a *. x.(i) +. y.(i)
  done

Common Lisp

(defun daxpy (z a x y)
  "Compute BLAS Level 1 DAXPY operation: zi = a * xi + yi with the MAP-INTO CL procedure"
  (declare (type double-float a))
  (declare (type (simple-array double-float (*))
                 z x y))
  (declare (optimize (speed 3)
                     (compilation-speed 0)
                     (safety 0)
                     (debug 0)))
  (let ((f (lambda (xi yi)
             (+ (* a xi) yi))))
    (map-into z f x y)))

It looks like SBCL doesn't do auto vectorization, as shown by (disassemble #'daxpy):

; disassembly for DAXPY
; Size: 109 bytes. Origin: #x100308BC4F
; 4F:       4C8B46F9         MOV R8, [RSI-7]                  ; no-arg-parsing entry point
; 53:       498BC0           MOV RAX, R8
; 56:       4C8B4BF9         MOV R9, [RBX-7]
; 5A:       498BF9           MOV RDI, R9
; 5D:       4C8B51F9         MOV R10, [RCX-7]
; 61:       4D8BC2           MOV R8, R10
; 64:       4D8BD0           MOV R10, R8
; 67:       4C8BCF           MOV R9, RDI
; 6A:       4C39C7           CMP RDI, R8
; 6D:       4D0F4FCA         CMOVNLE R9, R10
; 71:       498BF9           MOV RDI, R9
; 74:       4C8BCF           MOV R9, RDI
; 77:       4C8BC0           MOV R8, RAX
; 7A:       4839F8           CMP RAX, RDI
; 7D:       4D0F4FC1         CMOVNLE R8, R9
; 81:       31C0             XOR EAX, EAX
; 83:       EB29             JMP L1
; 85:       660F1F840000000000 NOP
; 8E:       6690             NOP
; 90: L0:   F20F104C8601     MOVSD XMM1, [RSI+RAX*4+1]
; 96:       F20F10548301     MOVSD XMM2, [RBX+RAX*4+1]
; 9C:       F20F59CB         MULSD XMM1, XMM3
; A0:       F20F58CA         ADDSD XMM1, XMM2
; A4:       F20F114C8101     MOVSD [RCX+RAX*4+1], XMM1
; AA:       4883C002         ADD RAX, 2
; AE: L1:   4C39C0           CMP RAX, R8
; B1:       7CDD             JL L0
; B3:       488BD1           MOV RDX, RCX
; B6:       488BE5           MOV RSP, RBP
; B9:       F8               CLC
; BA:       5D               POP RBP
; BB:       C3               RET


On OS X:

Size          C++      OCaml    Common Lisp
1,000,000     3.05     2.87     3.1
10,000,000    23.52    25.56    24.8
100,000,000   237.17   264.5    253

Given how the measurements are taken it is hard to say for sure, but on OS X, OCaml and SBCL Common Lisp seem to run slightly slower than C++, and I couldn't find a reason for that.

On Linux (different computer):

Size          C++      OCaml    Common Lisp
1,000,000     1.82     2.12     1.89
10,000,000    18.22    18.79    18.19
100,000,000   181.73   185.36   183.9

On Linux, the three languages run at the same speed, as expected.

Is it really memory bound?

If we increase the arithmetic operation count without increasing the number of memory accesses, the code runs in exactly the same time in all three languages. This confirms that memory bandwidth is the bottleneck in this benchmark, and that all three tested languages absorb the extra arithmetic without slowing down.


This micro-benchmark is a good indication that high-level languages can be used for numerical computations. I find the lack of single precision floats in OCaml annoying, because not every computation needs double precision.

This is old news: OCaml and Common Lisp have been used for very intensive tasks for a long time. I think that in the near future these languages will become more of a mainstream choice for numerical computations, just as Python is increasingly adopted by the scientific community.