Common Lisp Compiling and execution time


Typical arithmetic stuff in Common Lisp can be slow. Improving it is possible, but needs a bit of knowledge.

Reasons:

  • Common Lisp numbers are not what the machine provides (bignums, rational, complex, ...)
  • automatic change from fixnum to bignum and back
  • generic math operations
  • tagging uses bits of the word size
  • consing of numbers

One thing you can see from the profiling output is that you generate 1.7 GB of garbage. This is a typical hint that your number operations cons. Getting rid of that is often not easy; it is just a guess on my side that these are number operations.
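
As a rough illustration (a sketch with made-up function names, SBCL-style declarations), type declarations are what let the compiler replace generic, consing arithmetic with open-coded machine operations:

;; Generic version: X and Y may be any kind of number, so the compiler
;; calls generic +/* and typically allocates boxed floats.
(defun slow-norm2 (x y)
  (+ (* x x) (* y y)))

;; Declared version: with double-float arguments the arithmetic can be
;; open-coded and intermediate results kept out of the heap.
(defun fast-norm2 (x y)
  (declare (type double-float x y)
           (optimize (speed 3) (safety 1) (debug 1)))
  (+ (* x x) (* y y)))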

Ken Anderson (unfortunately he died a few years ago) has some advice on his web site for improving numeric software: http://openmap.bbn.com/~kanderso/performance/

A usual solution is to give the code to an experienced Lisp developer who knows a bit about the compiler used and/or its optimizations.

First of all, never ever declare (speed 3) together with (safety 0) at the top level, i.e. globally. This will sooner or later come back and bite off your head. At these levels, most Common Lisp compilers do less safety checking than C compilers. For instance, some Lisps drop checking for interrupt signals in (safety 0) code. Next, (safety 0) very rarely gives noticeable gains. I would declare (speed 3) (safety 1) (debug 1) in hot functions, possibly going to (debug 0) if this brings a noticeable gain.
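
For illustration (the function name is hypothetical), keep the aggressive settings local to the hot function rather than in a global declaim:

;; Avoid a file-level (declaim (optimize (speed 3) (safety 0))).
;; Instead, declare inside the functions that actually need it:
(defun normalize-weights (weights)
  "Hypothetical hot function: normalize a sequence of weights in place."
  (declare (optimize (speed 3) (safety 1) (debug 1)))
  (let ((sum (reduce #'+ weights)))
    (map-into weights (lambda (w) (/ w sum)) weights)))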

Otherwise, without looking at some of the actual code, it's hard to come up with suggestions. Judging from the time() output, it seems like the GC pressure is rather high. Make sure that you use open-coded arithmetic in hot functions, and don't needlessly box floats or ints. Use (disassemble 'my-expensive-function) to take a close look at the code the compiler generated. SBCL will provide lots of helpful output when compiling with a high priority on speed, and it can be worthwhile to try to eliminate some of those warnings.
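
For example (a sketch with a made-up function name, assuming SBCL or another native-code compiler), a type-declared loop over a specialized float vector can be checked with disassemble to confirm the arithmetic is open-coded:

(defun sum-squares (v)
  (declare (type (simple-array double-float (*)) v)
           (optimize (speed 3) (safety 1) (debug 1)))
  (let ((acc 0d0))
    (declare (type double-float acc))
    (dotimes (i (length v) acc)
      (incf acc (* (aref v i) (aref v i))))))

;; Inspect the generated machine code; with the declarations above the
;; multiply/add should be inline instructions, not calls into generic math.
(disassemble 'sum-squares)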

It is also important that you use a fast data structure for representing particles, using specialized (unboxed) arrays and some macrology if needed.
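
A minimal sketch of what that could look like (the slot names are made up): a struct-of-arrays layout where each particle attribute lives in a specialized, unboxed double-float array:

(defstruct particles
  ;; One specialized array per attribute: unboxed storage, contiguous in
  ;; memory, no per-particle consing during updates.
  (x      (make-array 0 :element-type 'double-float) :type (simple-array double-float (*)))
  (y      (make-array 0 :element-type 'double-float) :type (simple-array double-float (*)))
  (weight (make-array 0 :element-type 'double-float) :type (simple-array double-float (*))))

(defun make-particle-set (n)
  (make-particles
   :x      (make-array n :element-type 'double-float :initial-element 0d0)
   :y      (make-array n :element-type 'double-float :initial-element 0d0)
   :weight (make-array n :element-type 'double-float :initial-element (/ 1d0 n))))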

If all the code in "myfile.lisp" does is call into the parts where you do the computations, then no, compiling that file will not noticeably improve your run-time. The difference between the two cases amounts to "we compiled a few loops" that call functions which are either compiled or interpreted in both cases.

To gain improvement from compiling, you need to also compile the code that is called. You may also need to type-annotate your code, in order to allow your compiler to optimize better. SBCL has quite good compiler diagnostics for missed annotations (to the point that people complain it is too verbose while compiling).
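
A sketch of the kind of annotation meant here (the function is hypothetical): an ftype proclamation tells the compiler the argument and return types at every call site, in addition to any declarations inside the function:

(declaim (ftype (function (double-float double-float) double-float)
                gaussian-weight))

(defun gaussian-weight (distance sigma)
  (declare (optimize (speed 3) (safety 1)))
  (exp (- (/ (* distance distance)
             (* 2d0 sigma sigma)))))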

As far as loading time goes, it may actually be the case that loading a compiled file takes longer (there's a whole slew of essentially dynamic linking happening at load time). If you're not changing your code frequently, but are changing the data you process, it may be an advantage to prepare a new core file with your particle-filtering code already in-core.
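
A sketch of preparing such a core (the file names are made up; the calls are implementation-specific): load your code into a fresh image, then dump it:

;; (load (compile-file "particle-filter.lisp"))   ; get the code in-core first

;; SBCL: write a core with everything already loaded; start it later with
;;   sbcl --core particle-filter.core
#+sbcl (sb-ext:save-lisp-and-die "particle-filter.core")

;; Clozure CL equivalent:
#+ccl (ccl:save-application "particle-filter.image")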

Several points:

  1. Try to move file I/O out of the loop if possible; read the data into memory in a batch before iterating (see the sketch after this list). File I/O is several orders of magnitude slower than memory access.

  2. Try SBCL if execution speed is important for you.

  3. A tenfold increase in your input results in roughly a tenfold increase in execution time. That is linear, so your algorithm seems fine; you just need to work on the constant factor.

  4. Take advantage of the Lisp workflow: edit a function, compile that function, and test, instead of edit file, compile file, and test. The difference becomes pronounced as your projects get bigger (or when you try SBCL, which takes longer to analyze/optimize your programs in order to produce faster code).
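
Regarding point 1, a minimal sketch (the file name is made up): read the whole input into a vector once, then iterate over it purely in memory:

(defun read-all-lines (path)
  "Read every line of PATH into an adjustable vector before processing."
  (with-open-file (in path)
    (loop with lines = (make-array 0 :adjustable t :fill-pointer 0)
          for line = (read-line in nil nil)
          while line
          do (vector-push-extend line lines)
          finally (return lines))))

;; (defparameter *data* (read-all-lines "measurements.txt"))
;; ... then run the particle filter over *data* with no file I/O in the loop.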

Welcome to Clozure Common Lisp Version 1.7-r14925M  (DarwinX8664)!
? (inspect 'print)
[0]     PRINT
[1]     Type: SYMBOL
[2]     Class: #<BUILT-IN-CLASS SYMBOL>
        Function
[3]     EXTERNAL in package: #<Package "COMMON-LISP">
[4]     Print name: "PRINT"
[5]     Value: #<Unbound>
[6]     Function: #<Compiled-function PRINT #x30000011C9DF>
[7]     Arglist: (CCL::OBJECT &OPTIONAL STREAM)
[8]     Plist: NIL

Inspect> (defun test (x) (+ x 1))
TEST
Inspect> (inspect 'test)
[0]     TEST
[1]     Type: SYMBOL
[2]     Class: #<BUILT-IN-CLASS SYMBOL>
        Function
[3]     INTERNAL in package: #<Package "COMMON-LISP-USER">
[4]     Print name: "TEST"
[5]     Value: #<Unbound>
[6]     Function: #<Compiled-function TEST #x302000C5EFFF>
[7]     Arglist: (X)
[8]     Plist: NIL
Inspect> 

Note that both #'print and #'test are listed as compiled functions. This means that the only performance difference between loading a .lisp file and loading a compiled file is the compile time, which I'm assuming is not the bottleneck in your scenario. It usually isn't, unless you are using a lot of macros and code transformation is the primary purpose of your program.

This is one of the main reasons I don't ever deal with compiled Lisp files. I just load all the shared libraries/packages I need into my core file, and then load a few specific .lisp functions/files on top of that when I'm working on a particular project. And, at least for SBCL and CCL, everything is listed as 'compiled'.
