Planet Haskell without Tom Moertel's posts (Pipes Output)
http://pipes.yahoo.com/pipes/pipe.info?_id=Jti_id1G3hGa1KrxdPQQIA
Wed, 01 Oct 2014 16:18:09 +0000

Why Traversable/Foldable should not be in the Prelude
http://neilmitchell.blogspot.com/2014/10/why-traversablefoldable-should-not-be.html
Neil Mitchell | Wed, 01 Oct 2014 12:02:00 +0000

September 2014 1HaskellADay problems and solutions
http://logicaltypes.blogspot.com/2014/10/september-2014-1haskelladay-problems.html
September, 2014

September 1st, 2014: They tried to kill the Metal... I don't know where I'm going with that. But rock-n-roll with today's #haskell exercise http://lpaste.net/110331

September 2nd, 2014: Good morning! Triangle Sums is our #haskell problem for today: http://lpaste.net/110404 No triangles were harmed in the solution of their sum (nor in the summation of their solution) http://lpaste.net/110432

September 3rd, 2014: Pay It Forward. What? You didn't think I'd just say: today's #haskell problem is hard and leave it at that, did you? http://lpaste.net/110444 Paid. Or: a constructivist approach reduces the generated sets from 200M+ down to 8 possible solutions http://lpaste.net/110684 That's doable. ... and here is the 'Mr. Clean' version of the solution: fast, and neat. Groovy! http://lpaste.net/110685

September 4th, 2014: Today's #haskell problem: Abacus words http://lpaste.net/110494 because MRFE says "I don't like your math problems; I want more word problems"

September 5th, 2014: These 'edgy' relationships these days! Remember when today's #haskell problem didn't involve graph theory? http://lpaste.net/110543 Data.Graph FTW! http://lpaste.net/110571 A solution to today's 4sum #haskell problem, and it didn't require generating 1625702400 solutions!

September 8th, 2014: We have puzzles 1 and 5 from the "Montley Stew" problem set for today's #haskell problem http://lpaste.net/110699 The solution-style to Montley Stew is swimming in list-compression stew http://lpaste.net/110750

September 9th, 2014: King Tut! http://lpaste.net/110789 Our #haskell problem for today is NOT a Pyramid Scheme. Maybe.

September 10th, 2014: 'Sed' is 'but(t)' just another word ... in "'laddin" http://lpaste.net/110833 Today's #haskell problem is mix-n-match words. "But(t) I sed ..." ARG! Enough with the 3rd-grade humor! On with the solution to the mix-n-match words! http://lpaste.net/110859

September 11th, 2014: A-plus for you when you solve today's #haskell exercise http://lpaste.net/110862 But an F- (NOT an F# ... geddit?) for /usr/share/dict/words :/ A solution to today's injectInto #haskell problem http://lpaste.net/110891

September 12th, 2014: Today's #Haskell problem comes from She. She who commands you solve it before coding it. http://lpaste.net/110925 So, you know: there it is. Okay, 'thurt' is a word in WHICH Universe? A solution to today's #haskell 'ditto' problem http://lpaste.net/110955

September 15th, 2014: "The name of the game is Connect Four!" and today's #haskell problem http://lpaste.net/111065 as suggested by a tweet from @DrEugeniaCheng. I played Connect 4 against myself and lost! :/ A semi-solution to today's #haskell problem at http://lpaste.net/111105

September 16th, 2014: There's more than one way to slice and dice a matrix for today's #haskell problem http://lpaste.net/111109 (follow-up to yesterday's Connect4) A Hack-n-slash solution to today's diagonal view of matrices. http://lpaste.net/111130 The bonus solution is provided back at the Connect Four answer to make that game complete: http://lpaste.net/111105

September 17th, 2014: Do you Yahoo!? Today's #haskell problem: connecting to Yahoo!'s financial webservice http://lpaste.net/111071 I like my java GREEN! http://lpaste.net/111238 (java means 'coffee') A solution to the stock webservice #haskell problem.

September 18th, 2014: Star (Tuna?) Fish? http://lpaste.net/111216 A radial word-finding/matching game is today's #haskell puzzle. Wait. Quantum...WHAT? http://lpaste.net/111259 A solution to today's #haskell problem using quantum superpositions to solve it. I'm not joking. STAR POWER! http://lpaste.net/edit/111259 A solution for pretty-printing the star-puzzle.

September 19th, 2014: Continued fractions and dual inversionals are today's #haskell problem http://lpaste.net/111067 It even comes with (thoughts about) flowers. Today's problem was inspired by a comment, then the main article, from @aperiodical http://aperiodical.com/2013/11/from-the-mailbag-dual-inversal-numbers/#comment-611141 That was some LOOOOOOOONG Division! http://lpaste.net/111314 A solution to today's #haskell problem.

September 22nd, 2014: Oh, noes! The 'M'-word! http://lpaste.net/111419 for today's #haskell exercise. Project Euler, problem 11: 'M' is for 'Monoid' http://lpaste.net/111435 A solution to today's #haskell problem.

September 23rd, 2014: "Oh, the snark bites, with his teeth, dear." MACD Knife ... geddit? http://lpaste.net/111468 Today's #haskell problem is a technical indicator.

September 24th, 2014: Jones, Smith, and Brown work at the Bank... but not Mr. Banks. A logic puzzle from 1957 for today's #haskell puzzle http://lpaste.net/111461. A pair of PhDs (http://lpaste.net/111580) helped to solve today's #haskell problem. Neatly, too, I might add.

September 25th, 2014: Corned-beef hashi? http://lpaste.net/111452 No, that's not right, and now I'm hungry! :( Shoot! Well, anyway: today's #haskell problem.

September 26th, 2014: HA! I TOLD you we'd be getting to cipher-text! http://lpaste.net/111459 From the 1700's, still: it IS cipher text for today's #haskell problem. Ooh! The Plot Thickens (like pea soup)! http://lpaste.net/111679 "ALLMENHAVING" and "be mis T r U st ed" is a solution to today's problem.

September 29th, 2014: Big (Crypto) Generator! http://lpaste.net/111795 Today's #haskell problem is a follow-on to Friday's. Human Error ... WHAT human error? http://lpaste.net/111829 A solution to today's make-your-own-crypto-table #haskell problem.

September 30th, 2014: "I wanna be a blue-collar man!" Yes, but who, and which occupation? http://lpaste.net/111755 Today's #haskell problem addresses this. Plumber, ... or painter? IDK! http://lpaste.net/111876 TWO solutions to today's #haskell problem (one of them MAY be correct...
:/ )

geophf | Wed, 01 Oct 2014 10:44:00 +0000

GHC Weekly News - 2014/09/30
http://ghc.haskell.org/trac/ghc/blog/weekly20140930
thoughtpolice | Tue, 30 Sep 2014 20:07:51 +0000

List of publications and talks updated
http://jeltsch.wordpress.com/2014/09/30/list-of-publications-and-talks-updated/
The list of publications and talks on my website has been pretty much out of date. Now it is updated, for you to enjoy. :-)

Tagged: publication, talk

Wolfgang Jeltsch | Tue, 30 Sep 2014 16:22:47 +0000

Announcing Yesod 1.4
http://www.yesodweb.com/blog/2014/09/announcing-yesod-1-4
We are happy to announce the release of Yesod 1.4. This includes:

- Releases of all Yesod packages to support version 1.4.
- The book content on yesodweb.com is completely updated for Yesod 1.4, with all snippets confirmed to compile and most of the text proofread from scratch for accuracy (the rest will be finished in the next week).
- A new Stackage snapshot available for GHC 7.8.3.

It's worth mentioning that there have been a ton of improvements to Yesod since version 1.2; they just didn't need any breaking changes. Thanks to everyone who provided code, feedback, and testing for this release; it should be a very solid one!

Here's a collection of links that provide various other pieces of information about this release:

- Announcing Persistent 2
- Persistent 2.1 Release Candidate
- Planning Yesod 1.4
- Yesod 1.4 release date
- Changelog

What is most exciting to report is that this was a very minor change to Yesod, and
therefore most code should be upgradeable with minor changes. First, the
changelog of breaking changes:

New routing system with more overlap checking control

This requires OverloadedStrings and ViewPatterns. The generated code is faster and much more readable.

Yesod routes are not just type-safe; they also check for overlapping that could cause ambiguity. This is a great feature, but sometimes it gets in your way. Overlap checking can be turned off for multipieces, entire routes, and parent routes in a hierarchy. For more information, see the commit comment.

Dropped backwards compatibility with older versions of dependencies

In particular, persistent-1 and wai-2. We will talk more about persistent 2.
wai-3 uses a CPS style that will require some middleware to take an additional CPS parameter.
Looking at the wai-extra source code can help with upgrading, but it should just be a matter of adding an extra parameter.

yesod-auth works with your database and your JSON

There is better support for non-persistent backends in yesod-auth. See pull request 821 for details. For most users, you can fix this by adding instance YesodAuthPersist App to your Foundation.hs.

yesod-auth had already released a breaking change to be able to accept JSON everywhere.
That bumped the version to 1.3.
We like to keep the yesod-* packages in sync, so now everything is getting bumped to 1.4 together.

In the 1.4 release, we also fixed requireAuth and requireAuthId to return a 401 response when a JSON response is requested. See pull request 783.

yesod-test sends HTTP/1.1 as the version

This may require updating tests to expect 303 instead of 302 redirects.

Type-based caching with keys

The type-based caching code was moved into a separate module without Yesod dependencies and documented. If there is interest in seeing this as a separate package let us know, but it is also pretty easy to just copy the module. To me, TypeCache is a beautiful demonstration of Haskell's advanced type system that shows how you can get the best of both worlds in a strongly typed language.

    type TypeMap = HashMap TypeRep Dynamic

Above we have the wonderful juxtaposition of Haskell's strong typing in the key, and dynamic typing in the value. This HashMap is used to cache the result of a monadic action.

    cached :: (Monad m, Typeable a)
           => TypeMap
           -> m a                       -- ^ cache the result of this action
           -> m (Either (TypeMap, a) a) -- ^ Left is a cache miss, Right is a hit

Dynamic is used to have a HashMap of arbitrary value types.
TypeRep is used to create a unique key for the cache.
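To make the idea concrete, here is a minimal self-contained sketch of a cached function matching the signature above. This is not the yesod-core implementation: Data.Map stands in for the unordered-containers HashMap so that the example needs only boot libraries, but the hit/miss logic is the same.

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import qualified Data.Map as M
import Data.Dynamic (Dynamic, toDyn, fromDynamic)
import Data.Typeable (Typeable, TypeRep, typeOf)

-- Data.Map stands in for HashMap here; TypeRep is the cache key.
type TypeMap = M.Map TypeRep Dynamic

cached :: forall m a. (Monad m, Typeable a)
       => TypeMap
       -> m a                       -- ^ cache the result of this action
       -> m (Either (TypeMap, a) a) -- ^ Left is a cache miss, Right is a hit
cached tm act =
    case M.lookup key tm >>= fromDynamic of
      Just hit -> return (Right hit)                 -- hit: the action is never run
      Nothing  -> do
        x <- act                                     -- miss: run the action once
        return (Left (M.insert key (toDyn x) tm, x)) -- and remember its result
  where
    key = typeOf (undefined :: a)
```

Note how fromDynamic recovers the strong typing on the way out: a lookup only counts as a hit if the stored Dynamic really holds a value of the requested type.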
Yesod uses this to cache the authentication lookup of the database for the duration of the request.

    newtype CachedMaybeAuth val = CachedMaybeAuth { unCachedMaybeAuth :: Maybe val }
        deriving Typeable

    cachedAuth
        = fmap unCachedMaybeAuth
        . cached
        . fmap CachedMaybeAuth
        . getAuthEntity

CachedMaybeAuth is a newtype that isn't exported. A TypeRep is specific to a module, so this pattern guarantees that your cache key will not conflict outside of your module. This functionality was in yesod-1.2 even though the code was not separated into a new module.
The 1.4 release adds the ability to cache multiple values per type:

    type KeyedTypeMap = HashMap (TypeRep, ByteString) Dynamic

    cachedBy :: (Monad m, Typeable a)
             => KeyedTypeMap
             -> ByteString                     -- ^ a cache key
             -> m a                            -- ^ cache the result of this action
             -> m (Either (KeyedTypeMap, a) a) -- ^ Left is a cache miss, Right is a hit

This is useful if your monadic action has inputs: if you serialize them to a ByteString, you can use them as a key.

Upgrade guide

The most significant set of changes in the Yesod ecosystem actually landed in
Persistent 2. However, these were mostly internal changes with new features that maintain backwards compatibility, so many users will be unaffected.

To kick off the upgrade process, you need to update your cabal file to allow yesod version 1.4.
If you had constraints on persistent, update them to > 2.1.
If you are using cabal freeze to peg your versions in the cabal.config file, cabal will provide you no assistance in making a smooth upgrade.
You are probably going to want to delete a whole lot of things in cabal.config (or possibly the entire file), and upgrade a lot of dependencies at once.
When you are done and things compile again, you will want to do a cabal freeze.

As has become the custom for each major release, the upgrade
process is documented by the diff of the Haskellers code base upgrading to Yesod 1.4.
For Haskellers it was pretty simple. In sum:

- Replace type YesodPersistBackend App = SqlPersist with type YesodPersistBackend App = SqlBackend.
- Add instance YesodAuthPersist App to Foundation.hs.
- Add the ViewPatterns language extension.

If you have more complex persistent code you may have more to do. Look at the previous post on persistent-2.1.

Tue, 30 Sep 2014 11:00:00 +0000

How we might abolish Cabal Hell, part 1
http://www.well-typed.com/blog/99
At ICFP a few weeks ago a hot topic in the corridors and in a couple talks was the issues surrounding packaging and “Cabal Hell”.
Fortunately we were not just discussing problems but solutions. Indeed I think we have a pretty good understanding now of where we want to be, and several solutions are in development or have reasonably clear designs in peoples’ heads.
I want to explain what’s going on for those not already deeply involved in the conversation. So this is the first of a series of blog posts on the problems and solutions around Cabal hell.
There are multiple problems and multiple solutions. The solutions overlap in slightly complicated ways. Since it is a bit complicated, I’m going to start with the big picture of the problems and solutions and how they relate to each other. In subsequent posts I’ll go into more detail on particular problems and solutions.
“Cabal hell”: the problems
So what is “Cabal hell”? Let’s consult the dictionary…
Cabal Hell
The feeling of powerlessness one has when Cabal does not do what one wanted and one does not know how to fix it.
I’m joking obviously, but my point is that Cabal hell is not a precise technical term. There are a few different technical problems (and misunderstandings and UI problems) that can cause Cabal hell.
A useful concept when talking about this topic is that of the packaging “wild wild west”. What we mean is whether we are in a context where we reasonably expect packages to work together (because there has been some deliberate effort to make them work together), or if we are in the “wild wild west”. In the “wild wild west” we have to do things like deal with packages that were uploaded yesterday by multiple different authors. The point is that nobody has yet had time to try and make things consistent. It is a useful concept because we have developers who need to deal with the “wild wild west” and those who would really rather not, and the solutions tend to look a bit different.
Another term we often use when talking about packages is “consistency”. What we mean is that in a collection of packages there is at most one version of each package. For example when you ask cabal-install to install package A and B, we say that it will try to find a “consistent” set of dependencies – meaning a set including A, B and their dependencies that has only one version of any package.
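As a toy illustration of this notion of consistency (hypothetical types; cabal-install's real solver works on a much richer representation than name/version pairs), a chosen set of package versions is consistent exactly when no package name maps to more than one version:

```haskell
import qualified Data.Map as M
import qualified Data.Set as S

-- Hypothetical simplified types, purely for illustration.
type PackageName = String
type Version = [Int]

-- A plan is consistent when each package appears at exactly one version.
consistent :: [(PackageName, Version)] -> Bool
consistent plan =
    all ((== 1) . S.size) (M.elems grouped)
  where
    -- group the chosen versions by package name
    grouped = M.fromListWith S.union
                [ (name, S.singleton v) | (name, v) <- plan ]
```

So a plan that picks text-1.1 for one dependency and text-1.2 for another is inconsistent, even though each choice is fine on its own.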
“Cabal hell”: the symptoms
So let's consider a breakdown of the technical problems. To start with, let's look at a breakdown based on the symptoms that a developer in Cabal Hell experiences.
We can first break things down by whether there is a solution or not. That is, whether a perfect dependency resolver could find a plan to install the package(s) and their dependencies consistently. We want such a solution because it’s a prerequisite for installing working packages. (We’re ignoring the possibility that there is a solution but the solver fails to find one. That is possible but it’s a relatively rare problem.)
Given the situation where the solver tells us that there is no solution, there are a few different cases to distinguish:
No solution expected
The failure was actually expected. For example a developer updating their package to work with the latest version of GHC is not going to be surprised if their initial install attempt fails. Then based on what the solver reports they can work out what changes they need to make to get things working.
Solution had been expected
The more common case is that the developer was not expecting to be working in the wild west. The developer had an expectation that the package or packages they were asking for could just be installed. In this case the answer “no that’s impossible” from the solver is very unhelpful, even though it’s perfectly correct.
Unnecessary solver failure
The symptoms here are exactly the same, namely the solver cannot find a solution, but the reason is different. More on reasons in a moment.
Even when there is a solution we can hit a few problems:
Compile error
Compilation can fail because some interface does not match. Typically this will manifest as a naming error or type error.
Breaking re-installations
Cabal’s chosen solution would involve reinstalling an existing version of a package but built with different dependencies. This re-installation would break any packages that depend on the pre-existing instance of the installed package. By default cabal-install will not go ahead with such re-installation, but you can ask it to do so.
Type errors when using packages together
It is possible to install two packages and then load them both in GHCi and find that you cannot use them together because you get type errors when composing things defined in the two different packages.
“Cabal hell”: the reasons
So those are the major problems. Let's look at some reasons for those problems.
Inconsistent versions of dependencies required
There are two sub-cases worth distinguishing here. One is where the developer is asking for two or more packages that could be installed individually, but cannot be installed and used together simultaneously because they have clashing requirements on their common dependencies. The other is that a package straightforwardly has no solution (at least with the given compiler & core library versions), because of conflicting constraints of its dependencies.
Constraints wrong
With under-constrained dependencies we get build failures, and with over-constrained dependencies we get unnecessary solver failures. That is, a build failure is (almost always) due to dependency constraints saying some package version combination should work, when actually it does not. And the dual problem: an unnecessary solver failure is the case where there would have been a solution that would actually compile, if only the constraints had been more relaxed.
Single instance restriction
Existing versions of GHC and Cabal let you install multiple versions of a package, but not multiple instances of the same version of a package. This is the reason why Cabal has to reinstall packages, rather than just add packages.
Inconsistent environment
These errors occur because cabal-install does not enforce consistency in the developer’s environment, just within any set of packages it installs simultaneously.
We’ll go into more detail on all of these issues in subsequent posts, so don’t worry if these things don’t fully make sense yet.
“Cabal hell”: the solutions
There are several problems and there isn’t one solution that covers them all. Rather there are several solutions. Some of those solutions overlap with each other, meaning that for some cases either solution will work. The way the solutions overlap with the problems and each other is unfortunately a bit complicated.
Here’s the overview:
So what does it all mean?
We’ll look at the details of the solutions in subsequent posts. At this stage the thing to understand is which solutions cover which problems, and where those solutions overlap.
We’ll start with the two most important solutions. They’re the most important in the sense that they cover the most cases.
Nix-style persistent store with multiple consistent environments
This solves all the cases of breaking re-installations, and all cases of inconsistent environments. It doesn’t help with wrong constraints.
You’ll note that it covers some cases where there is no solution and you might wonder what this can mean. Some cases where there is no solution are due to two (or more) sets of packages that could be installed independently but cannot be installed together consistently. In a nix-style setting it would be possible to offer developers the option to install the packages into separate environments when the solver determines that this is possible.
Curated consistent package collections
These are things like the Debian Haskell packages or Stackage. This solves some cases of each of the different problems: breaking re-installations, inconsistent environments, wrong constraints and lack of consistent solutions. It solves those cases to the extent that the package collection covers all the packages that the developer is interested in. For many developers this will be enough. Almost by definition however it cannot help with the “wild west” of packages because the curation takes time and effort. Unless used in combination with an isolated environment solution (e.g. nix-style, but also less sophisticated systems like hsenv or cabal sandboxes) it does not allow using multiple versions of the collection (e.g. different projects using different Stackage versions).
It is worth noting that these two solutions should work well together. Neither one subsumes the other. We don’t need to pick between the two. We should pick both. The combination would get us a long way to abolishing Cabal hell.
There are also a number of smaller solutions:
Automatic build reporting
This helps with detecting compile errors arising from constraints that are too lax. It doesn’t help with constraints that are too tight. This solution requires a combination of automation and manual oversight to fix package constraints and to push those fixes upstream.
Upper-bound build bots
This is similar to gathering build reports from users, but instead of looking at cases of compile failure (constraints too lax), it explicitly tries relaxing upper bounds and checks if things still compile and testsuites work. Again, this requires automation to act on the information gleaned to minimise manual effort.
Package interface compatibility tools
This is to help package authors get their dependency constraints right in the first place. It can help them follow a version policy correctly, and tell them what minimum and maximum version bounds of their dependencies to use. It does not completely eliminate the need to test, because type compatibility does not guarantee semantic compatibility. Solutions in this area could eliminate a large number of cases of wrong constraints, both too lax and too tight.
Private dependencies
This allows solutions to exist where they do not currently exist, by relaxing the consistency requirement in a safe way. It means global consistency of dependencies is not always required, which allows many more solutions. This solution would cover a lot of cases in the “wild wild west” of packaging, and generally in the large set of packages that are not so popular or well maintained as to be included in a curated collection.
Next time…
So that’s the big picture of the problems and solutions and how they relate to each other. In subsequent posts we’ll look in more detail at the problems and solutions, particularly the solutions people are thinking about or actively working on.

duncan | Tue, 30 Sep 2014 10:19:33 +0000

Persistent 2.1 released
http://www.yesodweb.com/blog/2014/09/persistent-2
Persistent 2.1, a stable release of the next generation of persistent, is released to Hackage. Persistent is an ORM for Haskell that keeps everything type-safe.

Persistent 2.1 features:

- a flexible, yet more type-safe Key type
- a simplified monad stack

I already announced persistent 2 and the 2.1 release candidate. Everyone should set their persistent dependencies to > 2.1 && < 3. 2.0.x was the unstable release and is now deprecated.

I want to thank all the early persistent 2 adopters for putting up with a fast-moving, buggy code base. This was an experiment in shipping an unstable version, and what I learned from it is that it was a great process, but we need to make sure Travis CI is running properly, which it is now!

Persistent 2.1 library support

The persistent and persistent-template libraries should support any kind of primary key type that you need. The backends are still catching up to the new features:

- The persistent-sqlite backend has fully implemented these features.
- persistent-postgres and persistent-mysql don't yet support changing the type of the id field.
- persistent-mongoDB does not yet support composite primary keys.

All of the above packages except persistent-mysql are being well maintained, just developing new features at their own pace. persistent-mysql is in need of a dedicated maintainer; there are some major defects in the migration code that have gone unresolved for a long time now. persistent-redis is in the process of being upgraded to 2.1. persistent-zookeeper was just released, but it is on persistent 1.3.*. There are other persistent packages out there that I have not had the chance to check on yet, most notably persistent-odbc. Feel free to ask for help with upgrading.

Persistent 2.1 upgrade guide

Simple persistent usage may not need any changes to upgrade. The fact that the Key type is now flexible means it may need to be constrained.
So if you have functions that have Key in the type signature that are not specific to one PersistEntity, you may need to constrain them to the BackendKey type.
An easy way to do this is using ConstraintKinds.

    type DBEntity record =
        ( PersistEntityBackend record ~ MongoContext
        , PersistEntity record
        , ToBackendKey MongoContext record
        )

A SQL user would use SqlBackend instead of MongoContext. So you can now change the type signature of your functions:

    - PersistEntity record => Key record
    + DBEntity record => Key record

Depending on how you set up your monad stacks, you may need some changes.
Here is one possible approach to creating small but flexible Monad stack type signatures.
It requires Rank2Types, and the code shown is specialized to MongoDB.

    type ControlIO m = (MonadIO m, MonadBaseControl IO m)
    type LogIO m = (MonadLogger m, ControlIO m)

    -- these are actually types, not constraints
    -- with persistent-2 things work out a lot easier this way
    type DB a = forall m. LogIO m => ReaderT MongoContext m a
    type DBM m a = LogIO m => ReaderT MongoContext m a

The basic type signature is just DB () (no constraints required).
For working with different monad stacks, you can use DBM.
If you are using conduits, you will have MonadResource m => DBM m ().
Here is another example:

    class Monad m => HasApp m where
        getApp :: m App
    instance HasApp Handler where
        getApp = getYesod
    instance HasApp hasApp => HasApp (ReaderT MongoContext hasApp) where
        getApp = lift $ getApp
    instance MonadIO m => HasApp (ReaderT App m) where
        getApp = ask

    -- | synonym for DB plus HasApp operations
    type DBApp a = forall m. HasApp m => DBM m a
    type DBAppM m a = HasApp m => DBM m a

With this pattern our return type signature is always ReaderT MongoContext m, and we are changing m as needed. A different approach is to have a return type signature of m and to place a MonadReader constraint on it:

    type Mongo m = (LogIO m, MonadReader MongoContext m)

Right now this approach requires using a call to
Database.MongoDB.liftDB around each database call, but I am sure there are approaches to dealing with that. One approach would be to wrap every persistent "primitive" with liftDB.

Mon, 29 Sep 2014 23:51:38 +0000

Structure and Efficiency of Computer Programs
http://existentialtype.wordpress.com/2014/09/28/structure-and-efficiency-of-computer-programs/
For decades my colleague, Guy Blelloch, and I have been promoting a grand synthesis of the two “theories” of computer science, combinatorial theory and logical theory. It is only a small exaggeration to say that these two schools of thought operate in isolation. The combinatorial theorists concern themselves with efficiency, based on hypothetical translations of high-level algorithms to low-level machines, and have no useful theory of composition, the most important tool for developing large software systems. Logical theorists concern themselves with composition, emphasizing the analysis of the properties of components of systems and how those components may be combined; the entire subject of logic is a theory of composition (entailment). But relatively scant attention is paid to efficiency, and, to a distressingly large extent, the situation is worsening, rather than improving.
Guy and I have been arguing, through our separate and joint work, for the applicability of PL ideas to algorithms design, leading, for example, to the concept of adaptive programming that has been pursued aggressively by Umut Acar over the last dozen years. And we have argued for the importance of cost analysis, for various measures of cost, at the level of the code that one actually writes, rather than in terms of how it is supposedly compiled. Last spring, after initial discussions with Anindya Banerjee at NSF last winter, I decided to write a position paper on the topic, outlining the scientific opportunities and challenges that would arise from an attempt to unify the two, disparate theories of computing. The first draft was circulated privately in May, and was revised, very lightly, in July in preparation for a conference call sponsored by NSF among a group of leading researchers in both PL’s and Algorithms with the goal to find common ground and isolate key technical challenges.
I am delighted (and relieved) to see that Swarat Chaudhuri, in a post on his PL Enthusiast blog, has heartily endorsed our proposal for a grand synthesis of the “two theories” of Computer Science. During his visit, Swarat had lengthy discussions with Umut, Guy, and me on our work in both research and education, but we were surprised and disheartened by his opposition to our approach. His concern was based on the common misapprehension that it is impossible to give a useful cost model for the lambda calculus, which would thereby undermine the entire body of work that Guy and I, among others, have been pursuing for decades. Coming from such a distinguished researcher as Chaudhuri, his opposition created for us a period of anxiety: could we be wrong? But no, it is simply not true. Guy and John Greiner provided an adequate cost model for the lambda calculus (under both a sequential and a parallel interpretation) in 1993, and that paper has withstood decades of scrutiny. But it did take quite some time to sort this out to be absolutely sure. For some mysterious reason, when it comes to the lambda calculus, nonsense gets twice around the world before the truth can get its boots on, to borrow a turn of phrase from Mark Twain. After some back-and-forth, the matter is settled, and I am delighted that we can now count Swarat among our supporters. It would have been a heavy burden for us to have to bear the opposition of a distinguished researcher such as himself to the very foundations of our proposed program.
Which is not to say that there are not serious obstacles to be overcome if such a grand synthesis is to be accomplished. The first step is to get the right people together to discuss the issues and to formulate a unified vision of what are the core problems, and what are promising directions for the short- and long-term. To this end there is likely to be a workshop held during the next academic year to start addressing these problems at a scientific level. Contrary to what is implied in the PL Enthusiast post, my position paper is not a proposal for funding, but is rather a proposal for a scientific meeting designed to bring together two largely (but not entirely) disparate communities. This summer NSF hosted a three-hour long conference call among a number of researchers in both areas with a view towards formulating a workshop proposal in the near future. Please keep an eye out for future announcements. I think there are many good problems to be considered, and many opportunities for new results.
I would like to mention that I am particularly grateful to Anindya Banerjee at NSF for initiating the discussion last winter that led to my position paper, and for helping to move forward the process of formulating a workshop proposal. And I am very happy to learn of Swarat’s endorsement; it will go a long way towards helping attract interest from both sides of the great divide.Filed under: Research Tagged: algorithms, programming languages, researchRobert Harperhttp://existentialtype.wordpress.com/?p=1011Mon, 29 Sep 2014 00:11:55 +0000Abstract Types are Existential
http://jozefg.bitbucket.org/posts/2014-09-29-abstraction-existentials.html
http://jozefg.bitbucket.org/posts/2014-09-29-abstraction-existentials.htmlMon, 29 Sep 2014 00:00:00 +0000sml-family.org up and running!
http://existentialtype.wordpress.com/2014/09/26/sml-family-org-up-and-running/
After far too long, and far too many obstacles to be overcome, Dave MacQueen, Lars Bergstrom, and I have finally prepared an open-source site for the entire family of languages derived from Standard ML. The defining characteristic of Standard ML has always been that it has a mathematically precise definition, so that it is always clear what is a valid program and how it should behave. And indeed we have seven different working compilers, all of which are compatible with each other, with the exception of some corner cases arising from known mistakes in the definition. Moreover, there are several active projects developing new variations on the language, and it would be good to maintain the principle that such extensions be precisely defined.
To this end the sources of the 1990 and 1997 versions of the definition are on the web site, with the permission of MIT Press, as is the type-theoretic definition formulated by Chris Stone and me, which was subsequently used as the basis for a complete machine-checked proof of type safety for the entire language done by Karl Crary and Daniel K. Lee. It is to be hoped that the errors in the definition (many are known; we provide links to the extensive lists provided by Kahrs and Rossberg in separate investigations) may now be corrected by a community process. Anyone is free to propose an alteration to be merged into the main branch, which is called “SML, The Living Language” and also known as “Successor ML”. One may think of this as the third edition of the definition, but one that is in continual revision by the community. Computer languages, like natural languages and like mathematics, belong to us all collectively, and we all contribute to their evolution.
Everyone is encouraged to create forks for experimental designs or new languages that enrich, extend, or significantly alter the semantics of the language. The main branch will be for generally accepted corrections, modifications, and extensions, but it is to be expected that completely separate lines of development will also emerge.
The web site, sml-family.org is up and running, and will be announced in various likely places very soon.
Update: We have heard that some people get a “parked page” error from GoDaddy when accessing sml-family.org. It appears to be a DNS propagation problem.
Update: The DNS problems have been resolved, and I believe that the web site is stably available now as linked above.Filed under: Research Tagged: functional programming, smlRobert Harperhttp://existentialtype.wordpress.com/?p=1004Fri, 26 Sep 2014 19:06:44 +0000GHCID - a new GHCi based IDE (ish)
http://neilmitchell.blogspot.com/2014/09/ghcid-new-ghci-based-ide-ish.html
Neil Mitchelltag:blogger.com,1999:blog-7094652.post-3605225370963117057Sat, 27 Sep 2014 20:46:00 +0000Reading highlights this week
http://wellquite.org/blog/reading_highlights_this_week.html
http://wellquite.org/blog/reading_highlights_this_weekSat, 27 Sep 2014 14:08:00 +0000Generating Open 3D Viewer Models
http://neilmitchell.blogspot.com/2014/09/generating-open-3d-viewer-models.html
Neil Mitchelltag:blogger.com,1999:blog-7094652.post-6245597356248178418Wed, 24 Sep 2014 16:00:00 +0000Announcing: Heist v0.14
http://snapframework.com/blog/2014/09/24/heist-0.14-released
The Snap team is happy to announce the release of version 0.14 of Heist.

Major Changes

Namespace Support

Heist now has support for namespaces. This allows you to configure Heist so that all of your splices require a namespace. Requiring a namespace allows Heist to be more aggressive with errors for unbound splices. For instance, imagine you set the hcNamespace field in your HeistConfig to “h”, and you bind two splices:

mySplices = do
  "foo" #! fooSplice
  "bar" #! barSplice

With this setup, you would put the “h” namespace on all of the splice tags in your templates. Instead of calling those splices with <foo> and <bar>, you would use <h:foo> and <h:bar>. So why go to all this trouble just so you have to type more? Because it allows Heist to do more error checking. Without namespaces there is no way for Heist to know whether a tag is supposed to be a splice or not. We could use the list of valid HTML tags to infer it, but that significantly constrains the scope of what we intended for Heist. This approach allows the user to be explicit about which tags should be splices. If you do not want to use namespaces, set the namespace to the empty string.

Along with the namespace field, we introduced the hcErrorNotBound field for controlling the error checking. When hcErrorNotBound is True, Heist will generate an error message if it encounters any namespaced tags that do not have splices bound for them. This eliminates a large class of bugs where users were using a splice in their templates, but forgot to bind it in their code. The intent is that users will usually want to have this error checking turned on. But we felt it was also important to be able to use namespacing without the strict enforcement, so we included this flag to give users full control.

Generalized Error Reporting

Since this release is doing more error checking, we decided to expose error facilities to the user. This release exposes a new function tellSpliceError that you can use when error conditions are detected in your splices. If you are using compiled Heist, then all your templates will be processed up front at load time. If any of your load time or compiled splices detect an error condition that the user needs to fix, you can call tellSpliceError with an error message. If there are any errors, Heist initialization will fail and all the errors will be returned.

Restructured HeistConfig

The addition of hcNamespace and hcErrorNotBound to HeistConfig required some restructuring.
Previously HeistConfig had a Monoid instance, but we removed that since the new fields make it unclear which instance should be used. But we also did not want to get rid of the monoid functionality completely. So in order to get the best of both worlds, we refactored all of HeistConfig’s previous fields into another data structure called SpliceConfig. This way we can keep the Monoid instance for SpliceConfig and still avoid imposing a weird set of semantics on the user.

Unfortunately, given the use of field accessors, it was impossible to make this change without breaking backwards compatibility. What seems like it should have been a simple addition of a couple of parameters ended up being a more significant refactoring. To make these kinds of changes easier in the future, Heist now exports lenses for all of the HeistConfig fields, as well as an emptyHeistConfig value to use as a starting point. These lenses work with both the lens and lens-family-core packages, and we export them without incurring a dependency on either lens package.

The HeistConfig constructor and field accessors have been moved to the Heist.Internal.Types module, so if you really need them, they are still available. However, be aware that Internal modules are subject to change without the deprecation cycle that we use for other modules.

Minor Improvements

Factored out the SpliceAPI module into the separate map-syntax package and deprecated the old module, which will be removed in the next major release. Snap has been updated to support Heist 0.14.Doug Beardsleyhttp://snapframework.com/blog/2014/09/24/heist-0.14-releasedWed, 24 Sep 2014 10:10:00 +0000Moving to Hakyll
http://therning.org/magnus/archives/1214
I’ve not written much here lately. Partly it’s because I’ve been busy, but mostly it’s because I find the whole experience of logging in and writing in a crappy editor (i.e. it’s not Vim) so irritating that I don’t want to write. So in an attempt to rekindle my will to write, I’m switching to using Hakyll.
As anyone who’s had a look at Hakyll knows, its out-of-the-box experience is rather bare bones. In other words, it’ll probably take a little while before I have a new site, and even longer before I have a pretty one.Magnushttp://therning.org/magnus/?p=1214Tue, 23 Sep 2014 20:58:10 +0000Haskell courses and Haskell eXchange
http://www.well-typed.com/blog/98
In the beginning of October, my colleague Adam Gundry and I will spend a full week in London again for Haskell-related activities: on Monday and Tuesday (October 6–7), we will teach Fast Track to Haskell, a two-day introductory course to Haskell, targeted at people who want to get started with Haskell or learn more about functional programming in general. On Wednesday (October 8), there’s the Haskell eXchange, a one-day conference full of interesting talks on Haskell-related topics. On Thursday and Friday (October 9–10), we will look at more advanced Haskell concepts and programming patterns in Advanced Haskell.
All three events are still open for registration.
Haskell eXchange
The Haskell eXchange will take place in London for the third time now, and I’m happy to report that there’s going to be a really fantastic program again:
As is almost traditional by now, Simon Peyton Jones (Microsoft Research) himself will open the conference, this time with a talk on “Safe, zero-cost coercions in Haskell”.
This is followed by Jeremy Gibbons (University of Oxford) with “Categories for the Working Haskeller”, explaining to Haskell developers and people who are interested in Haskell what role category theory plays in Haskell and if/how categories can be helpful.
I’m very pleased that Bryan O’Sullivan (Facebook) has agreed to give a presentation at the Haskell eXchange this year. As the author of countless Haskell libraries (many of which are among the most used in the entire Haskell ecosystem), and being a co-author of the widely known O’Reilly book “Real World Haskell”, he’s certainly learned how to squeeze a bit of extra performance out of Haskell code when needed. He’s going to share his experiences and provide valuable advice in his talk.
After lunch, we’ll continue with a pair of talks looking at using Haskell for the development of RESTful web services from slightly different angles. Erik Hesselink (Silk) is going to present the rest framework, which makes it easy to develop and maintain REST APIs, independent of the underlying web framework you use. After that, Chris Dornan (Iris Connect) and Adam Gundry (Well-Typed) will talk about api-tools and in particular address the question of how you can solve the problem of schema migrations nicely.
Tim Williams and Peter Marks (both Barclays) will give a joint talk on Lucid, their in-house non-embedded DSL that is written in Haskell and has a mostly structural type system with interesting features such as row polymorphism and extensible records as well as extensible sum-types.
The talks will be concluded by Oliver Charles (Fynder), well-known for his tireless efforts in the “24 days of Hackage” series, who is going to show us how the use of GHC’s most advanced type system extensions helps him write better real-world code at his company.
After the talks, there’s going to be pizza and beer and an informal “Park Bench Discussion” where we can all discuss the questions that have been raised throughout the day in more detail.
I hope you’re as excited about this program as I am: I think there’s a fantastic range of topics covered, from language features and theoretical aspects, to practical advice for programmers, to interesting case studies of real-world code. Also, it’s an excellent opportunity to meet fellow developers interested in Haskell. If you’re working for a company using Haskell and are looking for new developers, this may be an excellent opportunity to recruit. On the other hand, if you’d like nothing more than a Haskell job, this is an opportunity to meet people who are actually using it in practice, and who may have a job for you or at least be able to give you advice on how to find one.
If you haven’t registered yet, please consider doing so! We’re looking forward to meeting you there.
Fast Track to Haskell and Advanced Haskell
These two successful courses have been offered on a regular basis since October 2012. They’re continuously updated to reflect the latest changes in the Haskell world, such as updates to the infrastructure, new features of the main Haskell compiler GHC, or exciting new libraries.
Both courses are hands-on, comprising a combination of lectures, interactive coding and exercises that the participants are supposed to work on alone or in teams, with the help of the teacher(s).
The Fast Track course is for people who know how to program, but have little or no experience in Functional Programming or Haskell. It teaches Haskell from scratch in just two days, covering important concepts such as datatypes, polymorphism, higher-order functions, type classes, and how IO works in Haskell, ending with an introduction to monads. It’s also interesting for people who are interested in learning about functional programming in general, because Haskell is a prime example of a functional programming language, and the course focuses on the important programming concepts more than on language peculiarities.
The Advanced Haskell course is for people who have some experience with Haskell, but want to learn more. We’re going to discuss (functional) data structures and their complexity, have a detailed look at how lazy evaluation works and how it is implemented, how to reason about performance and how to use various debugging tools. Somewhat depending on demand, there’s also the option to learn more about advanced programming patterns, such as monads, applicative functors and monad transformers, about concurrency and parallelism, or about the more advanced features of the Haskell type system such as GADTs and type families.
Being in the same week as the Haskell eXchange makes it possible to combine one of the courses (or even both) with the eXchange, where you can hear several other viewpoints and get to know more Haskellers!
Other courses
We’re offering additional training courses on-demand and on-site for companies in Europe, the US, or anywhere in the world on request. See our training page for more information.
HacBerlin
By the way, I’m also going to be at the Haskell Hackathon in Berlin this upcoming weekend. On Sunday, I’m going to give a talk on parser combinators. You can still register for the Hackathon, too. It’s free.andreshttp://www.well-typed.com/blog/98Tue, 23 Sep 2014 10:10:42 +0000The woes of multiple package versions
http://www.yesodweb.com/blog/2014/09/woes-multiple-package-versions
When I've answered the same question more than three times, it's usually time
to write up a blog post explaining the situation in more detail, and just link
people to that in the future. This is such a blog post.

Many people working with Haskell end up with one of these two classes of inexplicable GHC errors:

1. Cannot match ByteString with ByteString, with a whole bunch of package name and version junk as well.
2. SomeTransformerT is not an instance of MonadTrans, when the documentation clearly indicates that SomeTransformerT does define such an instance.

How can a ByteString not be a ByteString? Well, there are two ways I can think of. The first is that you're accidentally trying to mix up a strict ByteString with a lazy ByteString, whose types clearly don't unify. While that problem does pop up, in my experience most people figure that one out pretty quickly. By the time someone's asking a question on Stack Overflow/Reddit/haskell-cafe, it's usually much more insidious: there are two copies of bytestring on their system.

Imagine this situation: you install GHC 7.6, which ships with bytestring-0.10.0.2. You then install text via cabal install text. A few days later, someone mentions that bytestring-0.10.4.0 is out, and it's all new and shiny, so you go ahead and install it with cabal install bytestring. Everything works wonderfully, and life is good. Then you decide to experiment with text a bit, so you write the following program:

{-# LANGUAGE OverloadedStrings #-}
import Data.Text.Encoding (encodeUtf8)
import qualified Data.ByteString.Char8 as S8
main :: IO ()
main = S8.putStrLn $ encodeUtf8 "Hello World!"

Woe unto you! GHC rejects your program with:

foo.hs:6:22:
Couldn't match expected type `S8.ByteString'
with actual type `bytestring-0.10.0.2:Data.ByteString.Internal.ByteString'
In the return type of a call of `encodeUtf8'
In the second argument of `($)', namely `encodeUtf8 "Hello World!"'
In the expression: S8.putStrLn $ encodeUtf8 "Hello World!"

When is a ByteString not a ByteString? Here, apparently. Now it turns out
that GHC is actually giving you quite a bit of useful information, you just
need to know what to look for. It's expecting the type S8.ByteString, which
expands to Data.ByteString.Char8.ByteString, which in reality is just a type
synonym for Data.ByteString.Internal.ByteString. So what GHC really means is
that it can't unify the following two types:

expected: Data.ByteString.Internal.ByteString
actual: bytestring-0.10.0.2:Data.ByteString.Internal.ByteString

Now the difference just jumps out at you: the actual type comes from the
bytestring-0.10.0.2 package, whereas the first comes from... well, somewhere
else. As I'm sure you're guessing right now, that "somewhere else" is
bytestring-0.10.4.0, but GHC doesn't bother telling us that, since including
that level of information in every error message would be overwhelming. To
step through why this came up exactly:

1. text is installed against bytestring-0.10.0.2 (it was the only version of bytestring available at the time you installed text).
2. Therefore, encodeUtf8 will generate a ByteString value from version 0.10.0.2.
3. Your program imports Data.ByteString.Char8, which is provided by both bytestring-0.10.0.2 and bytestring-0.10.4.0.
4. GHC's default dependency resolution is: take the latest version of each package, in this case 0.10.4.0.
5. Now we have an S8.putStrLn function expecting a 0.10.4.0 ByteString, but an encodeUtf8 function returning a 0.10.0.2 ByteString.

So how do we work around this problem? I can think of three ways:

1. Explicitly tell GHC which version of the bytestring package you want to use to force consistency, e.g. runghc -package=bytestring-0.10.0.2 foo.hs.
2. Never use GHCi, runghc, or ghc directly from the command line. Instead, always create a cabal file first. cabal's default dependency resolution will force consistent package loading.
3. Don't wind up in the situation in the first place, by ensuring you only have one version of each package installed.

That last point is what I strongly recommend to all users. And this is exactly
the design goal around Stackage, so it will
hopefully not come as a surprise that that's exactly what I recommend most
Haskell users use to get their packages installed.

Let's demonstrate that second case of MonadTrans. This time, let's try it
with GHC 7.8.3. GHC ships with
transformers-0.3.0.0. Next, we'll install the either package with cabal
install either. Once again, someone comes along and tells us about a shiny new
package, transformers-0.4.1.0. Dutifully, we upgrade with cabal install
transformers-0.4.1.0. And then we try to run the following simple program:

import Control.Monad.Trans.Class
import Control.Monad.Trans.Either
main :: IO ()
main = do
  x <- runEitherT $ lift $ putStrLn "hey there!"
  print (x :: Either () ())

GHC mocks you with:

foo.hs:6:23:
No instance for (MonadTrans (EitherT ()))
arising from a use of ‘lift’
In the expression: lift
In the second argument of ‘($)’, namely
‘lift $ putStrLn "hey there!"’
In a stmt of a 'do' block:
x <- runEitherT $ lift $ putStrLn "hey there!"

"But EitherT is an instance of MonadTrans!" you insist. That may be true, but it's an instance of the wrong MonadTrans. The either package is built against transformers-0.3.0.0, whereas you've imported lift from transformers-0.4.1.0. This can be worked around as above, with runghc -package=transformers-0.3.0.0 foo.hs. And yet again, my strong recommendation is: use Stackage.

There's one more particularly painful thing I need to point out. Some packages
are bundled with GHC, and are depended on by the ghc package. The special
thing about the ghc package is that it cannot be reinstalled without installing
a new version of GHC itself. Any packages depended on by the ghc package
cannot be unregistered without breaking ghc, which would in turn break
libraries like doctest and hint. If you follow these points to their conclusion,
this means that you should never upgrade GHC-bundled libraries. I wrote a blog
post on this
topic, and the
takeaway is: please, always support older versions of packages like bytestring,
transformers, and, of course, base.

There's one final case I want to mention. Try running cabal install data-default-0.5.1 http-client, and then run the following program:

import Data.Default
import Network.HTTP.Client
main :: IO ()
main = withManager defaultManagerSettings $ \man -> do
  res <- httpLbs def man
  print res

You'll get the irritating error message:

foo.hs:6:20:
No instance for (Default Request) arising from a use of ‘def’
In the first argument of ‘httpLbs’, namely ‘def’
In a stmt of a 'do' block: res <- httpLbs def man
In the expression:
do { res <- httpLbs def man;
print res }

But if you look at http-client, Request is in fact an instance of
Default. "Alright, I know what's going on here" you say. Certainly there are
two versions of data-default installed, right? Actually, no, that's not the
case. Have a look at the following:

$ ghc-pkg list | grep data-default
data-default-0.5.1
data-default-class-0.0.1

There's just a single version of each of these packages available. So why are
we getting our mysterious error message? Once again, it's because we have two
versions of the Default class. After data-default version 0.5.1,
data-default split into a number of packages, and the Default class
migrated into data-default-class. http-client defines an instance for
Default from data-default-class. And if you use data-default version
0.5.2 or higher, it will simply re-export that same class, and everything will
work.

However, our cabal install command forced the installation of the older
data-default (0.5.1) which defines its own Default typeclass. Therefore, we
end up with two separate Default classes that don't unify. This is a problem
that exists whenever packages are split or joined, which is why you should
embark on such refactorings with great care.

As it happens, this is yet another problem that is solved by using Stackage,
since it forces a consistent set of versions for data-default and
data-default-class.http://www.yesodweb.com/blog/2014/09/woes-multiple-package-versionsTue, 23 Sep 2014 00:00:00 +0000Composable Queries in F# 3.0
http://wadler.blogspot.com/2014/09/composable-queries-in-f-30.html
James Cheney speaks to the F#unctional Londoners meet-up about our F# library that implements A Practical Theory of Language-Integrated Query.Philip Wadlertag:blogger.com,1999:blog-9757377.post-3135476871574238437Mon, 22 Sep 2014 21:52:00 +0000Hard Drive Failure
http://r6.ca/blog/20140921T175644Z.html
A few weeks ago my desktop computer suffered catastrophic hard drive failure.
Not only did it not boot, it soon got to the point where even the BIOS would fail to recognise the device.
At first I did not worry too much.
Although I was not doing automatic backups, I was still doing irregular weekly manual backups of my home directory with tarsnap and I had performed one about three days prior.
I was not specifically making backups of my NixOS system and user configuration, but I had some old copies.
The configuration files do not change much and they are less important.
Importantly, I had backups of my tarsnap keys stored in other places, such as my shell account.
While waiting for a replacement drive to arrive, I realized I had a serious problem.
My tarsnap keys were encrypted with my PGP key.
I had two specific places where I kept backup of my PGP keys.
One place was a USB drive in a safe deposit box.
However, at some point I had taken that one out to update it, and then misplaced it before putting it back.
Naturally, I had been meaning to get around to replacing that USB drive and the data on it, for some number of years.
Also, to my surprise, I had never actually placed my PGP key in my secondary spot.
I was sunk.
I had some very old hard drive images with older versions of my PGP key on it, but because I rotate my encryption keys every five years, they were not useful.
Within the last five years I had started using full disk encryption.
I had some newer hard drive images that also have my PGP keys but I need the passphrases to decrypt these images.
I had copies of the passphrase around, but, of course, they were all encrypted with my PGP keys.
After an emotional day and some meditation, slowly my old passphrase came back to me and I was able to decrypt one of my disk images.
I was able to rescue my PGP keys and from there I was able to recover everything I had.
I plan to get a bit more serious about distributing copies of my PGP key since I use it so widely.
With my PGP key I should be able to always recover everything since I keep all my other key material encrypted with it.
Instead of a single USB drive in a safe deposit box, I want to keep two identical USB keys, one at home and one in the box.
When I want to update the data, I will update the one at home, swap it with the one in the box, and update the second one and keep it at home until the next update is needed.
I have also gotten more serious about automatic backup.
Turns out that NixOS already comes with a tarsnap system service.
All that one has to do is place one’s write-only tarsnap key in the appropriate place and specify which directories to back up.
I am hoping to make system recovery even easier by also backing up my ~/.nix-profile/manifest.nix, /root/.nix-profile/manifest.nix, /nix/var/nix/profiles/default/manifest.nix, /nix/var/nix/profiles/per-user/*/profile/manifest.nix, /etc/nixos, and /var/lib. There are probably a few more things I should back up, like my user profiles, but I am not yet sure how best to do that.
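Enabling the tarsnap service takes only a few lines of NixOS configuration. A minimal sketch (the archive name and directory list here are invented for illustration; it assumes the write-only key generated with tarsnap-keygen sits at the module's default location, /root/tarsnap.key):

```
# configuration.nix fragment (hypothetical archive name "nightly")
services.tarsnap = {
  enable = true;
  archives.nightly = {
    directories = [ "/home" "/etc/nixos" "/var/lib" ];
  };
};
```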
I also want to restart my programme of escrow for my passwords in case something happens to me.
I need to improve the documentation of my password list to make it easier for others to use.
I will use ssss to split my master password and distribute the pieces among my escrow agents.
The nice thing about public-key cryptography is that I can assign escrow agents without requiring anything from them beyond the fact that they already possess and use PGP keys.
I do not even need to inform them that they are my escrow agents.
The encrypted components will be stored on my USB drive in the safe deposit box.
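For the curious, ssss implements Shamir's threshold secret sharing. The toy Python sketch below shows the underlying idea, and is illustrative only: it shares a small integer over a fixed prime field, whereas the real tool handles passphrases and is what should be used for anything serious.

```python
# Toy illustration of Shamir's (t, n) threshold scheme, the construction
# that the ssss tool implements. NOT for real use: the field size and
# randomness source are chosen only to keep the demo short.
import random

P = 2**61 - 1  # a prime large enough for a small integer secret

def split(secret, n, t):
    """Split secret into n shares; any t of them reconstruct it."""
    # A random degree-(t-1) polynomial with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def combine(shares):
    """Recover the secret by Lagrange interpolation at x = 0."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse of den, since P is prime.
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(424242, n=5, t=3)
assert combine(shares[:3]) == 424242   # any three shares suffice
assert combine(shares[2:]) == 424242
```

With the real tool, the command-line programs ssss-split and ssss-combine play the roles of the two functions here.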
Overall, I am glad to have narrowly avoided disaster and have definitely learned some lessons.
Check your backup policy everyone!http://r6.ca/blog/20140921T175644Z.htmlSun, 21 Sep 2014 17:56:44 +0000Postdoctoral Position in Programming Languages at KU Leuven
http://tomschrijvers.blogspot.com/2014/09/postdoctoral-position-in-programming.html
The Declarative Languages and Artificial Intelligence (DTAI) group of KU Leuven (Belgium) invites applicants for a postdoctoral position in the area of programming languages. This position has been created on the occasion of the new appointment of prof. Tom Schrijvers as research professor at KU Leuven. The position's aim is to reinforce the research activities in functional programming, logic programming and/or programming language theory.

To apply you must hold a recent PhD (or be about to graduate) in one of the above areas of programming languages. Candidates are expected to have high-quality publications in peer-reviewed conferences and journals.

The postdoc will work closely with prof. Schrijvers and his PhD students, participate in ongoing research activities and enjoy the freedom to develop new lines of research.

The position is for 2 x 1 year and can be further extended. The salary is competitive and the starting date negotiable. Moreover, KU Leuven's policy of equal opportunities and diversity applies to this position.

Please send your application to prof. Tom Schrijvers (tom dot schrijvers at cs dot kuleuven dot be) by October 15, 2014. Your application should contain:
- A cover letter explaining your interest in the position.
- Your curriculum vitae.
- A short research statement (max. 3 pages).
- The names and contact details of three people who can, if asked, write letters of reference.Tom Schrijverstag:blogger.com,1999:blog-5844006452378085451.post-6921410817311972203Mon, 22 Sep 2014 15:25:00 +0000Using my Kobo eBook reader as an external eInk monitor
http://www.joachim-breitner.de/blog/660-Using_my_Kobo_eBook_reader_as_an_external_eInk_monitor
Joachim Breitnerhttp://www.joachim-breitner.de/blog/660-Using_my_Kobo_eBook_reader_as_an_external_eInk_monitorSun, 21 Sep 2014 20:11:07 +0000Introduction to Dependent Types: Off, Off to Agda Land
http://jozefg.bitbucket.org/posts/2014-09-21-what-are-dep-types-2.html
http://jozefg.bitbucket.org/posts/2014-09-21-what-are-dep-types-2.htmlSun, 21 Sep 2014 00:00:00 +0000shell-conduit: Write shell scripts in Haskell with Conduit
http://chrisdone.com/posts/shell-conduit
http://chrisdone.com/posts/shell-conduitSun, 21 Sep 2014 00:00:00 +0000Concurrency, Actors, Locks and mailboxes
http://wellquite.org/blog/actors_locks_and_mailboxes.html
http://wellquite.org/blog/actors_locks_and_mailboxesSat, 20 Sep 2014 14:02:00 +0000Welcome
http://wellquite.org/blog/welcome.html
http://wellquite.org/blog/welcomeSat, 20 Sep 2014 12:32:00 +0000First post!
http://wellquite.org/blog/firstpost.html
http://wellquite.org/blog/firstpostSat, 20 Sep 2014 10:34:00 +0000Rust lifetimes: Getting away with things that would be reckless in C++
http://www.randomhacks.net/2014/09/19/rust-lifetimes-reckless-cxx/
http://www.randomhacks.net/2014/09/19/rust-lifetimes-reckless-cxx/Sat, 20 Sep 2014 03:07:23 +0000hindent: A Haskell indenter
http://chrisdone.com/posts/hindent
http://chrisdone.com/posts/hindentSat, 20 Sep 2014 00:00:00 +0000Formatting in Haskell
http://chrisdone.com/posts/formatting
http://chrisdone.com/posts/formattingSat, 20 Sep 2014 00:00:00 +0000A new Haskell IDE!
http://jpmoresmau.blogspot.com/2014/09/a-new-haskell-ide.html
Well, that title is click-bait. It's only a proof of concept so far (-: , sorry!

I wanted to play with Threepenny-GUI since it allows doing UI the FRP way, without the trouble of getting a proper UI library to work. And I was going through a period of frustration with EclipseFP, so I thought about something else for a while... It's got the romantic name of HWIDE!

So in fact, this is a very simple integration of CodeMirror and Threepenny-GUI, to be able to edit Haskell sources inside your browser. When you save your buffer, the data is written to disk, and if a cabal file can be found for that source file, a cabal build (in a sandbox) is attempted (with a configure if needed). The output is then parsed by code ripped off BuildWrapper, and the errors/warnings are displayed, so you can click on them and see the offending line in the code.

That's all, really, and even that is not 100% perfect, but it's a start. I got to play a bit with Events and Behaviors, and develop some little widgets. If some FRP experts want to have a look at the code and offer some advice, I'd be all ears!

I probably won't have much time to turn this into the next-generation Haskell IDE, but please fork and hack to your heart's content! The repository is at https://github.com/JPMoresmau/dbIDE.

Happy Haskell Hacking!JP Moresmautag:blogger.com,1999:blog-37404288.post-7208571166364166916Fri, 19 Sep 2014 13:15:00 +0000Derailing
http://conway.rutgers.edu/~ccshan/wiki/blog/posts/Derailing/
http://conway.rutgers.edu/~ccshan/wiki/blog/posts/Derailing/Thu, 18 Sep 2014 17:33:16 +0000Learning Scala via unit tests
http://jpmoresmau.blogspot.com/2014/09/learning-scala-via-unit-tests.html
Last month I followed a big data training session, and we used Scala to write algorithms for Spark. I had looked at Scala some years ago, but I think at the time the Eclipse support wasn't very good (the pot calling the kettle black, eh?), and it piqued my curiosity again. So I started looking at tutorials, and found Scala Labs. The idea is interesting: you get a project with sources and tests, and you need to make all the tests pass. The source code for both the code to change and the unit tests is heavily documented, and guides you through the language nicely. And seeing the green ticks appear always triggers the proper reward areas of my brain (-: I had a little issue getting the code to compile in Eclipse, since there's a sbt-launch-0.13.0.jar library at the root of the project: Eclipse used it as the top library, and it contained a weird version of the List class that wasn't genericized! I removed that jar from the class path and everything worked fine. I'm not aware of a similar tutorial for Haskell, but it would be a good idea!JP Moresmautag:blogger.com,1999:blog-37404288.post-3519106201960005903Thu, 18 Sep 2014 17:07:00 +0000Non-diffusive atmospheric flow #6: principal components analysis
http://www.skybluetrades.net/blog/posts/2014/09/18/data-analysis-ao1-6/index.html
http://www.skybluetrades.net/blog/posts/2014/09/18/data-analysis-ao1-6/index.htmlThu, 18 Sep 2014 13:39:03 +0000Raw system calls for Rust
http://mainisusuallyafunction.blogspot.com/2014/09/raw-system-calls-for-rust.html
I wrote a small library for making raw system calls from Rust programs. It provides a macro that expands into in-line system call instructions, with no run-time dependencies at all. Here's an example:

#![feature(phase)]
#[phase(plugin, link)]
extern crate syscall;

fn write(fd: uint, buf: &[u8]) {
    unsafe {
        syscall!(WRITE, fd, buf.as_ptr(), buf.len());
    }
}

fn main() {
    write(1, "Hello, world!\n".as_bytes());
}

Right now it only supports x86-64 Linux, but I'd love to add other platforms. Pull requests are much appreciated. :)keegantag:blogger.com,1999:blog-1563623855220143059.post-760465688770961358Thu, 18 Sep 2014 00:33:00 +0000The military and the referendum.
http://wadler.blogspot.com/2014/09/the-military-and-referendum.html
Many readers will have heard about Lord Dannat in the Telegraph arguing a vote for independence will dishonour Scotland's war dead.Perhaps not as many will have heard that Jimmy Sinclair (the oldest surviving Desert Rat, aged 102), Colin May (Lieutenant Commander, Faslane), and sixteen others have written a letter slamming Dannat; at least, I didn't hear until this morning. "How dare he take their sacrifice in vain and try to turn it to political advantage?"Both sides are reported by the BBC, though the headline mentions only one. (More #bbcbias?)Philip Wadlertag:blogger.com,1999:blog-9757377.post-3822105533910406177Wed, 17 Sep 2014 20:41:00 +0000Deploying Rust applications to Heroku, with example code for Iron
http://www.randomhacks.net/2014/09/17/deploying-rust-heroku-iron/
http://www.randomhacks.net/2014/09/17/deploying-rust-heroku-iron/Wed, 17 Sep 2014 20:35:11 +0000The case for Europe
http://wadler.blogspot.com/2014/09/the-case-for-europe.html
Readers of this list will know that I don't always see eye-to-eye with Alex Salmond. Nonetheless, I think he put the case for Europe well this morning on the Today Program.

In a BBC interview just the other night, a Spanish minister being interviewed by Kirsty Wark, despite being invited three or four times, refused to say that Spain would attempt to veto Scottish membership. And the reason for that of course is that the Spanish government have already said that in the circumstance of a consented democratic referendum, as they put it, Spain would have nothing to say about it.

We can go through legal opinion and expert opinion as much as we like. I think the answer is in four figures: 1, 20, 25, and 60.

1% is the percentage of Scotland's population compared to the European Union.
20% is the percentage of the fish stocks of the entire European Union.
25% is the percentage of the renewable energy of the entire European Union offshore.
And 60% is the oil reserves that Scotland has.

Anyone who believes that a country [with these resources] is not going to be welcome in the wider Europe doesn't understand the process by which Europe accepts democratic results, and that Scotland has a huge amount of attractiveness to the rest of the European continent.

You can hear the original here; the relevant segment starts at around 8:00.Philip Wadlertag:blogger.com,1999:blog-9757377.post-8017186718351241024Wed, 17 Sep 2014 20:01:00 +0000Beginner error messages in C++ vs Haskell
http://izbicki.me/blog/error-messages-in-ghc-vs-g%2B%2B.html
http://izbicki.me/blog/error-messages-in-ghc-vs-g%2B%2B.htmlWed, 17 Sep 2014 00:00:00 +0000Towards Shake 1.0
http://neilmitchell.blogspot.com/2014/09/towards-shake-10.html
Neil Mitchelltag:blogger.com,1999:blog-7094652.post-967563694980911478Tue, 16 Sep 2014 20:54:00 +0000A challenge for a better community
http://conway.rutgers.edu/~ccshan/wiki/blog/posts/lambda4ada/
http://conway.rutgers.edu/~ccshan/wiki/blog/posts/lambda4ada/Tue, 16 Sep 2014 16:09:11 +0000Scotland: Vote No
http://existentialtype.wordpress.com/2014/09/15/scotland-vote-no/
So far I’ve ignored the back and forth on the Scottish referendum on secession from the United Kingdom, but this weekend I decided that it was past time for me to sort it out. For those of you who don’t know me, I’ll mention that I lived for 3.5 years in Scotland quite some time ago, so I am not completely ignorant of the cultural and political issues that underlie the debate. As a rule my political views are very much in line with those of the average Scot, solidly Labour Party back in the day when people like Derek Hatton and Ken Livingstone and Roy Hattersley and Tony Benn defined what that meant. Despite Tony Blair’s slimy “third way” nonsense, and his toadying up to Dick “Dick” Cheney’s sock puppet to help lie us into the Iraq war, Scotland in national politics remains solidly Labour; practically every Scottish seat is a Labour seat.
Although I used to be so up on British politics that I could read and enjoy Private Eye, it’s been a long while since I’ve paid more than scant attention to what’s been going on there, apart from noting that The Scotsman was one of the few sources of truth about the Iraq War back when it really mattered. The Scots have spines.
I’m no historian, but I do have basic understanding of Scottish history, particularly as regards the English, and am very familiar with the Scottish concept of valor in glorious defeat. I understand full well that practically every Scotsman harbors some resentment towards the English for centuries of injustices, including the highland clearances, and, more recently, the appropriation of the oil in Scottish territory for the scant benefit of the Scots themselves. And I am well aware of the bravery and sacrifice that so many Scots made fighting against the Axis during World War II.
My home institution, Carnegie Mellon University, was founded by a Scotsman from Kirkcaldy, just across the spectacular Forth Bridge from Edinburgh. Carnegie was born into penury and died as the wealthiest man on earth, far wealthier relative to GDP than Gates by a wide margin. Carnegie was extraordinary, but the Scots in general punch far above their weight class in all things, especially industrious self-reliance.
In short, I love Scotland, and consider it to be a second home. (OK, the weather is appalling, but we’ll set that aside for the time being.)
Emotionally, I am deeply sympathetic to the Scottish independence movement. I know full well how poorly the U.K. treats Scotland and its interests. Politics in the UK revolves around the “home counties” in the south of England; the terminology tells you all you need to know. One time while watching the weather report on the BBC, the national broadcasting network, the announcer said that there was some horrendous weather coming our way, but that “it’ll mostly be up in Scotland, though”. Though. Though.
But I urge all my Scottish friends to vote NO on the independence proposal. It makes no sense whatsoever in its present form, and represents to me a huge scam being perpetrated by the SNP to seize power and impose policies that nearly every Scot, judging from their voting record over decades and decades, would oppose. The whole movement seems driven by the powerful urge to finally stick it to the English and get their country back, and Salmond is exploiting that to the hilt. Back when I lived in Scotland I looked into the SNP, because even then I had separatist sympathies, but when I did, it was obvious why they had so few backers. They’re just Tories without the class structure, more akin to our Tea Party lunatics than to the British Conservatives, and steadfastly opposed to policies, such as well-funded public education, that nearly all Scots support, and determined to follow the post-cold war Slovakian model of slashing taxes on the wealthy in the hope of attracting business to the country. Having not followed Scottish politics for so long, it is astonishing to me that the SNP has managed to gain a majority in the Scottish Parliament, while the voting pattern at the national level has not changed at all. How did this happen? From my position of ignorance of the last decade or so of politics in Scotland, it looks as though Salmond is a slick operator who has pulled off a colossal con by exploiting the nationalist tendencies that lie within every Scot.
But never mind Salmond, the main reason that Scots must vote NO on the referendum is that it proposes to keep the English pound as Scotland’s national currency! This is such a preposterous idea that I can only suspect dishonesty and deceit, because no sane political leader of honest intent could ever voluntarily place his or her country’s economic future in the hands of another. The Bank of England will, particularly after separation, have no interest whatsoever in the economic conditions in Scotland when determining its policies on the pound. And the Bank of Scotland will have no ability to control its own currency, the prime means of maintaining economic balance between labor and capital. The Scots will, in effect, be putting themselves on a gold standard, the stupidest possible monetary system, so that, in a crisis, they will have to buy or borrow pounds, at interest, in emergency conditions, to deal with, say, the failure of the Royal Bank of Scotland (but don’t worry, that sort of thing can never happen again). And the Bank of Scotland will have no means of stimulating the economy in a demand slump other than borrowing pounds from somewhere outside the country, rendering themselves in debt beyond their means. And this will become an excuse for dismantling the social system that has been so important to elevating the Scots from poverty to a decent standard of living within one or two generations. Just look at the poor PIGS in the Euro-zone being pushed around by Germany, especially, to satisfy the conveniences of the German bankers, and to hell with the living, breathing souls in Greece or Spain or Ireland or Portugal, to name the canonical victims.
A country that does not control its own currency is not independent and cannot be independent. It’s an illusion. Just what Salmond’s true intentions are is not entirely clear to me, but on the basis of his monetary policies alone, I implore my Scottish friends to suppress the natural wish to make a statement of pride, and instead do the sensible thing. The proposal to be voted on this week is not a spittle on the Heart of Midlothian, it is an irrevocable decision to place Scotland in an even worse position with respect to England than it already is in.
Listen to reason. Vote NO on independence.Filed under: Research Tagged: Scottish referendumRobert Harperhttp://existentialtype.wordpress.com/?p=998Mon, 15 Sep 2014 15:53:23 +0000GHC Weekly News - 2014/09/15
http://ghc.haskell.org/trac/ghc/blog/weekly20140915
thoughtpolicehttp://ghc.haskell.org/trac/ghc/blog/weekly20140915Mon, 15 Sep 2014 13:47:29 +0000Comparing AVL Trees in C++ and Haskell
http://izbicki.me/blog/avl-tree-runtimes-c%2B%2B-vs-haskell.html
http://izbicki.me/blog/avl-tree-runtimes-c%2B%2B-vs-haskell.htmlMon, 15 Sep 2014 00:00:00 +0000The British Biased Corporation
http://wadler.blogspot.com/2014/09/the-british-biased-corporation.html
Scandalous! Nick Robinson asks Alex Salmond a question, and Salmond takes seven minutes to answer in detail. On the evening news, Nick Robinson summarises Salmond's answer in a few seconds as 'He didn't answer'. (Above spotted via Arc of Prosperity.)And today, this.I used to be a supporter of the BBC, but it's getting harder and harder to justify.Philip Wadlertag:blogger.com,1999:blog-9757377.post-4093290279726999660Sun, 14 Sep 2014 21:55:00 +0000Krugman vs. Stiglitz, now with added Stiglitz
http://wadler.blogspot.com/2014/09/krugman-vs-stiglitz-now-with-added.html
My last post quoted Joe Stiglitz, indirectly, to refute Paul Krugman's fear mongering. Now the man himself has spoken in the Sunday Herald.As Scotland contemplates independence, some, such as Paul Krugman, have questioned the "economics".Would Scotland, going it alone, risk a decline in standards of living or a fall in GDP? There are, to be sure, risks in any course of action: should Scotland stay in the UK, and the UK leave the EU, the downside risks are, by almost any account, significantly greater. If Scotland stays in the UK, and the UK continues in its policies which have resulted in growing inequality, even if GDP were slightly larger, the standards of living of most Scots could fall.Cutbacks in UK public support to education and health could force Scotland to face a set of unpalatable choices - even with Scotland having considerable discretion over what it spends its money on.But there is, in fact, little basis for any of the forms of fear-mongering that have been advanced. Krugman, for instance, suggests that there are significant economies of scale: a small economy is likely, he seems to suggest, not to do well. But an independent Scotland will still be part of Europe, and the great success of the EU is the creation of a large economic zone.Besides, small political entities, like Sweden, Singapore, and Hong Kong have prospered, while much larger entities have not. By an order of magnitude, far more important is pursuit of the right policies.Another example of a non-issue is the currency. There are many currency arrangements that would work. Scotland could continue using sterling - with or without England's consent.Because the economies of England and Scotland are so similar, a common currency is likely to work far better than the euro - even without shared fiscal policy. 
But many small countries have managed to have a currency of their own - floating, pegged, or "managed."Philip Wadlertag:blogger.com,1999:blog-9757377.post-8376378768845102308Sun, 14 Sep 2014 17:08:00 +0000ghc 7.8.3 and rare architectures
https://gentoohaskell.wordpress.com/2014/09/13/ghc-7-8-3-and-rare-architectures/
After some initially positive experience with ghc-7.8-rc1 I decided to upstream most of the Gentoo fixes.
On rare arches ghc-7.8.3 behaves a bit badly:
the ia64 build stopped being able to link itself after ghc-7.4 (gprel overflow)
on sparc, ia64 and ppc ghc was not able to create working shared libraries
the integer-gmp library crashed on ia64, and we had to use integer-simple
I have written a small story of those fixes here if you are curious.
TL;DR:
To get ghc-7.8.3 working nicely on exotic arches you will need to backport at least the following patches:
integer literals trimming
incorrect -fPIC passing to the assembler
fixed foreign export prototype
Thank you!Sergei Trofimovichhttp://gentoohaskell.wordpress.com/?p=119Sat, 13 Sep 2014 09:03:48 +0000unsafePerformIO and missing NOINLINE
https://gentoohaskell.wordpress.com/2014/06/16/unsafeperformio-and-missing-noinline/
Two months ago Ivan asked me if we had a working darcs-2.8 for ghc-7.8 in Gentoo. We had a workaround to compile darcs by then, but darcs did not work reliably. Sometimes it needed 2-3 attempts to pull a repository.
A bit later I decided to actually look at the failure case (filed on the darcs bugtracker) and do something about it. My plan for debugging the mystery was simple: reproduce the difference on the same source with ghc-7.6 and ghc-7.8, and keep adding debug output until a difference I could understand popped up.
Darcs has a great debug-verbose option for most commands. I used the debugMessage function to litter the code with more debugging statements until a complete (horrible) picture emerged.
As you can see in the bugtracker issue, I posted various intermediate guesses there at what I thought went wrong (don’t expect those comments to make much sense).
The immediate symptom of the breakage was an overwrite of a partially downloaded file. The event timeline looked simple:
darcs scheduled the same file for download twice (two jobs in the download queue)
the first download job finished
the notified waiter started processing the downloaded temp file
the second download started, truncating the previous complete download
the notified waiter continued processing the now partially downloaded file and detected the breakage
So first I decided to fix the symptom. That did not fix the problem completely; sometimes darcs pull still complained about remote repositories being broken (missing files), but it made the errors saner (only the remote side was allegedly at fault).
Ideally, that file overwrite should not happen in the first place. Part of the problem was temp-file name predictability.
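One way to avoid predictable temp-file names is to let the runtime pick a fresh name and rename the file into place only once it is complete. This is only a sketch of the general technique, not darcs's actual fix; downloadTo and the file names here are hypothetical:

```haskell
import System.IO (openTempFile, hPutStr, hClose)
import System.Directory (renameFile)
import System.FilePath (takeDirectory)

-- Write to a uniquely named temp file in the destination's directory,
-- then rename it into place. A second concurrent download gets its own
-- temp file, so it can no longer truncate a completed one.
downloadTo :: FilePath -> String -> IO ()
downloadTo dest contents = do
  (tmpPath, h) <- openTempFile (takeDirectory dest) "partial-"
  hPutStr h contents
  hClose h
  renameFile tmpPath dest

main :: IO ()
main = do
  downloadTo "./out.txt" "hello"
  readFile "./out.txt" >>= putStrLn
```

The rename step also gives atomicity on POSIX filesystems, so readers never observe a half-written file under the final name.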
But, OK. Then I started digging into why the 7.6/7.8 download request patterns were so severely different. At first I suspected the new IO manager as the cause of the difference. The paper says it fixed a Haskell thread scheduling issue (the paper is nice even for leisure reading!):
GHC’s RTS had a bug in which yield placed the thread back on the front of the run queue. This bug was uncovered by our use of yield which requires that the thread be placed at the end of the run queue
So I expected the bug to come from that side.
Then, determined to dig A Lot into the darcs source code, I decided to disable optimizations (-O0) to speed up rebuilds. And the bug vanished.
That made it click: unsafePerformIO might be the real problem. I grepped for all unsafePerformIO instances and examined all definition sites.
Two were especially interesting:
-- src/Darcs/Util/Global.hs
-- ...
_crcWarningList :: IORef CRCWarningList
_crcWarningList = unsafePerformIO $ newIORef []
{-# NOINLINE _crcWarningList #-}
-- ...
_badSourcesList :: IORef [String]
_badSourcesList = unsafePerformIO $ newIORef []
{- NOINLINE _badSourcesList -}
-- ...
Did you spot the bug?
Thus The Proper Fix was pushed upstream a month ago. The underlying cause: ghc is now able to inline things more aggressively, and _badSourcesList was inlined at all use sites, throwing away all the update sites.
I don’t know whether those newIORef [] calls could be CSEd together if the types had the same representation. Ideally the module also needs -fno-cse, or should get rid of unsafePerformIO completely :].
(Side thought: top-level global variables in C style are surprisingly non-trivial in "pure" haskell. They are easy to use via peek / poke (in a racy way), but are hard to declare / initialize.)
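For reference, here is the corrected pattern in a minimal, runnable sketch (the main exercising it is my addition). Note the # on both comment brackets of the pragma:

```haskell
import Data.IORef (IORef, newIORef, modifyIORef, readIORef)
import System.IO.Unsafe (unsafePerformIO)

-- A real {-# NOINLINE #-} pragma keeps this as one shared IORef.
-- With a mere {- NOINLINE ... -} comment, GHC is free to inline the
-- unsafePerformIO call at every use site, so each site would get its
-- own fresh IORef and updates would be lost.
_badSourcesList :: IORef [String]
_badSourcesList = unsafePerformIO (newIORef [])
{-# NOINLINE _badSourcesList #-}

main :: IO ()
main = do
  modifyIORef _badSourcesList ("http://bad.example" :)
  xs <- readIORef _badSourcesList
  print (length xs)  -- the update above is visible: prints 1
```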
I wondered how many Haskell packages manage to misspell GHC pragma declarations the way darcs did. And there still _are_ a few such offenders:
$ fgrep -R NOINLINE . | grep -v '{-# NOINLINE' | grep '{-'
--
ajhc-0.8.0.10/lib/jhc/Jhc/List.hs:{- NOINLINE filterFB #-}
ajhc-0.8.0.10/lib/jhc/Jhc/List.hs:{- NOINLINE iterateFB #-}
ajhc-0.8.0.10/lib/jhc/Jhc/List.hs:{- NOINLINE mapFB #-}
--
darcs-2.8.4/src/Darcs/Global.hs:{- NOINLINE _badSourcesList -}
darcs-2.8.4/src/Darcs/Global.hs:{- NOINLINE _reachableSourcesList -}
--
dph-lifted-copy-0.7.0.1/Data/Array/Parallel.hs:{- NOINLINE emptyP #-}
--
dph-par-0.5.1.1/Data/Array/Parallel.hs:{- NOINLINE emptyP #-}
--
dph-seq-0.5.1.1/Data/Array/Parallel.hs:{- NOINLINE emptyP #-}
--
freesect-0.8/FreeSectAnnotated.hs:{- # NOINLINE showSSI #-}
freesect-0.8/FreeSectAnnotated.hs:{- # NOINLINE FreeSectAnnotated.showSSI #-}
freesect-0.8/FreeSect.hs:{- # NOINLINE fs_warn_flaw #-}
--
http-proxy-0.0.8/Network/HTTP/Proxy/ReadInt.hs:{- NOINLINE readInt64MH #-}
http-proxy-0.0.8/Network/HTTP/Proxy/ReadInt.hs:{- NOINLINE mhDigitToInt #-}
--
lhc-0.10/lib/base/src/GHC/PArr.hs:{- NOINLINE emptyP #-}
--
property-list-0.1.0.2/src/Data/PropertyList/Binary/Float.hs:{- NOINLINE doubleToWord64 -}
property-list-0.1.0.2/src/Data/PropertyList/Binary/Float.hs:{- NOINLINE word64ToDouble -}
property-list-0.1.0.2/src/Data/PropertyList/Binary/Float.hs:{- NOINLINE floatToWord32 -}
property-list-0.1.0.2/src/Data/PropertyList/Binary/Float.hs:{- NOINLINE word32ToFloat -}
--
warp-2.0.3.3/Network/Wai/Handler/Warp/ReadInt.hs:{- NOINLINE readInt64MH #-}
warp-2.0.3.3/Network/Wai/Handler/Warp/ReadInt.hs:{- NOINLINE mhDigitToInt #-}
Looks like there is yet something to fix :]
It would be great if hlint could detect pragma-like comments: warn when the comment contents form a valid pragma, but the comment brackets prevent it from firing.
{- NOINLINE foo -} -- bad
{- NOINLINE foo #-} -- bad
{-# NOINLINE foo -} -- bad
{-# NOINLINE foo #-} -- ok
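A rough sketch of such a check (not actual hlint code; the function name, the heuristic, and the pragma list are made up for illustration) could classify the four cases above like so:

```haskell
import Data.List (isPrefixOf, isSuffixOf)

-- Flag comments that mention a pragma keyword but whose brackets are
-- not the well-formed {-# ... #-} pair, so GHC silently ignores them.
looksLikeBrokenPragma :: String -> Bool
looksLikeBrokenPragma s = isComment && mentionsPragma && not wellFormed
  where
    isComment      = "{-" `isPrefixOf` s && "-}" `isSuffixOf` s
    -- strip the opening bracket junk to find the keyword
    body           = dropWhile (`elem` "#{- ") s
    mentionsPragma = any (`isPrefixOf` body) ["NOINLINE", "INLINE", "RULES"]
    wellFormed     = "{-#" `isPrefixOf` s && "#-}" `isSuffixOf` s

main :: IO ()
main = mapM_ (print . looksLikeBrokenPragma)
  [ "{- NOINLINE foo -}"    -- bad  -> True
  , "{- NOINLINE foo #-}"   -- bad  -> True
  , "{-# NOINLINE foo -}"   -- bad  -> True
  , "{-# NOINLINE foo #-}"  -- ok   -> False
  ]
```

A real lint rule would of course reuse the compiler's own pragma parser rather than string heuristics like this.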
Thanks for reading!Sergei Trofimovichhttp://gentoohaskell.wordpress.com/?p=113Mon, 16 Jun 2014 16:10:56 +0000Haskell tools for satellite operations
http://flygdynamikern.blogspot.com/2014/09/haskell-tools-for-satellite-operations.html
At last week's CUFP I gave a talk called “Haskell tools for satellite operations”. The abstract is:
Since 2013-04 the presenter has been supporting SSC (the Swedish Space Corporation) in operating the telecommunications satellite “Sirius 3” from its Mission Control Center in Kiruna. Functions in the satellite vendor's operations software are breaking down as the orbit of the ageing satellite degrades. To fill in the gaps in software capabilities the presenter has developed several operational tools using Haskell.
The talk will give an overview of the satellite operations environment, the tools developed in Haskell, how they benefitted (and occasionally suffered) from the choice of implementation language, which (public) libraries were critical to their success, and how they were deployed in the satellite operations environment.
A video recording of the talk is available on the CUFP page for the talk and on youtube.
If this interests you, be sure to check out the other talk from the “Functional programming in space!” track: Michael Oswald's Haskell in the Mission Control Domain.Björn Buckwaltertag:blogger.com,1999:blog-2553815923640620572.post-7409096497549783767Fri, 12 Sep 2014 12:46:00 +0000Morte: an intermediate language for super-optimizing functional programs
http://www.haskellforall.com/2014/09/morte-intermediate-language-for-super.html
The Haskell language provides the following guarantee (with caveats): if two programs are equal according to equational reasoning then they will behave the same. On the other hand, Haskell does not guarantee that equal programs will generate identical performance. Consequently, Haskell library writers must employ rewrite rules to ensure that their abstractions do not interfere with performance.

Now suppose there were a hypothetical language with a stronger guarantee: if two programs are equal then they generate identical executables. Such a language would be immune to abstraction: no matter how many layers of indirection you might add, the binary size and runtime performance would be unaffected.

Here I will introduce such an intermediate language named Morte that obeys this stronger guarantee. I have not yet implemented a back-end code generator for Morte, but I wanted to pause to share what I have completed so far because Morte uses several tricks from computer science that I believe deserve more attention. Morte is nothing more than a bare-bones implementation of the calculus of constructions, which is a specific type of lambda calculus. The only novelty is how I intend to use this lambda calculus: as a super-optimizer.

Normalization

The typed lambda calculus possesses a useful property: every term in the lambda calculus has a unique normal form if you beta-reduce everything. If you're new to lambda calculus, normalizing an expression equates to indiscriminately inlining every function call.

What if we built a programming language whose intermediate language was lambda calculus? What if optimization was just normalization of lambda terms (i.e. indiscriminate inlining)? If so, then we could abstract freely, knowing that while compile times might increase, our final executable would never change.

Recursion

Normally you would not want to inline everything because infinitely recursive functions would become infinitely large expressions.
Fortunately, we can often translate recursive code to non-recursive code! I'll demonstrate this trick first in Haskell and then in Morte. Let's begin from the following recursive List type along with a recursive map function over lists:

import Prelude hiding (map, foldr)

data List a = Cons a (List a) | Nil

example :: List Int
example = Cons 1 (Cons 2 (Cons 3 Nil))

map :: (a -> b) -> List a -> List b
map f Nil        = Nil
map f (Cons a l) = Cons (f a) (map f l)

-- Argument order intentionally switched
foldr :: List a -> (a -> x -> x) -> x -> x
foldr Nil        c n = n
foldr (Cons a l) c n = c a (foldr l c n)

result :: Int
result = foldr (map (+1) example) (+) 0

-- result = 9

Now imagine that we disable all recursion in Haskell: no more recursive types and no more recursive functions. Now we must reject the above program because:

the List data type definition recursively refers to itself
the map and foldr functions recursively refer to themselves

Can we still encode lists in a non-recursive dialect of Haskell? Yes, we can!

-- This is a valid Haskell program
{-# LANGUAGE RankNTypes #-}

import Prelude hiding (map, foldr)

type List a = forall x . (a -> x -> x) -> x -> x

example :: List Int
example = \cons nil -> cons 1 (cons 2 (cons 3 nil))

map :: (a -> b) -> List a -> List b
map f l = \cons nil -> l (\a x -> cons (f a) x) nil

foldr :: List a -> (a -> x -> x) -> x -> x
foldr l = l

result :: Int
result = foldr (map (+ 1) example) (+) 0

-- result = 9

Carefully note that:

List is no longer defined recursively in terms of itself
map and foldr are no longer defined recursively in terms of themselves

Yet, we somehow managed to build a list, map a function over the list, and fold the list, all without ever using recursion! We do this by encoding the list as a fold, which is why foldr became the identity function. This trick works for more than just lists. You can take any recursive data type and mechanically transform the type into a fold and transform functions on the type into functions on folds.
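The non-recursive encoding can be exercised directly in GHC. Here is a compact runnable sketch of it; the toList helper and main are my additions, not from the post:

```haskell
{-# LANGUAGE RankNTypes #-}
import Prelude hiding (map, foldr)

-- Boehm-Berarducci encoding: a list *is* its right fold.
type List a = forall x. (a -> x -> x) -> x -> x

example :: List Int
example = \cons nil -> cons 1 (cons 2 (cons 3 nil))

map :: (a -> b) -> List a -> List b
map f l = \cons nil -> l (\a x -> cons (f a) x) nil

foldr :: List a -> (a -> x -> x) -> x -> x
foldr l = l  -- folding an encoded list is just applying it

-- Convert back to an ordinary Haskell list for display
toList :: List a -> [a]
toList l = l (:) []

main :: IO ()
main = do
  print (foldr (map (+1) example) (+) 0)  -- 2+3+4
  print (toList (map id example))
```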
If you want to learn more about this trick, the specific name for it is "Boehm-Berarducci encoding". If you are curious, this in turn is equivalent to an even more general concept from category theory known as "F-algebras", which let you encode inductive things in a non-inductive way.

Non-recursive code greatly simplifies equational reasoning. For example, we can easily prove that we can optimize map id l to l:

map id l

-- Inline: map f l = \cons nil -> l (\a x -> cons (f a) x) nil
= \cons nil -> l (\a x -> cons (id a) x) nil

-- Inline: id x = x
= \cons nil -> l (\a x -> cons a x) nil

-- Eta-reduce
= \cons nil -> l cons nil

-- Eta-reduce
= l

Note that we did not need to use induction to prove this optimization because map is no longer recursive. The optimization became downright trivial, so trivial that we can automate it!

Morte optimizes programs using this same simple scheme:

Beta-reduce everything (equivalent to inlining)
Eta-reduce everything

To illustrate this, I will desugar our high-level Haskell code to the calculus of constructions. This desugaring process is currently manual (and tedious), but I plan to automate this, too, by providing a front-end high-level language similar to Haskell that compiles to Morte:

-- mapid.mt
(   \(List : * -> *)
->  \(map : forall (a : *) -> forall (b : *) -> (a -> b) -> List a -> List b)
->  \(id : forall (a : *) -> a -> a)
->  \(a : *) -> map a a (id a)
)

-- List
(\(a : *) -> forall (x : *) -> (a -> x -> x) -> x -> x)

-- map
(   \(a : *)
->  \(b : *)
->  \(f : a -> b)
->  \(l : forall (x : *) -> (a -> x -> x) -> x -> x)
->  \(x : *)
->  \(Cons : b -> x -> x)
->  \(Nil: x)
->  l x (\(va : a) -> \(vx : x) -> Cons (f va) vx) Nil
)

-- id
(\(a : *) -> \(va : a) -> va)

This line of code is the "business end" of the program:

\(a : *) -> map a a (id a)

The extra 'a' business is because in any polymorphic lambda calculus you explicitly accept polymorphic types as arguments and specialize functions by applying them to types.
Higher-level functional languages like Haskell or ML use type inference to automatically infer and supply type arguments when possible.

We can compile this program using the morte executable, which accepts a Morte program on stdin, outputs the program's type on stderr, and outputs the optimized program on stdout:

$ morte < id.mt
∀(a : *) → (∀(x : *) → (a → x → x) → x → x) → ∀(x : *) → (a → x → x) → x → x
λ(a : *) → λ(l : ∀(x : *) → (a → x → x) → x → x) → l

The first line is the type, which is a desugared form of:

forall a . List a -> List a

The second line is the program, which is the identity function on lists. Morte optimized away the map completely, the same way we did by hand. Morte also optimized away the rest of the code, too. Dead-code elimination is just an emergent property of Morte's simple optimization scheme.

Equality

We could double-check our answer by asking Morte to optimize the identity function on lists:

-- idlist.mt
(   \(List : * -> *)
->  \(id : forall (a : *) -> a -> a)
->  \(a : *) -> id (List a)
)

-- List
(\(a : *) -> forall (x : *) -> (a -> x -> x) -> x -> x)

-- id
(\(a : *) -> \(va : a) -> va)

Sure enough, Morte outputs an alpha-equivalent result (meaning the same up to variable renaming):

$ ~/.cabal/bin/morte < idlist.mt
∀(a : *) → (∀(x : *) → (a → x → x) → x → x) → ∀(x : *) → (a → x → x) → x → x
λ(a : *) → λ(va : ∀(x : *) → (a → x → x) → x → x) → va

We can even use the morte library to mechanically check if two Morte expressions are alpha-, beta-, and eta-equivalent.
We can parse our two Morte files into Morte's Expr type and then use the Eq instance for Expr to test for equivalence:

$ ghci
Prelude> import qualified Data.Text.Lazy.IO as Text
Prelude Text> txt1 <- Text.readFile "mapid.mt"
Prelude Text> txt2 <- Text.readFile "idlist.mt"
Prelude Text> import Morte.Parser (exprFromText)
Prelude Text Morte.Parser> let e1 = exprFromText txt1
Prelude Text Morte.Parser> let e2 = exprFromText txt2
Prelude Text Morte.Parser> import Control.Applicative (liftA2)
Prelude Text Morte.Parser Control.Applicative> liftA2 (==) e1 e2
Right True
$ -- `Right` means both expressions parsed successfully
$ -- `True` means they are alpha-, beta-, and eta-equivalent

We can use this to mechanically verify that two Morte programs optimize to the same result.

Compile-time computation

Morte can compute as much (or as little) at compile time as you want. The more information you encode directly within lambda calculus, the more compile-time computation Morte will perform for you. For example, if we translate our Haskell List code entirely to lambda calculus, then Morte will statically compute the result at compile time.

-- nine.mt
(   \(Nat : *)
->  \(zero : Nat)
->  \(one : Nat)
->  \((+) : Nat -> Nat -> Nat)
->  \((*) : Nat -> Nat -> Nat)
->  \(List : * -> *)
->  \(Cons : forall (a : *) -> a -> List a -> List a)
->  \(Nil : forall (a : *) -> List a)
->  \(map : forall (a : *) -> forall (b : *) -> (a -> b) -> List a -> List b)
->  \(foldr : forall (a : *) -> List a -> forall (r : *) -> (a -> r -> r) -> r -> r)
->  (   \(two : Nat)
    ->  \(three : Nat)
    ->  (   \(example : List Nat)
        ->  foldr Nat (map Nat Nat ((+) one) example) Nat (+) zero
        )
        -- example
        (Cons Nat one (Cons Nat two (Cons Nat three (Nil Nat))))
    )
    -- two
    ((+) one one)
    -- three
    ((+) one ((+) one one))
)

-- Nat
(   forall (a : *)
->  (a -> a)
->  a
->  a
)

-- zero
(   \(a : *)
->  \(Succ : a -> a)
->  \(Zero : a)
->  Zero
)

-- one
(   \(a : *)
->  \(Succ : a -> a)
->  \(Zero : a)
->  Succ Zero
)

-- (+)
(   \(m : forall (a : *) -> (a -> a) -> a -> a)
->  \(n : forall (a : *) -> (a -> a) -> a -> a)
->  \(a : *)
->  \(Succ : a -> a)
->  \(Zero : a)
->  m a Succ (n a Succ Zero)
)

-- (*)
(   \(m : forall (a : *) -> (a -> a) -> a -> a)
->  \(n : forall (a : *) -> (a -> a) -> a -> a)
->  \(a : *)
->  \(Succ : a -> a)
->  \(Zero : a)
->  m a (n a Succ) Zero
)

-- List
(   \(a : *)
->  forall (x : *)
->  (a -> x -> x)  -- Cons
->  x              -- Nil
->  x
)

-- Cons
(   \(a : *)
->  \(va : a)
->  \(vas : forall (x : *) -> (a -> x -> x) -> x -> x)
->  \(x : *)
->  \(Cons : a -> x -> x)
->  \(Nil : x)
->  Cons va (vas x Cons Nil)
)

-- Nil
(   \(a : *)
->  \(x : *)
->  \(Cons : a -> x -> x)
->  \(Nil : x)
->  Nil
)

-- map
(   \(a : *)
->  \(b : *)
->  \(f : a -> b)
->  \(l : forall (x : *) -> (a -> x -> x) -> x -> x)
->  \(x : *)
->  \(Cons : b -> x -> x)
->  \(Nil: x)
->  l x (\(va : a) -> \(vx : x) -> Cons (f va) vx) Nil
)

-- foldr
(   \(a : *)
->  \(vas : forall (x : *) -> (a -> x -> x) -> x -> x)
->  vas
)

The relevant line is:

foldr Nat (map Nat Nat ((+) one) example) Nat (+) zero

If you remove the type-applications to Nat, this parallels our original Haskell example. We can then evaluate this expression at compile time:

$ morte < nine.mt
∀(a : *) → (a → a) → a → a
λ(a : *) → λ(Succ : a → a) → λ(Zero : a) → Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ Zero))))))))

Morte reduces our program to a church-encoded nine.

Run-time computation

Morte does not force you to compute everything using lambda calculus at compile time. Suppose that we wanted to use machine arithmetic at run-time instead.
We can do this by parametrizing our program on:
- the Int type,
- operations on Ints, and
- any integer literals we use

We accept these "foreign imports" as ordinary arguments to our program:

-- foreign.mt

-- Foreign imports
   \(Int : *)                   -- Foreign type
-> \((+) : Int -> Int -> Int)   -- Foreign function
-> \((*) : Int -> Int -> Int)   -- Foreign function
-> \(lit@0 : Int)  -- Literal "1" -- Foreign data
-> \(lit@1 : Int)  -- Literal "2" -- Foreign data
-> \(lit@2 : Int)  -- Literal "3" -- Foreign data
-> \(lit@3 : Int)  -- Literal "1" -- Foreign data
-> \(lit@4 : Int)  -- Literal "0" -- Foreign data

-- The rest is compile-time lambda calculus
-> ( \(List : * -> *)
  -> \(Cons : forall (a : *) -> a -> List a -> List a)
  -> \(Nil : forall (a : *) -> List a)
  -> \( map
      : forall (a : *) -> forall (b : *) -> (a -> b) -> List a -> List b
      )
  -> \( foldr
      : forall (a : *) -> List a -> forall (r : *) -> (a -> r -> r) -> r -> r
      )
  -> ( \(example : List Int)
     -> foldr Int (map Int Int ((+) lit@3) example) Int (+) lit@4
     )
     -- example
     (Cons Int lit@0 (Cons Int lit@1 (Cons Int lit@2 (Nil Int))))
   )
   -- List
   ( \(a : *)
   -> forall (x : *)
   -> (a -> x -> x)  -- Cons
   -> x              -- Nil
   -> x
   )
   -- Cons
   ( \(a : *)
   -> \(va : a)
   -> \(vas : forall (x : *) -> (a -> x -> x) -> x -> x)
   -> \(x : *)
   -> \(Cons : a -> x -> x)
   -> \(Nil : x)
   -> Cons va (vas x Cons Nil)
   )
   -- Nil
   ( \(a : *)
   -> \(x : *)
   -> \(Cons : a -> x -> x)
   -> \(Nil : x)
   -> Nil
   )
   -- map
   ( \(a : *)
   -> \(b : *)
   -> \(f : a -> b)
   -> \(l : forall (x : *) -> (a -> x -> x) -> x -> x)
   -> \(x : *)
   -> \(Cons : b -> x -> x)
   -> \(Nil: x)
   -> l x (\(va : a) -> \(vx : x) -> Cons (f va) vx) Nil
   )
   -- foldr
   ( \(a : *)
   -> \(vas : forall (x : *) -> (a -> x -> x) -> x -> x)
   -> vas
   )

We can use Morte to optimize the above program, and Morte will reduce the program to nothing but foreign types, operations, and values:

$ morte < foreign.mt
∀(Int : *) → (Int → Int → Int) → (Int → Int → Int) → Int → Int → Int → Int → Int → Int
λ(Int : *) → λ((+) : Int → Int → Int) → λ((*) : Int → Int → Int) → λ(lit : Int) →
λ(lit@1 : Int) → λ(lit@2 : Int) → λ(lit@3 : Int) → λ(lit@4 : Int) → (+) ((+) lit@3 lit) ((+) ((+) lit@3 lit@1) ((+) ((+) lit@3 lit@2) lit@4))

If you study that closely, Morte adds lit@3 (the "1" literal) to each literal of the list and then adds them up. We can then pass this foreign syntax tree to our machine arithmetic backend to transform those foreign operations to efficient operations.

Morte lets you choose how much information you want to encode within lambda calculus. The more information you encode in lambda calculus the more Morte can optimize your program, but the slower your compile times will get, so it's a tradeoff.

Corecursion

Corecursion is the dual of recursion. Where recursion works on finite data types, corecursion works on potentially infinite data types. An example would be the following infinite Stream in Haskell:

data Stream a = Cons a (Stream a)

numbers :: Stream Int
numbers = go 0
  where
    go n = Cons n (go (n + 1))

-- numbers = Cons 0 (Cons 1 (Cons 2 (...

map :: (a -> b) -> Stream a -> Stream b
map f (Cons a l) = Cons (f a) (map f l)

example :: Stream Int
example = map (+ 1) numbers

-- example = Cons 1 (Cons 2 (Cons 3 (...

Again, pretend that we disable any function from referencing itself so that the above code becomes invalid. This time we cannot reuse the same trick from previous sections because we cannot encode numbers as a fold without referencing itself. Try this if you don't believe me.

However, we can still encode corecursive things in a non-corecursive way. This time, we encode our Stream type as an unfold instead of a fold:

-- This is also valid Haskell code

{-# LANGUAGE ExistentialQuantification #-}

data Stream a = forall s .
    MkStream
    { seed :: s
    , step :: s -> (a, s)
    }

numbers :: Stream Int
numbers = MkStream 0 (\n -> (n, n + 1))

map :: (a -> b) -> Stream a -> Stream b
map f (MkStream s0 k) = MkStream s0 k'
  where
    k' s = (f a, s')
      where
        (a, s') = k s

In other words, we store an initial seed of some type s and a step function of type s -> (a, s) that emits one element of our Stream. The type of our seed s can be anything and in our numbers example, the type of the internal state is Int. Another stream could use a completely different internal state of type (), like this:

-- ones = Cons 1 ones
ones :: Stream Int
ones = MkStream () (\_ -> (1, ()))

The general name for this trick is an "F-coalgebra" encoding of a corecursive type.

Once we encode our infinite stream non-recursively, we can safely optimize the stream by inlining and eta reduction:

map id l

-- l = MkStream s0 k
= map id (MkStream s0 k)

-- Inline definition of `map`
= MkStream s0 k' where k' = \s -> (id a, s') where (a, s') = k s

-- Inline definition of `id`
= MkStream s0 k' where k' = \s -> (a, s') where (a, s') = k s

-- Inline: (a, s') = k s
= MkStream s0 k' where k' = \s -> k s

-- Eta reduce
= MkStream s0 k' where k' = k

-- Inline: k' = k
= MkStream s0 k

-- l = MkStream s0 k
= l

Now let's encode Stream and map in Morte and compile the following four expressions:

map id
id
map f . map g
map (f . g)

Save the following Morte file to stream.mt and then uncomment the expression you want to test:

( \(id : forall (a : *) -> a -> a)
-> \( (.)
: forall (a : *) -> forall (b : *) -> forall (c : *) -> (b -> c) -> (a -> b) -> (a -> c) )-> \(Pair : * -> * -> *)-> \(P : forall (a : *) -> forall (b : *) -> a -> b -> Pair a b)-> \( first : forall (a : *) -> forall (b : *) -> forall (c : *) -> (a -> b) -> Pair a c -> Pair b c )-> ( \(Stream : * -> *) -> \( map : forall (a : *) -> forall (b : *) -> (a -> b) -> Stream a -> Stream b ) -- example@1 = example@2 -> ( \(example@1 : forall (a : *) -> Stream a -> Stream a) -> \(example@2 : forall (a : *) -> Stream a -> Stream a) -- example@3 = example@4 -> \( example@3 : forall (a : *) -> forall (b : *) -> forall (c : *) -> (b -> c) -> (a -> b) -> Stream a -> Stream c ) -> \( example@4 : forall (a : *) -> forall (b : *) -> forall (c : *) -> (b -> c) -> (a -> b) -> Stream a -> Stream c ) -- Uncomment the example you want to test -> example@1-- -> example@2-- -> example@3-- -> example@4 ) -- example@1 (\(a : *) -> map a a (id a)) -- example@2 (\(a : *) -> id (Stream a)) -- example@3 ( \(a : *) -> \(b : *) -> \(c : *) -> \(f : b -> c) -> \(g : a -> b) -> map a c ((.) a b c f g) ) -- example@4 ( \(a : *) -> \(b : *) -> \(c : *) -> \(f : b -> c) -> \(g : a -> b) -> (.) 
(Stream a) (Stream b) (Stream c) (map b c f) (map a b g) ) ) -- Stream ( \(a : *) -> forall (x : *) -> (forall (s : *) -> s -> (s -> Pair a s) -> x) -> x ) -- map ( \(a : *) -> \(b : *) -> \(f : a -> b) -> \( st : forall (x : *) -> (forall (s : *) -> s -> (s -> Pair a s) -> x) -> x ) -> \(x : *) -> \(S : forall (s : *) -> s -> (s -> Pair b s) -> x) -> st x ( \(s : *) -> \(seed : s) -> \(step : s -> Pair a s) -> S s seed (\(seed@1 : s) -> first a b s f (step seed@1)) ) ))-- id(\(a : *) -> \(va : a) -> va)-- (.)( \(a : *)-> \(b : *)-> \(c : *)-> \(f : b -> c)-> \(g : a -> b)-> \(va : a)-> f (g va))-- Pair(\(a : *) -> \(b : *) -> forall (x : *) -> (a -> b -> x) -> x)-- P( \(a : *)-> \(b : *)-> \(va : a)-> \(vb : b)-> \(x : *)-> \(P : a -> b -> x)-> P va vb)-- first( \(a : *)-> \(b : *)-> \(c : *)-> \(f : a -> b)-> \(p : forall (x : *) -> (a -> c -> x) -> x)-> \(x : *)-> \(Pair : b -> c -> x)-> p x (\(va : a) -> \(vc : c) -> Pair (f va) vc))Both example@1 and example@2 will generate alpha-equivalent code:$ morte < example1.mt∀(a : *) → (∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) → x) → ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) → xλ(a : *) → λ(st : ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) → x) → st$ morte < example2.mt∀(a : *) → (∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) → x) → ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) → xλ(a : *) → λ(va : ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) → x) → vaSimilarly, example@3 and example@4 will generate alpha-equivalent code:$ morte < example3.mt∀(a : *) → ∀(b : *) → ∀(c : *) → (b → c) → (a → b) → (∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) → x) → ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (c → s → x) → x) → x) → xλ(a : *) → λ(b : *) → λ(c : *) → λ(f : b → c) → λ(g : a → b) → λ(st : ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) → x) → λ(x : *) → λ(S : ∀(s : *) 
→ s → (s → ∀(x : *) → (c → s → x) → x) → x) → st x (λ(s : *) → λ(seed : s) → λ(step : s → ∀(x : *) → (a → s → x) → x) → S s seed (λ(seed@1 : s) → λ(x : *) → λ(Pair : c → s → x) → step seed@1 x (λ(va : a) → Pair (f (g va)))))

$ morte < example4.mt
∀(a : *) → ∀(b : *) → ∀(c : *) → (b → c) → (a → b) → (∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) → x) → ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (c → s → x) → x) → x) → x
λ(a : *) → λ(b : *) → λ(c : *) → λ(f : b → c) → λ(g : a → b) → λ(va : ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (a → s → x) → x) → x) → x) → λ(x : *) → λ(S : ∀(s : *) → s → (s → ∀(x : *) → (c → s → x) → x) → x) → va x (λ(s : *) → λ(seed : s) → λ(step : s → ∀(x : *) → (a → s → x) → x) → S s seed (λ(seed@1 : s) → λ(x : *) → λ(Pair : c → s → x) → step seed@1 x (λ(va : a) → Pair (f (g va))))

We inadvertently proved stream fusion for free, but we're still not done yet! Everything we learn about recursive and corecursive sequences can be applied to model recursive and corecursive effects!

Effects

I will conclude this post by showing how to model both recursive and corecursive programs that have side effects. The recursive program will echo ninety-nine lines from stdin to stdout.
The equivalent Haskell program is in the comment header:-- recursive.mt-- The Haskell code we will translate to Morte:---- import Prelude hiding (-- (+), (*), IO, putStrLn, getLine, (>>=), (>>), return )-- -- -- Simple prelude---- data Nat = Succ Nat | Zero---- zero :: Nat-- zero = Zero---- one :: Nat-- one = Succ Zero---- (+) :: Nat -> Nat -> Nat-- Zero + n = n-- Succ m + n = m + Succ n---- (*) :: Nat -> Nat -> Nat-- Zero * n = Zero-- Succ m * n = n + (m * n)---- foldNat :: Nat -> (a -> a) -> a -> a-- foldNat Zero f x = x-- foldNat (Succ m) f x = f (foldNat m f x)---- data IO r-- = PutStrLn String (IO r)-- | GetLine (String -> IO r)-- | Return r---- putStrLn :: String -> IO U-- putStrLn str = PutStrLn str (Return Unit)---- getLine :: IO String-- getLine = GetLine Return---- return :: a -> IO a-- return = Return---- (>>=) :: IO a -> (a -> IO b) -> IO b-- PutStrLn str io >>= f = PutStrLn str (io >>= f)-- GetLine k >>= f = GetLine (\str -> k str >>= f)-- Return r >>= f = f r---- -- Derived functions---- (>>) :: IO U -> IO U -> IO U-- m >> n = m >>= \_ -> n---- two :: Nat-- two = one + one---- three :: Nat-- three = one + one + one---- four :: Nat-- four = one + one + one + one---- five :: Nat-- five = one + one + one + one + one---- six :: Nat-- six = one + one + one + one + one + one---- seven :: Nat-- seven = one + one + one + one + one + one + one---- eight :: Nat-- eight = one + one + one + one + one + one + one + one---- nine :: Nat-- nine = one + one + one + one + one + one + one + one + one---- ten :: Nat-- ten = one + one + one + one + one + one + one + one + one + one---- replicateM_ :: Nat -> IO U -> IO U-- replicateM_ n io = foldNat n (io >>) (return Unit)---- ninetynine :: Nat-- ninetynine = nine * ten + nine---- main_ :: IO U-- main_ = getLine >>= putStrLn-- "Free" variables( \(String : * )-> \(U : *)-> \(Unit : U) -- Simple prelude-> ( \(Nat : *) -> \(zero : Nat) -> \(one : Nat) -> \((+) : Nat -> Nat -> Nat) -> \((*) : Nat -> Nat -> Nat) -> \(foldNat : 
Nat -> forall (a : *) -> (a -> a) -> a -> a) -> \(IO : * -> *) -> \(return : forall (a : *) -> a -> IO a) -> \((>>=) : forall (a : *) -> forall (b : *) -> IO a -> (a -> IO b) -> IO b ) -> \(putStrLn : String -> IO U) -> \(getLine : IO String) -- Derived functions -> ( \((>>) : IO U -> IO U -> IO U) -> \(two : Nat) -> \(three : Nat) -> \(four : Nat) -> \(five : Nat) -> \(six : Nat) -> \(seven : Nat) -> \(eight : Nat) -> \(nine : Nat) -> \(ten : Nat) -> ( \(replicateM_ : Nat -> IO U -> IO U) -> \(ninetynine : Nat) -> replicateM_ ninetynine ((>>=) String U getLine putStrLn) ) -- replicateM_ ( \(n : Nat) -> \(io : IO U) -> foldNat n (IO U) ((>>) io) (return U Unit) ) -- ninetynine ((+) ((*) nine ten) nine) ) -- (>>) ( \(m : IO U) -> \(n : IO U) -> (>>=) U U m (\(_ : U) -> n) ) -- two ((+) one one) -- three ((+) one ((+) one one)) -- four ((+) one ((+) one ((+) one one))) -- five ((+) one ((+) one ((+) one ((+) one one)))) -- six ((+) one ((+) one ((+) one ((+) one ((+) one one))))) -- seven ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one one)))))) -- eight ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one one))))))) -- nine ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one one)))))))) -- ten ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one ((+) one one))))))))) ) -- Nat ( forall (a : *) -> (a -> a) -> a -> a ) -- zero ( \(a : *) -> \(Succ : a -> a) -> \(Zero : a) -> Zero ) -- one ( \(a : *) -> \(Succ : a -> a) -> \(Zero : a) -> Succ Zero ) -- (+) ( \(m : forall (a : *) -> (a -> a) -> a -> a) -> \(n : forall (a : *) -> (a -> a) -> a -> a) -> \(a : *) -> \(Succ : a -> a) -> \(Zero : a) -> m a Succ (n a Succ Zero) ) -- (*) ( \(m : forall (a : *) -> (a -> a) -> a -> a) -> \(n : forall (a : *) -> (a -> a) -> a -> a) -> \(a : *) -> \(Succ : a -> a) -> \(Zero : a) -> m a (n a Succ) Zero ) -- foldNat ( \(n : forall (a : *) -> (a -> a) -> a -> a) -> n ) -- IO ( \(r : *) -> forall (x : *) -> (String -> x -> x) -> 
((String -> x) -> x) -> (r -> x) -> x ) -- return ( \(a : *) -> \(va : a) -> \(x : *) -> \(PutStrLn : String -> x -> x) -> \(GetLine : (String -> x) -> x) -> \(Return : a -> x) -> Return va ) -- (>>=) ( \(a : *) -> \(b : *) -> \(m : forall (x : *) -> (String -> x -> x) -> ((String -> x) -> x) -> (a -> x) -> x ) -> \(f : a -> forall (x : *) -> (String -> x -> x) -> ((String -> x) -> x) -> (b -> x) -> x ) -> \(x : *) -> \(PutStrLn : String -> x -> x) -> \(GetLine : (String -> x) -> x) -> \(Return : b -> x) -> m x PutStrLn GetLine (\(va : a) -> f va x PutStrLn GetLine Return) ) -- putStrLn ( \(str : String) -> \(x : *) -> \(PutStrLn : String -> x -> x ) -> \(GetLine : (String -> x) -> x) -> \(Return : U -> x) -> PutStrLn str (Return Unit) ) -- getLine ( \(x : *) -> \(PutStrLn : String -> x -> x ) -> \(GetLine : (String -> x) -> x) -> \(Return : String -> x) -> GetLine Return ))This program will compile to a completely unrolled read-write loop, as most recursive programs will:$ morte < recursive.mt∀(String : *) → ∀(U : *) → U → ∀(x : *) → (String → x → x) → ((String → x) → x) → (U → x) → xλ(String : *) → λ(U : *) → λ(Unit : U) → λ(x : *) → λ(PutStrLn : String → x → x) → λ(GetLine : (String → x) → x) → λ(Return : U → x) → GetLine (λ(va : String) → PutStrLn va (GetLine (λ(va@1 : String) → PutStrLn va@1 (GetLine (λ(va@2 : String) → PutStrLn va@2 (GetLine (λ(va@3 : String) → PutStrLn ... ... 
GetLine (λ(va@92 : String) → PutStrLn va@92 (GetLine (λ(va@93 : String) → PutStrLn va@93 (GetLine (λ(va@94 : String) → PutStrLn va@94 (GetLine (λ(va@95 : String) → PutStrLn va@95 (GetLine (λ(va@96 : String) → PutStrLn va@96 (GetLine (λ(va@97 : String) → PutStrLn va@97 (GetLine (λ(va@98 : String) → PutStrLn va@98 (Return Unit))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))In contrast, if we encode the effects corecursively we can express a program that echoes indefinitely from stdin to stdout:-- corecursive.mt-- data IOF r s-- = PutStrLn String s-- | GetLine (String -> s)-- | Return r---- data IO r = forall s . MkIO s (s -> IOF r s)---- main = MkIO-- Nothing-- (maybe (\str -> PutStrLn str Nothing) (GetLine Just))( \(String : *)-> ( \(Maybe : * -> *) -> \(Just : forall (a : *) -> a -> Maybe a) -> \(Nothing : forall (a : *) -> Maybe a) -> \( maybe : forall (a : *) -> Maybe a -> forall (x : *) -> (a -> x) -> x -> x ) -> \(IOF : * -> * -> *) -> \( PutStrLn : forall (r : *) -> forall (s : *) -> String -> s -> IOF r s ) -> \( GetLine : forall (r : *) -> forall (s : *) -> (String -> s) -> IOF r s ) -> \( Return : forall (r : *) -> forall (s : *) -> r -> IOF r s ) -> ( \(IO : * -> *) -> \( MkIO : forall (r : *) -> forall (s : *) -> s -> (s -> IOF r s) -> IO r ) -> ( \(main : forall (r : *) -> IO r) -> main ) -- main ( \(r : *) -> MkIO r (Maybe String) (Nothing String) ( \(m : Maybe String) -> maybe String m (IOF r (Maybe String)) (\(str : String) -> PutStrLn r (Maybe String) str (Nothing String) ) (GetLine r (Maybe String) (Just String)) ) ) ) -- IO ( \(r : *) -> forall (x : *) -> (forall (s : *) -> s -> (s -> IOF r s) -> x) -> x ) -- MkIO ( \(r : *) -> \(s : *) -> \(seed : s) -> \(step : s -> IOF r s) -> \(x : *) -> \(k : forall (s : *) -> s -> (s -> IOF r s) -> x) -> k s seed step ) ) -- Maybe (\(a : *) -> forall (x : *) -> 
(a -> x) -> x -> x) -- Just ( \(a : *) -> \(va : a) -> \(x : *) -> \(Just : a -> x) -> \(Nothing : x) -> Just va ) -- Nothing ( \(a : *) -> \(x : *) -> \(Just : a -> x) -> \(Nothing : x) -> Nothing ) -- maybe ( \(a : *) -> \(m : forall (x : *) -> (a -> x) -> x-> x) -> m ) -- IOF ( \(r : *) -> \(s : *) -> forall (x : *) -> (String -> s -> x) -> ((String -> s) -> x) -> (r -> x) -> x ) -- PutStrLn ( \(r : *) -> \(s : *) -> \(str : String) -> \(vs : s) -> \(x : *) -> \(PutStrLn : String -> s -> x) -> \(GetLine : (String -> s) -> x) -> \(Return : r -> x) -> PutStrLn str vs ) -- GetLine ( \(r : *) -> \(s : *) -> \(k : String -> s) -> \(x : *) -> \(PutStrLn : String -> s -> x) -> \(GetLine : (String -> s) -> x) -> \(Return : r -> x) -> GetLine k ) -- Return ( \(r : *) -> \(s : *) -> \(vr : r) -> \(x : *) -> \(PutStrLn : String -> s -> x) -> \(GetLine : (String -> s) -> x) -> \(Return : r -> x) -> Return vr ))This compiles to a state machine that we can unfold one step at a time:$ morte < corecursive.mt∀(String : *) → ∀(r : *) → ∀(x : *) → (∀(s : *) → s → (s → ∀(x : *) → (String → s → x) → ((String → s) → x) → (r → x) → x) → x) → xλ(String : *) → λ(r : *) → λ(x : *) → λ(k : ∀(s : *) → s → (s → ∀(x : *) → (String → s → x) → ((String → s) → x) → (r → x) → x) → x) → k (∀(x : *) → (String → x) → x → x) (λ(x : *) → λ(Just : String → x) → λ(Nothing : x) → Nothing) (λ(m : ∀(x : *) → (String → x) → x → x) → m (∀(x : *) → (String → (∀(x : *) → (String → x) → x → x) → x) → ((String → ∀(x : *) → (String → x) → x → x) → x) → (r → x) → x) (λ(str : String) → λ(x : *) → λ(PutStrLn : String → (∀(x : *) → (String → x) → x → x) → x) → λ(GetLine : (String → ∀(x : *) → (String → x) → x → x) → x) → λ(Return : r → x) → PutStrLn str (λ(x : *) → λ(Just : String → x) → λ(Nothing : x) → Nothing)) (λ(x : *) → λ(PutStrLn : String → (∀(x : *) → (String → x) → x → x) → x) → λ(GetLine : (String → ∀(x : *) → (String → x) → x → x) → x) → λ(Return : r → x) → GetLine (λ(va : String) → λ(x : *) → λ(Just : 
String → x) → λ(Nothing : x) → Just va))

I don't expect you to understand that output, other than to know that we can translate it to any backend that provides functions and primitive read/write operations.

Conclusion

If you would like to use Morte, you can find the library on both Github and Hackage. I also provide a Morte tutorial that you can use to learn more about the library.

Morte is dependently typed in theory, but in practice I have not exercised this feature, so I don't understand its implications. If this turns out to be a mistake then I will downgrade Morte to System Fw, which has higher kinds and polymorphism, but no dependent types.

Additionally, Morte might be usable to transmit code in a secure and typed way in a distributed environment, or to share code between diverse functional languages by providing a common intermediate language. However, both of those scenarios require additional work, such as establishing a shared set of foreign primitives and creating Morte encoders/decoders for each target language.

Also, there are additional optimizations which Morte might implement in the future.
For example, Morte could use free theorems (equalities you deduce from the types) to simplify some code fragments even further, but Morte currently does not do this.

My next goals are:
- Add a back-end to compile Morte to LLVM
- Add a front-end to desugar a medium-level Haskell-like language to Morte

Once those steps are complete, Morte will be a usable intermediate language for writing super-optimizable programs.

Also, if you're wondering, the name Morte is a tribute to a talking skull from the game Planescape: Torment, since the Morte library is a "bare-bones" calculus of constructions.

Literature

If this topic interests you more, you may find the following links helpful, in roughly increasing order of difficulty:
- Data and Codata - A blog post by Dan Piponi introducing the notions of data and codata
- Church encoding - A Wikipedia article on Church encoding (converting things to lambda expressions)
- Total Functional Programming - A paper by D. A. Turner on total programming using data and codata
- Recursive types for free! - An unpublished draft by Philip Wadler about F-algebras and F-coalgebras
- Understanding F algebras - A blog post by Bartosz Milewski
- Beyond Church encoding: Boehm-Berarducci isomorphism of algebraic data types and polymorphic lambda-terms - Oleg Kiselyov's collection of notes on Boehm-Berarducci encoding, a more complete version of Church encoding
- F-algebras and F-coalgebras - Wikipedia articles that are short, sweet, and utterly impenetrable unless you already understand the subject

Gabriel Gonzalez
Fri, 12 Sep 2014 11:00:00 +0000

[prbwmqwj] Functions to modify a record
http://kenta.blogspot.com/2014/09/prbwmqwj-functions-to-modify-record.html
Haskell could use some new syntax LAMBDA_RECORD_MODIFY, which could be used as follows:

import qualified Control.Monad.State as State;
data Record = Record { field :: Int };
...
State.modify $ LAMBDA_RECORD_MODIFY { field = ... };

which is equivalent to State.modify $ \x -> x { field = ... } but without having to name the lambda parameter "x" (twice). I suspect this is one of the things lenses are trying to do.

Ken
Fri, 12 Sep 2014 03:36:00 +0000

Static pointers and serialisation
http://ghc.haskell.org/trac/ghc/blog/simonpj/StaticPointers
simonpj
Thu, 11 Sep 2014 13:34:23 +0000

Clarification of previous blog post
http://www.yesodweb.com/blog/2014/09/clarification-previous-blog-post
I've heard that my previous blog
post has
caused a bit of confusion, as sarcasm doesn't really come across in text very
well. So let me elaborate (and of course, in the process, kill the joke):

Some years back, Erik found a case that was quite difficult to implement using
enumerator. After we cracked our heads on it for long enough, some of us (I
don't actually remember who was involved) decided to work on a new streaming
library. That library ended up being called conduit (thanks to Yitz for the
naming idea). It turns out that most people are unaware of that history, so
when at ICFP, I casually mentioned that Erik was the cause of conduit coming
into existence, some people were surprised. Erik jokingly chastised me for
not giving him enough credit. In response, I decided to write an over-the-top
post giving Erik all credit for conduit. I say over the top, since I made
it seem like there was some large amount of blame being heaped on as well.

So to be completely clear:
- Erik and I are good friends, and this was just a bit of an inside joke turned public.
- No one has said anything offensive to me at all about conduit. There are obviously differing opinions out there about the best library for a job, but there's nothing offensive about it, just healthy discussion around a complicated topic. My purpose in making a big deal about it was not to express frustration at anyone attacking me, but rather to just play up the joke a bit more.

My apologies to anyone who was confused, upset, or worried by the previous
post; it was completely unintentional.

Thu, 11 Sep 2014 00:00:00 +0000

We're hiring: Haskell web UI developer
https://www.fpcomplete.com/blog/2014/09/hiring-haskell-web-ui-dev
FP Complete is looking to expand its Haskell development team. We’re looking
for a Haskeller with a strong background in web UI development. This position
will encompass both work on our core products, such as FP Haskell Center and
School of Haskell, and helping customers develop frontends to their
Haskell applications.

We will want you to start right away. This will be a contractor position, full
time for at least 3 months, with the intention to continue long-term on a more
or less full-time basis. Additionally, while the main focus of the position
will be UI development, there will be many opportunities to expand into other
areas of focus.

This is a telecommute position: you can work from home or wherever you choose,
with little or no travel. Location in North America is ideal; you will work
with colleagues who are on North American and European hours.

Skills required:
- Strong Haskell coding skills.
- Experience with creating HTML/CSS/Javascript web applications (fat clients a plus).
- Ideally: experience with both Yesod and Fay for server and client side coding, respectively. (Perk: you’ll get a chance to work with the authors of both tools.)
- Experience deploying applications into production, especially at large scale, is a plus.
- Ability to interact with a distributed development team, and to manage your time without an in-person supervisor.
- Ability to work with clients on gathering requirements.
- General source control/project skills: Git, issue tracking.
- Ability to communicate clearly in issues, bug reports, and emails.
- Proficient on a Linux system.
- Plus: experience with deployment, Docker, and/or CoreOS.

Please send resume or CV to
michael@fpcomplete.com. Any existing work, either a running website or an
open source codebase, which you can include links to will be greatly
appreciated as well.

Thu, 11 Sep 2014 00:00:00 +0000

Polymorphism in Haskell vs C++
http://izbicki.me/blog/polymorphism-in-haskell-vs-c%2B%2B.html
Wed, 10 Sep 2014 00:00:00 +0000

Fun with (Extended Kalman) Filters
http://idontgetoutmuch.wordpress.com/2014/09/09/fun-with-extended-kalman-filters-4/
Summary
An extended Kalman filter in Haskell using type level literals and automatic differentiation to provide some guarantees of correctness.
Population Growth
Suppose we wish to model population growth of bees via the logistic equation

dp/dt = r * p * (1 - p / k)

We assume the growth rate r is unknown and drawn from a normal distribution, but the carrying capacity k is known, and we wish to estimate the growth rate by observing noisy values of the population at discrete times. Note that the evolution of the population is entirely deterministic; its stochasticity is only a result of the fact that the unknown parameter of the logistic equation is sampled from a normal distribution (we could for example be observing different colonies of bees and we know from the literature that bee populations obey the logistic equation and each colony will have different growth rates).
Haskell Preamble
> {-# OPTIONS_GHC -Wall #-}
> {-# OPTIONS_GHC -fno-warn-name-shadowing #-}
> {-# OPTIONS_GHC -fno-warn-type-defaults #-}
> {-# OPTIONS_GHC -fno-warn-unused-do-bind #-}
> {-# OPTIONS_GHC -fno-warn-missing-methods #-}
> {-# OPTIONS_GHC -fno-warn-orphans #-}
> {-# LANGUAGE DataKinds #-}
> {-# LANGUAGE ScopedTypeVariables #-}
> {-# LANGUAGE RankNTypes #-}
> {-# LANGUAGE BangPatterns #-}
> {-# LANGUAGE TypeOperators #-}
> {-# LANGUAGE TypeFamilies #-}
> module FunWithKalman3 where
> import GHC.TypeLits
> import Numeric.LinearAlgebra.Static
> import Data.Maybe ( fromJust )
> import Numeric.AD
> import Data.Random.Source.PureMT
> import Data.Random
> import Control.Monad.State
> import qualified Control.Monad.Writer as W
> import Control.Monad.Loops
Logistic Equation
The logistic equation is a well known example of a dynamical system which has an analytic solution

p(t) = k * p0 * exp(r * t) / (k + p0 * (exp(r * t) - 1))

Here it is in Haskell (with x standing for r * t)
> logit :: Floating a => a -> a -> a -> a
> logit p0 k x = k * p0 * (exp x) / (k + p0 * (exp x - 1))
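As a quick sanity check (not part of the original post), the closed form above can be compared against a crude Euler integration of the logistic ODE dp/dt = r * p * (1 - p / k); the function names here are invented for illustration:

```haskell
-- Sketch: the analytic solution of the logistic equation versus a naive
-- Euler integration of the ODE, which should agree closely for small dt.
logistic :: Double -> Double -> Double -> Double -> Double
logistic r k p0 t = k * p0 * exp (r * t) / (k + p0 * (exp (r * t) - 1))

-- n Euler steps of size dt starting from p0
eulerLogistic :: Double -> Double -> Double -> Double -> Int -> Double
eulerLogistic r k p0 dt n = iterate step p0 !! n
  where
    step p = p + dt * r * p * (1 - p / k)

main :: IO ()
main = do
  let r = 10.0; k = 1.0; p0 = 0.1
  print (logistic r k p0 0.1)                 -- analytic value at t = 0.1
  print (eulerLogistic r k p0 1.0e-5 10000)   -- Euler approximation at t = 0.1
```

The two printed values should agree to several decimal places, which is a cheap way to convince yourself the closed form really does solve the ODE.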
We observe a noisy value of the population at regular time intervals (where ΔT is the time interval)
Using the semi-group property of our dynamical system, we can re-write this as evolving one step of length ΔT at a time
To convince yourself that this re-formulation is correct, think of the population as starting at p0; after 1 time step it has reached p(ΔT), and after two time steps it has reached p(2ΔT), and this ought to be the same as the point reached after 1 time step starting at p(ΔT), for example
> oneStepFrom0, twoStepsFrom0, oneStepFrom1 :: Double
> oneStepFrom0 = logit 0.1 1.0 (1 * 0.1)
> twoStepsFrom0 = logit 0.1 1.0 (1 * 0.2)
> oneStepFrom1 = logit oneStepFrom0 1.0 (1 * 0.1)
ghci> twoStepsFrom0
0.11949463171139338
ghci> oneStepFrom1
0.1194946317113934
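The check above covers a single point; as an aside (not from the original post), the semi-group property can be checked over a small grid of inputs in the same style:

```haskell
-- Sketch: the semi-group property says that evolving for x and then for y
-- is the same as evolving for x + y in one go.
logit :: Floating a => a -> a -> a -> a
logit p0 k x = k * p0 * exp x / (k + p0 * (exp x - 1))

semigroupHolds :: Double -> Double -> Double -> Double -> Bool
semigroupHolds p0 k x y =
  abs (logit (logit p0 k x) k y - logit p0 k (x + y)) < 1e-9

main :: IO ()
main = print (and [ semigroupHolds 0.1 1.0 x y
                  | x <- [0.0, 0.3, 0.7]
                  , y <- [0.1, 0.5] ])
```

This identity holds exactly in the algebra (only floating-point rounding remains), which is why the two ghci results above differ only in the last digit.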
We would like to infer the growth rate, not just predict the population, so we need to add another variable to our model.
Extended Kalman
This is almost in the form suitable for estimation using a Kalman filter, but the dependency of the state on the previous state is non-linear. We can modify the Kalman filter to create the extended Kalman filter (EKF) by making a linear approximation.
Since the measurement update is trivially linear (even in this more general form), the measurement update step remains unchanged.
By Taylor we have

f(x) ≈ f(x0) + A(x0) (x - x0) + e

where A is the Jacobian of f evaluated at the expansion point x0 (for the exposition of the extended filter we take f to be vector valued hence the use of a bold font). We take the error term e to be normally distributed with mean 0 and ignore any difficulties there may be with using Taylor with stochastic variables.
Applying this at the latest state estimate, and using the same reasoning as we did for the ordinary Kalman filter with A standing in for the linear state-update matrix, we obtain the prediction step

m_new     = f(m)
Sigma_new = A(m) Sigma A(m)^T + Sigma_x
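To make the update equations concrete before the full implementation, here is a scalar (1x1) sketch; all names are invented for illustration, the observation matrix is taken to be 1, and this is not the post's code:

```haskell
-- A one-dimensional extended Kalman step: measurement update (linear,
-- unchanged from the ordinary filter) followed by a prediction step that
-- linearises the state update f at the corrected mean.
ekfStep :: (Double -> Double)    -- state update f
        -> (Double -> Double)    -- its derivative f' (the 1x1 Jacobian)
        -> Double                -- process noise variance
        -> Double                -- observation noise variance
        -> (Double, Double)      -- prior (mean, variance)
        -> Double                -- observation
        -> (Double, Double)      -- predicted (mean, variance)
ekfStep f f' q r (m, p) y = (mNew, pNew)
  where
    -- measurement update
    s  = p + r               -- innovation variance
    kg = p / s               -- Kalman gain
    m' = m + kg * (y - m)    -- corrected mean
    p' = p - kg * s * kg     -- corrected variance
    -- prediction step using the linearisation a = f'(m')
    a    = f' m'
    mNew = f m'
    pNew = a * p' * a + q
```

With identity dynamics (f = id, f' = const 1) and no process noise this collapses to the ordinary scalar Kalman filter, e.g. ekfStep id (const 1) 0 1 (0, 1) 2 gives (1.0, 0.5).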
Haskell Implementation
Note that, in the case of the extended Kalman filter, we pass in the Jacobian of the update function as a function itself, rather than the matrix representing the linear function as we do in the case of the classical Kalman filter.
> k, p0 :: Floating a => a
> k = 1.0
> p0 = 0.1 * k
> r, deltaT :: Floating a => a
> r = 10.0
> deltaT = 0.0005
Relating ad and hmatrix is somewhat unpleasant but this can probably be ameliorated by defining a suitable datatype.
> a :: R 2 -> R 2
> a rpPrev = rNew # pNew
> where
> (r, pPrev) = headTail rpPrev
> rNew :: R 1
> rNew = konst r
>
> (p, _) = headTail pPrev
> pNew :: R 1
> pNew = fromList $ [logit p k (r * deltaT)]
> bigA :: R 2 -> Sq 2
> bigA rp = fromList $ concat $ j [r, p]
> where
> (r, ps) = headTail rp
> (p, _) = headTail ps
> j = jacobian (\[r, p] -> [r, logit p k (r * deltaT)])
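The jacobian function here comes from the ad package. As a package-free cross-check, the same matrix can be approximated with central finite differences using only base; numJacobian and its step size are illustrative names, not part of the original post:

```haskell
-- Central finite-difference approximation to the Jacobian of f at x.
-- Entry (j, i) approximates the partial derivative of output j with
-- respect to input i, mimicking ad's jacobian numerically.
numJacobian :: ([Double] -> [Double]) -> [Double] -> [[Double]]
numJacobian f x =
  [ [ (f (bump i h) !! j - f (bump i (negate h)) !! j) / (2 * h)
    | i <- [0 .. n - 1]
    ]
  | j <- [0 .. m - 1]
  ]
  where
    h = 1e-6
    n = length x
    m = length (f x)
    bump i d = [ if k == i then xk + d else xk | (k, xk) <- zip [0 ..] x ]
```

Applied to the state map \[r, p] -> [r, logit p k (r * deltaT)], the first row comes out as [1, 0]: the growth rate is carried through unchanged, which is exactly what the first row of bigA encodes.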
For some reason, hmatrix with static guarantees does not yet provide an inverse function for matrices.
> inv :: (KnownNat n, (1 <=? n) ~ 'True) => Sq n -> Sq n
> inv m = fromJust $ linSolve m eye
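linSolve m eye solves M X = I column by column, which is precisely the defining equation of the inverse. For the 2×2 case there is a textbook closed form to check against; inv2x2 and matMul2 below are illustrative names, not from the post:

```haskell
import Data.List (transpose)

-- Closed-form inverse of a 2x2 matrix via the adjugate and determinant.
inv2x2 :: [[Double]] -> [[Double]]
inv2x2 [[a, b], [c, d]] = [[d / det, -b / det], [-c / det, a / det]]
  where det = a * d - b * c
inv2x2 _ = error "inv2x2: expected a 2x2 matrix"

-- Naive matrix product, enough to check that inv2x2 m times m
-- recovers the identity.
matMul2 :: [[Double]] -> [[Double]] -> [[Double]]
matMul2 p q = [ [ sum (zipWith (*) row col) | col <- transpose q ] | row <- p ]
```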
Here is the extended Kalman filter itself. The type signatures on the expressions inside the function are not necessary but did help the implementor discover a bug in the mathematical derivation and will hopefully help the reader.
> outer :: forall m n . (KnownNat m, KnownNat n,
> (1 <=? n) ~ 'True, (1 <=? m) ~ 'True) =>
> R n -> Sq n ->
> L m n -> Sq m ->
> (R n -> R n) -> (R n -> Sq n) -> Sq n ->
> [R m] ->
> [(R n, Sq n)]
> outer muPrior sigmaPrior bigH bigSigmaY
> littleA bigABuilder bigSigmaX ys = result
> where
> result = scanl update (muPrior, sigmaPrior) ys
>
> update :: (R n, Sq n) -> R m -> (R n, Sq n)
> update (xHatFlat, bigSigmaHatFlat) y =
> (xHatFlatNew, bigSigmaHatFlatNew)
> where
>
> v :: R m
> v = y - (bigH #> xHatFlat)
>
> bigS :: Sq m
> bigS = bigH <> bigSigmaHatFlat <> (tr bigH) + bigSigmaY
>
> bigK :: L n m
> bigK = bigSigmaHatFlat <> (tr bigH) <> (inv bigS)
>
> xHat :: R n
> xHat = xHatFlat + bigK #> v
>
> bigSigmaHat :: Sq n
> bigSigmaHat = bigSigmaHatFlat - bigK <> bigS <> (tr bigK)
>
> bigA :: Sq n
> bigA = bigABuilder xHat
>
> xHatFlatNew :: R n
> xHatFlatNew = littleA xHat
>
> bigSigmaHatFlatNew :: Sq n
> bigSigmaHatFlatNew = bigA <> bigSigmaHat <> (tr bigA) + bigSigmaX
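To check the algebra without the hmatrix machinery, the same update can be transcribed for a one-dimensional state and observation using plain Doubles; ekfStep and its argument names are illustrative, not from the post:

```haskell
-- One cycle of a scalar extended Kalman filter, mirroring update above:
-- a measurement update followed by a prediction linearised at the
-- freshly updated estimate.
ekfStep :: (Double -> Double)  -- state map a
        -> (Double -> Double)  -- derivative of a (the 1x1 Jacobian)
        -> Double              -- observation coefficient h
        -> Double              -- observation noise variance
        -> Double              -- state noise variance
        -> (Double, Double)    -- prior (mean, variance)
        -> Double              -- observation y
        -> (Double, Double)    -- predictive (mean, variance) for next step
ekfStep a a' h sigmaY sigmaX (xHatFlat, pFlat) y =
  (a xHat, bigA * p * bigA + sigmaX)
  where
    v    = y - h * xHatFlat        -- innovation
    s    = h * pFlat * h + sigmaY  -- innovation variance
    kG   = pFlat * h / s           -- Kalman gain
    xHat = xHatFlat + kG * v       -- updated mean
    p    = pFlat - kG * s * kG     -- updated variance
    bigA = a' xHat                 -- Jacobian at the updated estimate
```

With a linear state map (a = id, a' = const 1) this collapses to the ordinary Kalman filter: folding it over repeated observations of a constant drives the mean towards the data while the variance shrinks.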
Now let us create some sample data.
> obsVariance :: Double
> obsVariance = 1e-2
> bigSigmaY :: Sq 1
> bigSigmaY = fromList [obsVariance]
> nObs :: Int
> nObs = 300
> singleSample :: Double -> RVarT (W.Writer [Double]) Double
> singleSample p0 = do
> epsilon <- rvarT (Normal 0.0 obsVariance)
> let p1 = logit p0 k (r * deltaT)
> lift $ W.tell [p1 + epsilon]
> return p1
> streamSample :: RVarT (W.Writer [Double]) Double
> streamSample = iterateM_ singleSample p0
> samples :: [Double]
> samples = take nObs $ snd $
> W.runWriter (evalStateT (sample streamSample) (pureMT 3))
We created our data with a growth rate of
ghci> r
10.0
but let us pretend that we have read the literature on growth rates of bee colonies and have serious doubts about the growth rate, while being almost certain about the size of the colony at t = 0.
> muPrior :: R 2
> muPrior = fromList [5.0, 0.1]
>
> sigmaPrior :: Sq 2
> sigmaPrior = fromList [ 1e2, 0.0
> , 0.0, 1e-10
> ]
We only observe the population and not the rate itself.
> bigH :: L 1 2
> bigH = fromList [0.0, 1.0]
Strictly speaking the state noise should be 0, since the dynamics are deterministic, but this is close enough.
> bigSigmaX :: Sq 2
> bigSigmaX = fromList [ 1e-10, 0.0
> , 0.0, 1e-10
> ]
Now we can run our filter and watch it switch away from our prior belief as it accumulates more and more evidence.
> test :: [(R 2, Sq 2)]
> test = outer muPrior sigmaPrior bigH bigSigmaY
> a bigA bigSigmaX (map (fromList . return) samples)

Dominic Steinitz
http://idontgetoutmuch.wordpress.com/?p=752
Tue, 09 Sep 2014 08:28:12 +0000

Misassigned credit for conduit
http://www.yesodweb.com/blog/2014/09/misassigned-credit-for-conduit
When I was at ICFP last week, it became clear that I had made a huge mistake in
the past three years. A few of us were talking, including Erik de Castro Lopo,
and when I mentioned that he was the original inspiration for creating the
conduit package, everyone else was surprised. So firstly: Erik, I apologize for
not making it clear that you initially kicked off development by finding some
fun corner cases in enumerator that were difficult to debug.

So to rectify that, I think it's only fair that I write the following: conduit is entirely Erik's fault.

If you love conduit, write Erik a thank you email. More importantly, if you hate conduit, there's no need to complain to me anymore. Erik presumably will be quite happy to receive all such further communications. In other words, it's not my company, I just work here.

Thanks Erik :)

UPDATE Please also read my follow-up blog
post clarifying this one, just
in case you're confused.

http://www.yesodweb.com/blog/2014/09/misassigned-credit-for-conduit
Tue, 09 Sep 2014 00:00:00 +0000

Haskell Implementors Workshop 2014 videos available!
http://ghc.haskell.org/trac/ghc/blog/HIW2014
hvr
http://ghc.haskell.org/trac/ghc/blog/HIW2014
Mon, 08 Sep 2014 15:55:58 +0000

Promoting functions to type families in Haskell
http://lambda.jstolarek.com/2014/09/promoting-functions-to-type-families-in-haskell/
It’s been very quiet on the blog these past few months, not because I’m spending less time on functional programming but precisely for the opposite reason. Since January I’ve been working together with Richard Eisenberg to extend his singletons library. This work was finished in June, and last Friday I gave a talk about our research at Haskell Symposium 2014. This was the first time I’ve been to ICFP and the Haskell Symposium. It was pretty cool to finally meet all these people I know only from IRC. I also admit that the atmosphere of the conference quite surprised me, as it often felt like some sort of fan convention rather than the biggest event in the field of functional programming.
The paper Richard and I published is titled “Promoting Functions to Type Families in Haskell”. This work is based on Richard’s earlier paper “Dependently typed programming with singletons”, presented two years ago at the Haskell Symposium. Back then Richard presented the singletons library, which uses Template Haskell to generate singleton types and functions that operate on them. Singleton types are types that have only one value (aside from bottom), which makes it possible to reason about runtime values during compilation (some introduction to singletons can be found in this post on Richard’s blog). This smart encoding makes it possible to simulate some of the features of dependent types in Haskell. In our current work we extended the promotion capabilities of the library. Promotion is only concerned with generating type-level definitions from term-level ones. The type-level language in GHC has become quite expressive over the last couple of years, but it is still missing many features available in the term-level language. Richard and I have found ways to encode almost all of these missing features using the existing type-level language features. What this means is that you can write a normal term-level definition and our library will automatically generate an equivalent type family. You’re only forbidden from using infinite terms, the do-notation, and decomposing String literals into Chars. Numeric literals are also very problematic and the support is very limited, but some of the issues can be worked around. What is really cool is that our library allows you to have partial application at the type level, which GHC normally prohibits.
You can learn more by watching my talk on YouTube, reading the paper or the singletons documentation. Here I’d like to add some information that is not present in the paper. First of all, the paper was concerned only with promotion and didn’t say anything about singletonization. But as we enabled more and more language constructs to be promoted, we also made them singletonizable. So almost everything that can be promoted can also be singletonized. The most notable exception to this rule is type classes, which are not yet implemented.
An interesting issue was raised by Adam Gundry in a question after the talk: what about the difference between lazy term-level semantics and strict type-level semantics? You can listen to my answer in the video but I’ll elaborate some more on this here. At one point during our work we were wondering about this issue and decided to find an example of an algorithm that crucially relies on laziness to work, i.e. fails to work with strict semantics. I think it’s not straightforward to come up with such an algorithm, but luckily I recalled the backwards state monad from Philip Wadler’s paper “The essence of functional programming” [1]. The bind operator of that monad looks like this (definition copied from the paper):
m `bindS` k = \s2 -> let (a, s0) = m s1
                         (b, s1) = k a s2
                     in (b, s0)
The tricky part here is that the value produced by m becomes the input to k, while the state produced by k becomes the input state of m. Implementing this in a strict language does not look at all straightforward. So I promoted that definition expecting it to fail spectacularly, but to my surprise it worked perfectly fine. After some investigation I understood what’s going on. Type-level computations performed by GHC are about constraint solving. It turns out that GHC is able to figure out in which order to solve these constraints and get the result. It’s exactly analogous to what happens with the term-level version at runtime: we have an order of dependencies between the closures and there is a way in which we can run these closures to get the final result.
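To make this concrete, here is a self-contained term-level sketch of the backwards state monad in plain Haskell (the RState name and the sums-to-the-right example are mine, not from Wadler’s paper); the mutually recursive let in the bind only terminates thanks to laziness:

```haskell
-- Wadler's backwards state monad: values flow left-to-right as usual,
-- but the state flows right-to-left through the computation.
newtype RState s a = RState { runRState :: s -> (a, s) }

instance Functor (RState s) where
  fmap f (RState m) = RState $ \s -> let (a, s') = m s in (f a, s')

instance Applicative (RState s) where
  pure a = RState $ \s -> (a, s)
  mf <*> ma = RState $ \s2 ->
    let (f, s0) = runRState mf s1   -- mf sees the state produced by ma
        (a, s1) = runRState ma s2
    in (f a, s0)

instance Monad (RState s) where
  m >>= k = RState $ \s2 ->
    let (a, s0) = runRState m s1      -- m receives k's output state
        (b, s1) = runRState (k a) s2
    in (b, s0)

getR :: RState s s
getR = RState $ \s -> (s, s)

modifyR :: (s -> s) -> RState s ()
modifyR f = RState $ \s -> ((), f s)

-- Pair each element with the sum of the elements to its *right*:
-- the initial state enters at the end of the list and flows backwards.
sumsToRight :: [Int] -> [(Int, Int)]
sumsToRight xs = fst $ runRState (mapM step xs) 0
  where
    step x = do
      modifyR (+ x)   -- state-wise this runs *after* the getR below
      rest <- getR
      return (x, rest)
```

sumsToRight [5, 3, 2] evaluates to [(5,5),(3,2),(2,0)]: each element already "knows" the total of the elements to its right, because the state flows in from the end of the list.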
All of this work is a small part of a larger endeavour to push Haskell’s type system towards dependent types. With singletons you can write type-level functions easily by writing their definitions using the term-level language and then promoting these definitions. And then you can singletonize your functions to work on singleton types. There were two other talks about dependent types during the conference: Stephanie Weirich’s “Depending on Types” keynote lecture during ICFP and Richard’s “Dependent Haskell” talk during the Haskell Implementors Workshop. I encourage everyone interested in Haskell’s type system to watch both of these talks.
[1] The awful truth is that this monad does not really work with the released version of singletons. I only realized that when I was writing this post. See issue #94 on the singletons bug tracker.

Jan Stolarek
http://lambda.jstolarek.com/?p=1551
Mon, 08 Sep 2014 11:02:53 +0000

Shake in the wild
http://neilmitchell.blogspot.com/2014/09/shake-in-wild.html
Neil Mitchell
tag:blogger.com,1999:blog-7094652.post-6273210663500888526
Sun, 07 Sep 2014 21:02:00 +0000

Haskell Implementor’s Workshop ’14
http://feedproxy.google.com/~r/ezyang/~3/XdBqn3UxVRI/
This year at ICFP, we had some blockbuster attendance to the Haskell Implementor's Workshop (at times, it was standing room only). I had the pleasure of presenting the work I had done over the summer on Backpack.
You can grab the slides or view the presentation itself (thank you ICFP organizers for being incredibly on-the-ball with videos this year!) The talk intersects a little bit with my blog post A taste of Cabalized Backpack, but there are more pictures, and I also emphasize (perhaps a little too much) the long term direction we are headed in.
There were a lot of really nice talks at HiW. Here are some of my personal highlights:
Richard Eisenberg: Dependent Haskell
Alejandro Serrano: Interactive features in ghc-mod (lightning talk) (Heavily based off of Idris-mode)
Lennart Augustsson: Better type-error messages (lightning talk) (Why isn’t this in GHC? The big problem is that the data type representing types is the same as the internal data type for types in Core. Tracking locations should be strictly at the surface syntax, so we’d have to refactor the frontend first.)
Michael Adams: Extending "Optimize your SYB" (lightning talk) (In the original paper, they suggest speedups can be gained by aggressively evaluating expressions of type TypeRep, TyCon, Data and Typeable at compile time. Michael was wondering if there were any other types which should receive similar treatment. One audience member suggested Int (i.e. to get rid of boxing), but I don’t find that very convincing.)

Edward Z. Yang
http://blog.ezyang.com/?p=9251
Sun, 07 Sep 2014 13:05:02 +0000