Monday, September 18, 2017

 

Smullyan - Martian/Venusian Men & Women

In "Forever Undecided", Raymond Smullyan proposes a place occupied by Martian men & women, and Venusian women & men. The Martian men always tell the truth, the Martian women always lie, Venusian women always tell the truth, and Venusian men always lie. This is one of his puzzles:

The next [Martian/Venusian man/woman] who walked by made a statement from which George and Oona could deduce that [they] must be a Venusian female. What statement could it have been?

I was surprised that my answer was not the same as Smullyan's -- or rather, it didn't appear to be at first. Smullyan suggested the statement "I am either female or Venusian". The reasoning is that a Martian man could not make this statement, because it would be false in his case and Martian men always tell the truth. A Martian woman couldn't make the statement, because she always lies and the statement would be true for her, since she is female. Likewise, a Venusian man couldn't make the statement, because it is true that he is Venusian, and he too must always lie. That leaves the Venusian woman as the only person who can make the statement (the either-or is inclusive, so it is fine that both parts are true for her).

My answer was "I am not a Martian male", which is also correct. A Martian man can't say that because he would be lying. Martian women and Venusian men can't make the statement because it would be true for them, and they have to lie. So again, that leaves the Venusian woman as the only person who can make the statement.

Now, I said that my answer didn't appear to be the same as Smullyan's at first. I said that because I realized that they are the same by DeMorgan's laws. These laws basically show the relationship between AND, OR, and NOT. That is:
(NOT a) AND (NOT b) is the same as NOT (a OR b)
(NOT a) OR (NOT b) is the same as NOT (a AND b)

If I use M to represent Martian and m to represent male, then my statement, "I am not a Martian male", is simply NOT (M AND m). If you think of Venusian as "not Martian" and female as "not male", then Smullyan's statement, "I am either Venusian or female", is (NOT M) OR (NOT m), which by DeMorgan's laws is the same as NOT (M AND m).
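
To convince myself, here is a small OCaml sketch (my own check, not Smullyan's): each speaker is a pair (is_martian, is_male), Martian men and Venusian women are the truth-tellers, and a speaker can make a statement only if the statement's truth value matches the speaker's type.

    let tells_truth (is_martian, is_male) = (is_martian = is_male)

    (* a speaker can make a statement only if its truth value matches
       whether that speaker tells the truth *)
    let can_say stmt speaker = (tells_truth speaker = stmt speaker)

    (* Smullyan's statement and mine, as predicates on the speaker *)
    let smullyan (is_martian, is_male) = (not is_male) || (not is_martian)
    let mine (is_martian, is_male) = not (is_martian && is_male)

    let () =
      List.iter
        (fun speaker ->
          let (m, male) = speaker in
          Printf.printf "martian=%b male=%b  smullyan=%b mine=%b\n"
            m male (can_say smullyan speaker) (can_say mine speaker))
        [ (true, true); (true, false); (false, true); (false, false) ]

    (* only martian=false male=false -- the Venusian female -- can make either
       statement, and the two columns agree for every speaker *)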

Friday, September 15, 2017

 

Satan, Cantor, and Infinity p. 16

I am reading Raymond Smullyan's "Satan, Cantor, and Infinity", and on page 16 I find myself in some disagreement with the solution to the puzzle, although I agree with the final result. Here is the puzzle as stated (bear in mind that this story takes place on an island where there are only Knights, who always tell the truth, and Knaves who always lie):

The case concerns a stolen horse. There are four suspects -- Andrew, Bruce, Clayton, and Edward. The authorities know for sure that one and only one of these four is the thief. The first three have already been found and put in custody, but Edward cannot be found anywhere. The trial will have to proceed without him.

First, the judge pounded his gavel and asked a highly relevant question: "Who stole the horse?" He got the following replies:
Andrew: Bruce stole the horse.
Bruce: Clayton stole the horse.
Clayton: It was Edward who stole the horse.

Then, quite unexpectedly, one of the three defendants said, "The other two are lying."

The judge thought for a bit, then pointed to one of the three and said, "Obviously, you didn't steal the horse, so you may leave the court."

The acquitted man was happy to comply, and so only two defendants were left on trial.

The judge then asked one of the remaining two whether the other was a knight, and, after receiving an answer (yes or no), knew who stole the horse. What did the judge decide?

Now, here is the answer from the book:

First we must determine whom the judge immediately acquitted. Suppose it was Andrew. If Andrew is a knight, then Bruce must be guilty and Andrew is innocent. If Andrew is a knave, then it is false that Bruce and Clayton both lied; at least one of them told the truth. This means that either Clayton is guilty (if Bruce told the truth), or Edward is guilty (if Clayton told the truth); in either case, Andrew would be innocent. And so if it was Andrew who made the second statement, he is innocent, regardless of whether he is a knight or a knave. The judge, of course, would have realized this and acquitted him.

However, if either Bruce or Clayton made the second statement, the judge could not have found grounds to acquit anyone. If Bruce spoke, the judge could only tell that either Bruce is a knight and Clayton is guilty, or Bruce is a knave and either Bruce or Edward is guilty. If it was Clayton who spoke, then the judge could only tell that either Clayton is a knight and Edward is guilty, or Clayton is a knave and either Bruce or Clayton is guilty. Since the judge did make an acquittal, it must have been Andrew who spoke and was acquitted.

Thus the remaining defendants were Bruce and Clayton. One of these two, by answering the judge's last question, either claimed that the other defendant was a knight, or that the other was a knave. If the former, then the two defendants are the same type (both knights or both knaves); if the latter then the two are different types. Suppose the latter. Then Clayton may be a knight and Bruce a knave, in which case Edward is guilty (because Clayton said he was), or Bruce may be a knight and Clayton a knave, in which case Clayton is guilty. However, the judge couldn't have known which, and hence couldn't make a conviction. Therefore, one of the two must have claimed that the other was a knight (by answering yes to the judge's question). The judge then knew that they are of the same type. They can't both be knights (since their accusations conflict), so they are both knaves and their accusations are both false. Neither Clayton nor Edward stole the horse. Andrew, as we know, has already been acquitted. So it was Bruce who stole the horse.

The main issue I have with this analysis is that I think it is irrelevant who said "The other two are lying", and that the judge could conclude that Andrew is innocent regardless of who said it (it doesn't specify that he released the one who said it, just that he released "one of the three"). I also think that you don't have to use the process of elimination to conclude that Bruce is guilty.

Once the three have testified, we can quickly conclude that at most one of them is telling the truth, since all three statements contradict one another. That leaves either exactly one truth-teller, or all three lying. Now, when someone states "The other two are lying", that eliminates the possibility that all three are lying. The case where the statement is true should be obvious. If the person saying it is lying, then it is not true that both of the others are lying, so at least one of them is telling the truth. Since there can't be two truth-tellers because of the conflicting statements, the statement means that exactly one person is telling the truth. Given that one of the three original statements is true, we know that it must be Bruce, Clayton, or Edward who is guilty, so Andrew cannot be guilty. This conclusion does not require that Andrew said "The other two are lying".

In looking at Smullyan's analysis of the statement, he says that "If Bruce spoke, the judge could only tell that either Bruce is a knight and Clayton is guilty, or Bruce is a knave and either Bruce or Edward is guilty. If it was Clayton who spoke, then the judge could only tell that either Clayton is a knight and Edward is guilty, or Clayton is a knave and either Bruce or Clayton is guilty. Since the judge did make an acquittal, it must have been Andrew who spoke and was acquitted." I think he missed the fact that if either Bruce or Clayton spoke, Andrew is still exonerated. If Bruce spoke, either Clayton, Bruce, or Edward are guilty. If Clayton spoke, either Clayton, Bruce, or Edward are guilty. In neither case is Andrew guilty, so it does not follow that Andrew must have been the one to speak when the judge rightly concluded that Andrew was innocent.

My other point about Smullyan's conclusion is just that you don't have to use the process of elimination to settle on Bruce. Since we know that only one person is telling the truth, we have three possibilities left:
  • Bruce is a knight, Andrew and Clayton are knaves, Clayton is guilty
  • Clayton is a knight, Andrew and Bruce are knaves, Edward is guilty
  • Andrew is a knight, Bruce and Clayton are knaves, Bruce is guilty


As Smullyan explains above, the judge must have received a "yes" in reply to the second question, indicating that both Bruce and Clayton are knaves. You can now directly conclude that Bruce is guilty, because if Bruce and Clayton are both knaves, then the knight -- the lone truth-teller -- must be Andrew, who said that Bruce is guilty.
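
Just to double-check all of this, here is a small OCaml brute-force sketch (my own, not from the book): enumerate the thief and the knight/knave assignments of the three defendants, and keep only the scenarios consistent with the three accusations, with the fact forced by "The other two are lying" (at least one of the three is a knight), and with the "yes" answer (Bruce and Clayton are the same type).

    type person = Andrew | Bruce | Clayton | Edward

    let name = function
      | Andrew -> "Andrew" | Bruce -> "Bruce"
      | Clayton -> "Clayton" | Edward -> "Edward"

    let () =
      List.iter
        (fun thief ->
          List.iter
            (fun (a, b, c) ->
              (* each accusation is true exactly when its speaker is a knight *)
              let testimony_ok =
                a = (thief = Bruce)            (* Andrew: Bruce stole it   *)
                && b = (thief = Clayton)       (* Bruce: Clayton stole it  *)
                && c = (thief = Edward)        (* Clayton: Edward stole it *)
              in
              let at_least_one_knight = a || b || c in  (* "the other two are lying" *)
              let same_type = (b = c) in                (* the "yes" answer *)
              if testimony_ok && at_least_one_knight && same_type then
                Printf.printf "thief = %s\n" (name thief))
            [ (false, false, false); (false, false, true);
              (false, true, false);  (false, true, true);
              (true, false, false);  (true, false, true);
              (true, true, false);   (true, true, true) ])
        [ Andrew; Bruce; Clayton; Edward ]

    (* the only line printed is: thief = Bruce *)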

    Wednesday, August 12, 2015

     

    Some reflections on my coding style after 10 years of ICFP Programming Contests

    The first ICFP Programming Contest that I participated in was the 2005 Cops & Robbers problem, organized by Robby Findler and the PLT group. I was just learning OCaml at the time and used it for my solution. Prior to OCaml, the only other functional languages I had used were Lisp and Scheme.

    Although I used Haskell in 2015, I used OCaml for the 2014 contest, so I find it useful to compare my code between 2005 and 2014.

    One of the things I notice in my code from that first contest is how much it reflects my background in stateful, imperative languages. For example, most of the data types I defined were records full of mutable fields, like this:
    type player_rec = {
      mutable player_name : string;
      mutable ptype : player_type;
      mutable ploc : int;
    };;
    

    I didn't understand at the time how easy it is to copy a record while modifying only the fields you want to change.
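
    For instance, an immutable version of that record can be copied with just one field changed using the { record with ... } syntax. A small sketch (the player_type constructors below are placeholders, not the real ones from my entry):

    type player_type = Cop | Robber   (* placeholder constructors *)

    type player = {
      player_name : string;
      ptype : player_type;
      ploc : int;
    }

    (* copy the record, changing only the location *)
    let move player new_loc = { player with ploc = new_loc }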

    Another symptom of my stateful thinking is that I used refs and List.iter a lot when there were better ways to do it. For example, here's how I computed the distance to the nearest cop in the Cops & Robbers game:
    let distance_from_nearest_cop loc =
      let distance = ref !max_distance in
      let get_cop_distance cop =
        let dist = ((cop_distance_selector cop) (!the_world.nodes.(cop.ploc))).(loc) in
          ((Printf.eprintf "Distance from cop %s is %d\n" cop.player_name dist);
          if dist < !distance then
        distance := dist)
      in
        List.iter get_cop_distance (get_cops());
        !distance;;
    


    A better option here might be to define a function to return the minimum value in a list, and then map a get_distance function over all the cops. Something like this:
    let minimum l = List.fold_left min (List.hd l) (List.tl l)
    
    let distance_from_nearest_cop loc cops =
      minimum (List.map (get_distance loc) cops)
    


    In 2014, the contest involved writing code to run on a virtual machine based on the SECD machine. I wrote a compiler in OCaml to convert a simple form of Lisp to the instructions for the GCC (General Compute Coprocessor -- the SECD-like machine). The compiler builds an abstract syntax tree (AST) and then generates code from the tree. Here is a snippet of the definitions for the AST:
    type defs = def list
    and
    def =
        | Defun of string * (string list) * (statement list)
        | Defun_Tail of string * (string list) * (statement list)
        | Defconst of string * int
    and
    statement =
        | Let of (let_env list) * (statement list)
        | If of (statement * statement * statement)
        | Begin of statement list
        | Function_Call of string * (statement list)
        | Tail_Function_Call of string * (statement list)
        | Cons of statement * statement
        | Car of statement
    


    When processing the AST, I need to keep track of a lot of state information for the code generator. This is the state type:
    type state = {
       current_function_code : string list;
       environment_stack : string list list;
       function_list : (string*(string list)) list;
       next_temp_number : int;
       symbol_table : (string * symbol_table_entry) list;
       pc : int;
       is_tail_recursive : bool;
       top_level_env : string list
    }
    


    Notice that there aren't any mutable fields any more. Instead, I pass the state to various functions and they return a modified version of the state. For example, this function is how I add new instructions to the current function code I am generating:
    let add_code_lines state code_lines =
        { state with current_function_code = (List.append state.current_function_code code_lines) }
    

    And as an example, here is how I generate the instructions for a NOT operation:
    let generate_not state =
        add_code_lines state [ "LDC 0"; "CEQ" ]
    

    If you're curious, the "LDC 0" loads the constant 0 onto the data stack, and CEQ does an equality comparison.

    Where this way of processing state looks a little ugly is when I chain a lot of operations together. For example:
    let generate_function state fn env_stack env statements return_type =
        let curr_code = state.current_function_code in
        let state = { state with current_function_code=[]; environment_stack=env::env_stack} in
        let state = add_code_line state (";FN="^fn) in
        let state = generate_statements state statements in
        let state = add_code_line state return_type in
        let state = { state with function_list = (fn, state.current_function_code)::state.function_list} in
        let state = add_symbol state fn (FunctionSymbol (-1)) in
        { state with current_function_code = curr_code; environment_stack=List.tl state.environment_stack }
    

    A nicer way to string these calls together is to define the functions to take state as the final parameter. Here are some simple examples:
    let fun1 tn state = { state with next_temp_number = tn }
    let fun2 code state = {
      state with current_function_code = List.append state.current_function_code code }
    let fun3 newpc istr state = {
      state with is_tail_recursive = istr; pc = newpc }
    

    Then, you can use the pipe-forward or reverse-application operator (|>), which I like to call the "chicken operator", to chain the calls together, like this:
    let process_something state =
      state |> fun1 4 |> fun2 ["LDC 0"; "CEQ"] |> fun3 123 false
    

    Because state is the last parameter, I just omit it from the calls to the other functions and they are treated as partially applied functions where the |> operator will supply the final state parameter. This operator wasn't in OCaml in 2005, but I could have just defined something like it (I think).
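
    For reference, the definition itself is a one-liner, so I could have written something like this back then (reverse application; OCaml has since added |> to its standard library):

    (* reverse application: feed x into f *)
    let ( |> ) x f = f x

    (* e.g. this evaluates to "4" *)
    let _ = 3 |> (fun x -> x + 1) |> string_of_int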

    Thursday, May 22, 2008

     

    Adding 1 to a binary number in sed

    I have an odd fascination with binary clocks. I just recently built my own, based on an AVR microcontroller, which I will post more about soon. I also have a binary watch that was a present from Ceal a few years ago. During a particularly boring meeting yesterday, I was watching it count the seconds, and I realized that to increment a binary number by 1, you are really looking for the pattern 01*$ (the last part of the number, consisting of a 0 followed by only 1s to the end of the line). When you find that pattern, you just invert the bits.

    For example, to add 1 to 10011 (19 decimal), you are only interested in the 011 at the end. You invert that part and get 10100 (20 decimal). The only time the 01*$ pattern doesn't work is when the number is all 1s. In that case, you can insert a leading 0 and again just invert the bits. For example, 1111 (15 decimal) first becomes 01111, then inverted becomes 10000 (16 decimal).
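
    Spelled out in code, the idea is tiny. Here is a quick OCaml sketch of it (just an illustration -- the sed version below is the real point): invert everything from the last 0 to the end, prepending a 0 first when the number is all 1s.

    let increment bits =
      (* if the number is all 1s, prepend a 0 so the 01*$ pattern exists *)
      let bits = if String.contains bits '0' then bits else "0" ^ bits in
      (* the last '0' starts the trailing 01* run; invert from there to the end *)
      let i = String.rindex bits '0' in
      String.mapi (fun j c -> if j < i then c else if c = '0' then '1' else '0') bits

    (* increment "10011" = "10100";  increment "1111" = "10000" *)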

    So, after that little epiphany, I began to wonder if I could write a sed script that functioned as a binary adder. At first I didn't think it could be done, because I was thinking purely in terms of doing substitutions. I have since learned much more about the capabilities of sed and found it was possible. Here is the script, along with an explanation of what each line does:


    s/^1*$/0&/
    h
    s/\(.*\)01*$/\1/
    x
    s/^.*\(01*\)$/\1/
    y/01/10/
    x
    G
    s/\n//



    s/^1*$/0&/           If the number is all 1s, insert a leading 0
    h                    Copy the current line to the hold buffer
    s/\(.*\)01*$/\1/     Strip off the 01*$ pattern from the end
    x                    Exchange the current pattern with the hold buffer
    s/^.*\(01*\)$/\1/    Strip off everything but the 01*$ pattern
    y/01/10/             Invert the bits in the current line
    x                    Exchange the current pattern with the hold buffer (go back to the prefix bits)
    G                    Append the hold buffer (the end bits that were just inverted) to the current pattern (sed puts it on a separate line)
    s/\n//               Join the two lines


    To test this, I wrote a little script that starts with 0, runs it through the adder, prints the result, and then runs the result through the adder again. This is the script (caution, it loops forever):


    #!/bin/sh
    NUM=0
    while true; do
        NUM=`echo $NUM | sed -f adder.sed`
        echo $NUM
    done


    And here is a snippet of the results:

    1
    10
    11
    100
    101
    110
    111
    1000
    1001
    1010
    1011
    1100
    1101
    1110
    1111
    10000
    10001
    10010
    10011
    10100

    Thursday, March 09, 2006

     

    Beyond XML, part 2

    I have come to the realization that the thing I was trying to describe in my previous post is fairly close to ASN.1 (Abstract Syntax Notation One). The thing about ASN.1 that I never really grasped was that you can have different encoding rules. Thus, you can have an encoding that encodes as XML, or another one that uses a more efficient binary protocol. I came to this realization while reading about Sun's Fast Infoset for SOAP requests, during my desperate search for a production-ready Java SOAP implementation that was reasonably fast. Still looking, though...

    Sunday, August 14, 2005

     

    Beyond XML

    Most people I have worked with will tell you that I am not a fan of XML. It has its places, but people seem to try to use it everywhere they can just because it is popular. I was reflecting this morning on XML Schemas and how ugly they are. It is difficult to look at a schema and visualize the data layout.

    At first, I wondered why this is so, and why you can look at a data definition in other languages and get a good idea of how everything fits together. Some of this is probably the noise of XML -- all those tags just give your brain more information to process. I think another part of it, though, is that XML is so explicit about things. In other languages, you can define things more positionally. That is, you can define a variable declaration as "type varname". In XML, you can do the same thing in your schema, but you still need tags for the type and the varname.

    As I thought about this, it occurred to me that maybe all we need is a really super schema language. One where we define mappings to and from an abstract data notation. Instead of just mapping to/from XML, it would be great if you could map to/from all kinds of formats -- property files, flat files, comma-delimited, etc. You could couple this with some kind of interface definition language that expresses an interface in terms of the abstract data notation and then you could have something like SOAP but with support for multiple syntactic forms.

    For example, suppose I define a remote-call interface in an abstract notation that looks something like this:
    remote_call_request = string call_name, any params[];
    remote_call_response = any result;

    Then I could define multiple syntaxes for the remote call. One might look like a SOAP request, another might be pipe-delimited text. It seems like the syntax definition might be pretty complicated to write, but it also seems like it could solve some headaches. It would be more friendly to legacy systems, you could get closer to a binary XML kind of format, and you could convert easily from one format to another (like you do with XSLT, but with better support for non-XML formats).
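
    To make that a little more concrete, here is a rough OCaml sketch of the kind of thing I am imagining (all the names here are hypothetical, and this is only one possible shape for it): an abstract value type, the abstract request definition, and one concrete syntax rendered from it.

    (* a tiny stand-in for the abstract data notation *)
    type value =
      | VString of string
      | VInt of int
      | VList of value list

    type remote_call_request = { call_name : string; params : value list }

    (* one possible concrete syntax: pipe-delimited text *)
    let rec render = function
      | VString s -> s
      | VInt i -> string_of_int i
      | VList vs -> String.concat "," (List.map render vs)

    let to_pipe_delimited req =
      String.concat "|" (req.call_name :: List.map render req.params)

    (* to_pipe_delimited { call_name = "getQuote"; params = [ VString "IBM" ] }
       gives "getQuote|IBM"; a SOAP-style encoder would just be another
       function over the same types *)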

    I don't know if anyone is working on something like this, or if it is even possible, but it is interesting to think about.

    Thursday, July 14, 2005

     

    Lispy or not?

    I just started making Lisp versions of my cop and robber entries from the ICFP contest. I am aware that, because the other languages I use don't have anything like Lisp's macros, I tend not to use macros in Lisp. I am trying to change that.

    When I was coding my parser in OCaml, I thought it was cool to use the match statement to parse the various lines, like this:

    let parse_line line =
      match Str.split (Str.regexp "[ \t]+") line with
        "wsk\\" :: [] -> handle_wsk_start()
      | "name:" :: name :: [] -> handle_name name
      | "robber:" :: name :: [] -> handle_robber name
      | "cop:" :: name :: [] -> handle_cop name
      | "nod\\" :: [] -> handle_nodes_start()
      | "nod:" :: name :: tag :: x :: y :: [] ->
          handle_node name (node_tag_of_string tag) (int_of_string x) (int_of_string y)


    Since Lisp doesn't have match, I wondered what a good way to do this in Lisp would be. I decided to do it with a macro. I keep a global hash table that maps keywords like "wsk\\" and "name:" to functions that handle the data for those keywords. I use a macro named defparse to create such a function; instead of a function name, I supply the keyword it handles. Here is how I handle the "nod:" line, for example:

    (defparse "nod:" (name tag x# y#)
    (push (make-instance 'node :name name :tag tag :x x# :y y#) *build-node-list*))

    One of the interesting things here is that you are seeing the function that handles the line, not just my definition of what is on the "nod:" line. In the OCaml version, I had all the keywords in one big match statement and made them call separate functions. I could have implemented the functions inline in the match statement, but I thought it might look ugly. I like the Lisp solution here because I define the line contents and the handler function in one place, yet it isn't cluttered with the other line definitions.

    There are a couple of things going on in defparse. First, it turns the function body into a lambda expression that is mapped in the global keyword hash table. Next, it looks for parameters whose names end in # and automatically converts them from string to integer. This is what the expansion of my (defparse "nod:" ...) looks like:

    (SETF (GETHASH "nod:" *COMMAND-HANDLERS*)
          (LAMBDA (NAME TAG #:G4171 #:G4172)
            (LET ((|X#| (PARSE-INTEGER #:G4171)) (|Y#| (PARSE-INTEGER #:G4172)))
              (PUSH (MAKE-INSTANCE 'NODE :NAME NAME :TAG TAG :X |X#| :Y |Y#|)
                    *BUILD-NODE-LIST*))))

    Those #:G4171 and #:G4172 names are unique symbol names generated by the gensym function. A macro in Lisp is basically a function whose return value is code that will be compiled. This means that it blends in well with the language -- there are a few syntactic differences having to do with quoting, but otherwise you don't have to learn a separate macro language. One subtlety is that the macros run at compile time, but you can still make use of the functions you have defined (as far as I can tell, at least).

    For the curious, here is my defparse macro:

    (defmacro defparse (command param-list &body body)
      (let* ((int-params (remove-if-not
                           #'(lambda (param)
                               (let ((ps (symbol-name param)))
                                 (char= (elt ps (1- (length ps))) #\#)))
                           param-list))
             (param-map (mapcar #'(lambda (x) (cons x (gensym))) int-params))
             (param-list-with-syms (mapcar
                                     #'(lambda (param)
                                         (if (assoc param param-map)
                                             (cdr (assoc param param-map))
                                             param))
                                     param-list))
             (let-list (mapcar #'(lambda (pg) (list (car pg) (list 'parse-integer (cdr pg))))
                               param-map)))
        `(setf (gethash ,command *command-handlers*)
               (lambda ,param-list-with-syms
                 (let ,let-list ,@body)))))


    The thing I am a bit curious about is whether this is a Lispy way to approach the problem. Would a Lisp pro nod his or her head at this, or start to gag?
