From KLOTZ at MIT-MC Tue Jun 30 00:00:00 1981
From: KLOTZ at MIT-MC (Leigh L. Klotz)
Date: 30 June 1981, 00:00
Subject: ellen's hactrn
Message-ID:

The same thing happened to me recently.  There are some tourists on AI
who have written this hack which runs as a disowned job and periodically
sends random messages to the hackee:

    [Message from your Terminal]...

Someone tried to do this to me, but the program seems not to work quite
right, or something.  It left me in CLOBI.  I killed the job and it went
away.  I don't know that this is the same thing.

Right now, when I try to send to ellen, my hactrn hangs at the place
where it is waiting to send.  I do not know what failure code the open
returned, because she had logged out by the time I tried to do it again,
and the register had been trashed.  But since it was looping, it must
have been %EFLDIR or %ENAFL.  Her old, bad hactrn was trying to read
from CLO: _CLI_;ELLEN HACTRN, and had it open in .BAI.  I guess this is
why my sends to her didn't work, even though the hactrn which was trying
to send to itself became HACTRO.  That doesn't seem quite right, though.
HCLOB was 0 and HHACK was -1.  I'm not sure why her hactrn was trying to
open itself.

In the process of trying to look back up the stack to see who had opened
this, I managed to trash the job: I did a CTYPE 41 X that I meant to do
in another job in the hactro, and of course trashed it.  It dumped
itself, but I doubt that's of any use.  It was trying to do a .IOT on it
at FDRCO4, having been called at FDRCO1+3.  I didn't get any farther.

Leigh.

From ELLEN at MIT-MC Tue Jun 30 00:00:00 1981
From: ELLEN at MIT-MC (ELLEN at MIT-MC)
Date: 30 Jun 1981 00:00
Subject: No subject
Message-ID:

Three times in succession tonight I have control-Z'd out of an EMACS and
had my DDT hang; typing control-G had no effect.  Typing echoed, but $$V
or other commands had no effect.  Upon hanging up (I am on a dialup
line) and reconnecting, leaving my tree detached, I find on looking at a
peek that my HACTRO is in state "*CLOBI" and it says "CLI" to the right
of the "Time PIs" column.  This has not happened to me from anyplace
except exiting EMACS; however, the EMACS job appears unaffected, as I
can snarf it from the HACTRO and continue using it.

From EAK at MIT-MC Mon Jun 29 00:00:00 1981
From: EAK at MIT-MC (Earl A. Killian)
Date: 29 June 1981, 00:00
Subject: enquiry
Message-ID:

Use the ALLOC command of the DUMP program.

From RICH at MIT-AI Mon Jun 29 00:00:00 1981
From: RICH at MIT-AI (Charles Rich)
Date: 29 June 1981, 00:00
Subject: enquiry
Message-ID:

I used to know, but now cannot remember and cannot find in the
documentation, the answer to this question: How do I allocate a
directory to a given device?

Thanks,
Chuck.

From MOON at MIT-MC Wed Jun 24 00:00:00 1981
From: MOON at MIT-MC (MOON at MIT-MC)
Date: 24 Jun 1981 00:00
Subject: MC lack of response on the Chaosnet
Message-ID:

    I have noticed this in the past, and it is happening again.  MC is
    at this moment down.  It does not respond to the :TIMES program,
    nor the CTIMES program, nor to attempts to connect.  It does,
    however, answer a status packet (generated with :MOON;CHARFC MC
    STATUS ).  I assume it is the 11 that handles the chaos net that is
    responsible for this.

Why do you assume that?  It's not true.

    If this is so, could it be fixed?  MC should respond to the status
    for MC, and the chaos front end should respond to STATUS only sent
    to it, not to MC also.  I realize this may be an efficiency hack,
    but it is doing the wrong thing.

The problem was undoubtedly that there were no free job slots, and
therefore server processes could not be created.  The STATUS response is
generated directly by the system rather than by a server process, for
exactly this reason: so that it will always work if the system is up,
even when it is overloaded.
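
The design Moon describes is easy to picture: a request that is answered
directly in the packet-input path needs no job slot, so it keeps working
even when the job table is full and nothing else gets a reply.  What
follows is only an illustrative sketch in C under assumed names
(packet_input, allocate_job_slot, and so on); it is not the actual ITS
or front-end 11 code.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical packet types, loosely modeled on the idea that a
       STATUS request is answered by the system itself.  None of these
       names come from the real ITS sources. */
    enum opcode { OP_RFC, OP_STATUS };

    struct packet { enum opcode op; };

    #define NJOBS 4                 /* tiny job table for the example */
    static bool job_slot_used[NJOBS] = { true, true, true, true };

    /* Returns a free slot index, or -1 when no server can be created. */
    static int allocate_job_slot(void)
    {
        for (int i = 0; i < NJOBS; i++)
            if (!job_slot_used[i]) { job_slot_used[i] = true; return i; }
        return -1;
    }

    static void send_status_reply(void)
    {
        /* Built directly from system data structures; needs no new job. */
        printf("STATUS reply sent\n");
    }

    /* Called from the network input path for every incoming packet. */
    static void packet_input(const struct packet *p)
    {
        if (p->op == OP_STATUS) {
            /* Answered inline by the system, so it works even when every
               job slot is taken and the machine looks dead from outside. */
            send_status_reply();
            return;
        }

        /* A connection request needs a server job, and simply gets no
           answer when no slot is free. */
        if (allocate_job_slot() < 0) {
            printf("RFC dropped: no free job slots\n");
            return;
        }
        printf("server job started for RFC\n");
    }

    int main(void)
    {
        struct packet rfc = { OP_RFC }, status = { OP_STATUS };
        packet_input(&rfc);         /* ignored: the system is "full" */
        packet_input(&status);      /* still answered */
        return 0;
    }

The asymmetry between the two branches is the point: STATUS consumes no
scarce resource, while anything that needs a server job competes for
slots and, as DCP saw, simply gets no answer when none are free.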

From DCP at MIT-MC Wed Jun 24 00:00:00 1981
From: DCP at MIT-MC (DCP at MIT-MC)
Date: 24 Jun 1981 00:00
Subject: No subject
Message-ID:

I guess that last bug report is slightly bogus.  MC was not down, but it
was dead to the outside world.

From DCP at MIT-AI Wed Jun 24 00:00:00 1981
From: DCP at MIT-AI (David C. Plummer)
Date: 24 June 1981, 00:00
Subject: No subject
Message-ID:

I have noticed this in the past, and it is happening again.  MC is at
this moment down.  It does not respond to the :TIMES program, nor the
CTIMES program, nor to attempts to connect.  It does, however, answer a
status packet (generated with :MOON;CHARFC MC STATUS ).  I assume it is
the 11 that handles the chaos net that is responsible for this.  If this
is so, could it be fixed?  MC should respond to the status for MC, and
the chaos front end should respond to STATUS only sent to it, not to MC
also.  I realize this may be an efficiency hack, but it is doing the
wrong thing.

From SK at MIT-MC Wed Jun 24 00:00:00 1981
From: SK at MIT-MC (Steven T. Kirsch)
Date: 24 June 1981, 00:00
Subject: random clobberage on MC?
Message-ID:

This could be all in my head, but I noticed two clobberages today that
struck me as a little strange:

The end of SK;SK OBABYL appears to have some lines missing from the last
message.

SK;PASCAL LISP seems to be missing (or has transposed and missing)
characters at the end.  You can read this file and see for yourself
(it's only about 10 lines long).  If comparison with the tape dump (Tape
325, file #95) shows a discrepancy, we are all in a lot of trouble
(since ITS says, and I agree, that the file hasn't been modified in any
normal way since it was dumped).

I could be imagining all this and could have accidentally clobbered
these files myself.  I thought I'd bring it up just to be on the safe
side.

From ALAN at MIT-MC Sat Jun 20 00:00:00 1981
From: ALAN at MIT-MC (Alan Bawden)
Date: 20 June 1981, 00:00
Subject: opening the CLI device
Message-ID:

When opening the CLI device fails because the core link already exists
(returning %ENAFL), the target job gets interrupted anyway.

From ALAN at MIT-MC Sat Jun 20 00:00:00 1981
From: ALAN at MIT-MC (ALAN at MIT-MC)
Date: 20 Jun 1981 00:00
Subject: recent MC crash & .getsys
Message-ID:

The recent MC crash where the system died trying to gun a garbage job
tree for me may be even more related to me than that.  Earlier that
evening I discovered that the .getsys uuo was acting strangely (never
mind WHY I discovered that).  It seems that it no longer works for some
of the random sixbit keywords you can give it (for example: DEVS and
NCALLS work, but GETS doesn't (!)).  In the cases where it doesn't work,
it has a tendency to trash the two accumulators involved (aobjn pointer
and sixbit key).

Now, I was just going to ignore all this, figuring that nobody uses the
thing anyway, but after talking with Moon it sounds like one of the
garbaged words in the job in question was trashed with one of MY aobjn
pointers from a failing .getsys!  Perhaps it is worth someone's time to
look into this.

From RLB at MIT-MC Thu Jun 18 00:00:00 1981
From: RLB at MIT-MC (RLB at MIT-MC)
Date: 18 Jun 1981 00:00
Subject: No subject
Message-ID:

Recent MC crash dumped as CRASH;CHACK1 14.  Halt at CHACK1+14.  Notes in
the crash log.  Acc U pointed to QCP HACTRP, which ALAN says he was
trying to gun when the crash happened.

From DCP at MIT-AI Sat Jun 13 00:00:00 1981
From: DCP at MIT-AI (David C. Plummer)
Date: 13 June 1981, 00:00
Subject: No subject
Message-ID:

MC has been having problems since about noon today (Saturday).  It
revived itself a few dozen times in the course of a couple of hours.  I
may have been the cause of this: I have been hacking with the CLO:
device this afternoon.  Does anybody know if there are any bugs with
this beast?  I will refrain from using it until I hear a go-ahead.  If
you want details on what I was doing with it, just ask.

From moon5 at MIT-AI Thu Jun 11 00:00:00 1981
From: moon5 at MIT-AI (David A. Moon)
Date: 11 June 1981, 00:00
Subject: Output reset doing weird things on MC tonight
Message-ID:

I broke something in the process of fixing another bug.  It's fixed now.

From SK at MIT-MC Thu Jun 11 00:00:00 1981
From: SK at MIT-MC (SK at MIT-MC)
Date: 11 Jun 1981 00:00
Subject: last message
Message-ID:

Re-attaching did work in one case.

From SK at MIT-MC Thu Jun 11 00:00:00 1981
From: SK at MIT-MC (SK at MIT-MC)
Date: 11 Jun 1981 00:00
Subject: did something get "fixed" recently?
Message-ID:

In logging in through the SAIL tip as usual, I discovered today that
^S'ing the "welcome" message would hang my terminal.  This happened
consistently.  Up till today I have had no problems ^S'ing the message,
and I do it almost every time I log in.  Also, typing Q in peek while it
attempted to type out caused me to hang.  Reowning the detached tree did
not make things work (still dead) in one case, and I forgot what
happened in the other case.  In any case, I think I had to close and
re-open the connection about 5 times today.  Let me know if you need
more info.

From GJC at MIT-MC Wed Jun 10 00:00:00 1981
From: GJC at MIT-MC (GJC at MIT-MC)
Date: 10 Jun 1981 00:00
Subject: No subject
Message-ID:

Some people had their MAIL file on the PACK NOT AVAILABLE disk.  If they
received any mail since then, what happens is that instead of COMSAT
handling the error specially, it takes it as FILE-NOT-FOUND and simply
creates a new mail file.  People lose their mail.
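
The failure GJC describes comes from collapsing two different errors
into one.  The sketch below, in C with hypothetical names (deliver,
open_mail_file, ERR_PACK_NOT_AVAILABLE) rather than COMSAT's actual
code, shows the distinction: "pack not available" means the old mail
file still exists but is unreachable, so the message should be held,
while "file not found" is the only case where creating a fresh mail file
is the right thing.

    #include <stdio.h>

    /* Hypothetical error codes and helpers -- not COMSAT's interface. */
    enum open_err { OPEN_OK, ERR_FILE_NOT_FOUND, ERR_PACK_NOT_AVAILABLE };

    /* Stand-in for trying to open USER's existing mail file. */
    static enum open_err open_mail_file(const char *user)
    {
        (void)user;
        return ERR_PACK_NOT_AVAILABLE;  /* simulate the pack being offline */
    }

    static void append_message(const char *user, const char *msg)
    { printf("append to %s's mail file: %s\n", user, msg); }

    static void create_new_mail_file(const char *user)
    { printf("create fresh mail file for %s\n", user); }

    static void requeue_for_later(const char *user, const char *msg)
    { printf("hold message for %s until the pack returns: %s\n", user, msg); }

    /* Deliver one message, distinguishing "no mail file yet" from
       "the disk holding the mail file is offline". */
    static void deliver(const char *user, const char *msg)
    {
        switch (open_mail_file(user)) {
        case OPEN_OK:
            append_message(user, msg);
            break;
        case ERR_FILE_NOT_FOUND:
            /* Genuinely no mail file: starting a fresh one is correct. */
            create_new_mail_file(user);
            append_message(user, msg);
            break;
        case ERR_PACK_NOT_AVAILABLE:
            /* The old file still exists, just unreachable.  Creating a
               new one here is what loses the old mail; hold the message
               instead. */
            requeue_for_later(user, msg);
            break;
        }
    }

    int main(void)
    {
        deliver("gjc", "test message");
        return 0;
    }

Treating the unreachable-pack case as retryable is what keeps the old
mail file from being shadowed by a new, empty one when the pack comes
back.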

From JPG at MIT-MC Wed Jun 10 00:00:00 1981
From: JPG at MIT-MC (JPG at MIT-MC)
Date: 10 Jun 1981 00:00
Subject: No subject
Message-ID:

As I have stated many times in the past, I certainly don't care whether
SECOND: is an RP04 or a T-300.

From JNC at MIT-XX Wed Jun 10 00:00:00 1981
From: JNC at MIT-XX (J. Noel Chiappa)
Date: 10 Jun 1981, 00:00
Subject: SECOND: device on MC:
Message-ID:

I've been saying for a while that the likelihood of a T-300 control path
failure is low; the observed failure rates at this time seem to obviate
theoretical discussions of the matter.  I only saw that path break once,
and then it was a matter of switches set wrong, not anything broken.  So
saying that the T-300's are less reliable on those grounds isn't
indicated.  I have said that I thought the reap path ought to have
SECOND: somewhere after THIRD:, but JPG didn't like the idea the last
time I mentioned it.
-------

From CBF at MIT-MC Wed Jun 10 00:00:00 1981
From: CBF at MIT-MC (CBF at MIT-MC)
Date: 10 Jun 1981 00:00
Subject: SECOND: device on MC:
Message-ID:

Hmm, now that I think about it, you're right; but I didn't even notice
it.  Clearly my analysis is wrong.

The reason I think it is better to have a Trident be SECOND: (i.e., be
the first to be reaped to) is that when a Trident goes down, you get the
choice of which Trident you want to leave down.  The way things work
now, when pack 13 goes down it is the most recently modified files that
must go away.  With the other scheme there would be one, possibly two,
whole packs of data more recent than the one lost.

From MOON at MIT-MC Tue Jun 9 00:00:00 1981
From: MOON at MIT-MC (MOON at MIT-MC)
Date: 09 Jun 1981 00:00
Subject: SECOND: device on MC:
Message-ID:

The two times before this it was a Trident that went down.

From CBF at MIT-MC Tue Jun 9 00:00:00 1981
From: CBF at MIT-MC (CBF at MIT-MC)
Date: 09 Jun 1981 00:00
Subject: SECOND: device on MC:
Message-ID:

I think by this time it ought to be clear to all that the likelihood of
one of the RP04's being down probably far exceeds the likelihood of one
of the Tridents being down.  Therefore I don't understand why an RP04 is
still first in the migration path.  One could perhaps argue that the
Tridents might be less reliable, since a failure of any item in the
chain of hardware connecting them can bring them both down (DL10, I/O
11, Century controller); but I might suggest that, considering the
percentage of storage those devices represent, the system will
effectively be useless to most users anyway if that chain fails.
Therefore I think the only relevant probability to consider is that of
one Trident drive vs. one RP04 going down, and I think it's obvious
which is the more reliable of the two.

From KRONJ at MIT-MC Tue Jun 9 00:00:00 1981
From: KRONJ at MIT-MC (David Eppstein)
Date: 9 June 1981, 00:00
Subject: Indirect cursor addressing
Message-ID:

I just checked again.  MC has the problem too.

From CSTACY at MIT-AI Sun Jun 7 00:00:00 1981
From: CSTACY at MIT-AI (Christopher C. Stacy)
Date: 7 June 1981, 00:00
Subject: No subject
Message-ID:

On AI, 230479. memory errors in 47.4 hours.  (!?!??)  Well, at least ONE
memory error, anyway.

Chris

From MARIA at MIT-AI Thu Jun 4 00:00:00 1981
From: MARIA at MIT-AI (Maria Simi)
Date: 4 June 1981, 00:00
Subject: No subject
Message-ID:

Please, somebody do something about the terminal in 939!!!!  The line is
broken.

Thanks,
maria.

From FONER at MIT-AI Mon Jun 1 00:00:00 1981
From: FONER at MIT-AI (Leonard N. Foner)
Date: 1 June 1981, 00:00
Subject: This is not a bug, but a question to persons unknown
Message-ID:

I have been occasionally curious as to how the fair share is determined.
Any theories I nurtured about its being a measure of idle CPU time or
anything of a similar sort were shattered tonight when I noticed upon
login that the fair share was 103%.  My question, of course, is just how
this number is determined.  Any help for this rather pointless question
would be well received.  Thanx.