Letters to the editor may be sent via email to cujed@rdpub.com, or via the postal service to Letters to the Editor, C/C++ Users Journal, 1601 W. 23rd St., Ste 200, Lawrence, KS 66046-2700.
Dear Editor:
Having been a loyal subscriber for the last 40 years or so, I have finally read an article that has prompted me to write to you. I am referring, as you might guess, to the article in your November 1995 issue entitled "C Database Programming with ODBC."
The author states, "Resist the temptation to rush ahead and start designing forms and reports ... Concentrate first and foremost on the design of the tables and the relationships between them." My stars! In my 60 or so years of programming, if I have learned one thing (and I may have only learned one thing), then that thing is: when designing a new system, leave the database design for last. By creating prototypes of screens and reports first, it becomes ever so much easier to accurately identify fields and the relationships between them. Of course, that could just be me.
Later in the same article, the author mentions finding problems and solutions with the Access 2.0 ODBC driver. He never mentions what exactly the problems were, nor what solutions he found. Land sakes! Why even mention it in the first place?
Overall, I did not find this article to be up to your normal journalistic standards. But, being the reasonable sort of person I am, I suppose that after 70 years of publishing the magazine, month after month after month, one inferior article was bound to make it past the editorial axe.
Please, don't let it happen again.
Sincerely,
L. J. Sellars
P.S. Is it true you cancel the subscriptions of readers who complain?
In my 90 or so years of preaching about programming, I have learned that some people work best bottom up and others work best top down. Match the person to the problem and you have a success. Get a mismatch and you either get a failed project or a better educated programmer. Sorry you didn't like the article, and we can't promise that you will never be disappointed again. But just this once, we won't cancel your subscription. pjp
Editor,
In the sidebar "Multithreading Do's and Don'ts" ("Multithreading in C++" by Jim Dugger, CUJ, November 1995), the author discusses using mutexes to protect the data members of classes whose objects may be used by more than one thread. In the example presented, the author uses a naming scheme for the mutexes that requires the class user to generate a unique name for each class object. I have a couple of suggestions for hiding this from the user. First, you can use an unnamed mutex. The unnamed mutex would have to be allocated in the class's constructor, and used later to block. (I don't have experience with OS/2 programming, so unnamed mutexes may not be available in that environment.) This would, however, require some redesign of the mutex class as presented in the article. The second solution is to derive the mutex name from the value of the object's this pointer. For example:
sprintf(mutexname,"%08lx intstack", (long)this);
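Here is a minimal sketch of both ideas, assuming the Win32 API; the IntStack class, its push member, and the guard handle are my own invention, shown only to illustrate where the mutex would live, not code from the article:

#include <windows.h>
#include <cstdio>

class IntStack
{
public:
    IntStack()
    {
        // Suggestion 1: an unnamed mutex -- the user never supplies a name.
        guard = CreateMutex(NULL, FALSE, NULL);

        // Suggestion 2 (alternative): derive a unique name from this:
        //   char mutexname[32];
        //   sprintf(mutexname, "%08lx intstack", (long)this);
        //   guard = CreateMutexA(NULL, FALSE, mutexname);
    }
    ~IntStack() { CloseHandle(guard); }

    void push(int value)
    {
        WaitForSingleObject(guard, INFINITE);  // block until we own the mutex
        // ... push value onto the protected data members here ...
        ReleaseMutex(guard);
    }

private:
    HANDLE guard;   // protects this object's data members
};

Either way, the class user never has to invent a unique name per object.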
Robert Mashlan
Bill:
Lots of kudoi to Bobby Schmidt for his wonderful debut. He is forgiven his diacritical exuberance (the spurious ^ in déjà vu). But two minor quibbles:
1. Is the learning curve correct? See my entry in The Computer Contradictionary (MIT Press), page 113. If we are plotting K, knowledge-gained (y-axis), against t, time-expended (x-axis), we seem to have a rapid initial "mastery" of C/C++ followed by a declining acquisition rate. My own curve is different. There is also the hysteresial extension showing how we forget or retain the stuff as time goes by.
2. I query the relevance of Turing's Halting Problem to determining or failing to determine whether an arbitrary C (or any lang.) program is "correct" (standards conforming). Any number of conforming programs can fail to halt (e.g., while(1) {}); in other words, we should not confuse "Turing computability" with "ANSI conformance." (See e.g., page 411, Programming Languages, Design & Implementation, 3rd Ed., T. W. Pratt & M. V. Zelkowitz, Prentice-Hall, 1995.)
Perhaps Bobby could discuss the main problem with all comp. lang. specs. (whether expressed in "English" or some metalanguage, but using a finite number of "rules"): proving consistency. That is, can we be certain that rule n does not contradict rule m? I recall seeing an example of a C-standard inconsistency in Peter van der Linden's Expert C Programming: Deep C Secrets, but forget the details. You can patch the specs but may introduce a fresh inconsistency (a familiar situation for all patchers, and echoes of K. Gödel). If (big IF) you have consistent specs, I think you can avoid the endless layers of turtles. Any given finite string purporting to be a program can be checked against the rules (manually, of course; we have plenty of time, though we may wish to automate the process later!) and the outcome is guaranteed (conforming or non-conforming) in finite time. Note that we can't use this strategy to establish (in finite time) whether the string represents a computable (halting) function.
PAX etc.
Stan Kelly-Bootle
http://www.crl.com/~skb
The errant circumflex was probably my fault. The Halting Problem in language standards is to get the committees to stop tweaking the language and ship the spec. pjp
Dear Sirs,
Do you provide a per year table of contents of The C/C++ Users Journal somewhere on the net?
We have been readers of your journal for some time now and find it very useful in our daily work. Searching for a specific article that was published some months ago is rather difficult, and a table of contents file would help very much.
Thank you in advance,
Uwe Fritsch.
ASIC Software Support Group
u.fritsch@fml.co.uk
Fujitsu Mikroelektronik GmbH
Dreieich-Buchschlag, Germany
Good morning,
I'm looking for a complete (i.e., all issues) CUJ index, on paper, disk, or even CD-ROM (in order of desirability). Is there such an animal, or one approaching it?
Regards,
Edmund H. Ramm
You can begin by looking at the newly established CUJ web page; its URL is http://www.cuj.com. It includes a file searchable by author and title, though it is perhaps not your idea of a full-fledged index. We also sell a CUJ index on floppy disk, updated through 1994; its price is $29.95 plus $3.50 shipping and handling. Finally, by the time you read this, we should be selling our first-ever CUJ CD-ROM, covering all issues from January 1990 to December 1995. To order, contact Miller Freeman at 913-841-1631, or fax 913-841-2624, or write to C/C++ Users Journal, 1601 W. 23rd St., Ste. 200, Lawrence, KS 66046; or order through our web page. mb
Editor,
I'm sure I won't be the only one to point this out, but your statement that HP has "released their implementation of STL into the public domain" is simply not true ("Standard C/C++," CUJ, December 1995). According to your description, HP retains copyright on the STL, while allowing the public to use it for free. This is not "in the public domain." "Public domain" means there is no copyright on something; therefore, "public domain" and "copyright" are mutually exclusive. Anything in the public domain is always free, as no one retains rights to it, but not all that is free is public domain.
I've seen this error made far too many times in the shareware/freeware market. I'm surprised to see you caught by it as well.
Andy Lester
alester@fsc.follett.com
I do know better; an unfortunate lapse. And sadly, nobody else pointed out the error. Thanks. pjp
Greetings,
Users of Microsoft C/C++ compilers should be made aware that the example given for _memmax has some serious defects. I first detected the problems about three and a half years ago, in the printed docs for MSC/C++ 7.0, and see that the code is unchanged in the Visual C++ 1.5 online help.
Background:
_memavail returns the total amount of available memory on the near heap, in bytes.
_memmax returns the size of the largest block of available memory on the near heap, in bytes.
Listing 1 shows the sample program for _memmax from Microsoft. A perspicacious programmer will spot a difficulty with this code immediately. Less apparent are the reasons for its anomalous behaviour. Specifically, in spite of the obvious coding error, the program appears to work when compiled and executed! Further, if the error is corrected, the program seems not to work in small or medium models! But with the code corrected, it does work in large and compact models!
I suppose I should have reported this problem and its resolution to Microsoft long ago, but it's almost a classic and I rather hate to see it disappear. It illustrates quite graphically why it pays to have a clear grasp of what transpires in generated code.
Problem 1: The obvious coding error.
p = _nmalloc(contig * sizeof(int));
Recall that _memmax returns the size of the largest chunk of available memory on the near heap. That's all there is, and it's expressed in the smallest unit of measure: bytes. But here the example multiplies the value returned by _memmax in contig by the size of an integer, which is two bytes on the 16-bit platforms this compiler targets. Therefore the _nmalloc requests twice as much memory as is available in one chunk!
Problem 2: But wait a minute! If that's the case, how come it works when compiled as is? It clearly says,
Largest block of available memory is 61612 bytes long
Maximum allocation succeeded
(Actual size may vary.)
Well, the _nmalloc was successful; it allocated the requested amount. But that amount isn't what was intended or expected. Consider some typical values if this is compiled and executed.
The _nmalloc function takes an argument of size_t type, which is defined as an unsigned integer. The maximum value an unsigned integer can hold (on IBM PCs under DOS) is 65,535. This is also the maximum size of a memory segment. By requesting _memmax * sizeof(int) we were asking for 61,612 * 2 = 123,224 bytes. Anything over 65,535 silently wraps (the high bits are truncated), leaving 123,224 - 65,536 = 57,688 bytes, the amount actually allocated by _nmalloc. Note that no run-time error messages or warnings were generated!
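For readers without a 16-bit compiler handy, here is a minimal sketch (mine, not part of the Microsoft example) that mimics the wraparound by using unsigned short, which is 16 bits on most platforms, in place of a 16-bit size_t:

#include <stdio.h>

int main(void)
{
    unsigned long contig = 61612UL;            /* value returned by _memmax       */
    unsigned long requested = contig * 2UL;    /* contig * sizeof(int) = 123,224  */
    unsigned short truncated;                  /* stands in for a 16-bit size_t   */

    truncated = (unsigned short)requested;     /* high bits silently discarded    */
    printf("requested %lu bytes, _nmalloc sees %u\n",
           requested, (unsigned)truncated);    /* 123224 becomes 57688            */
    return 0;
}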
To confirm this assessment, try adding this command before the free(p):
printf("_msize of p = %u\n", _msize(p) );
Convinced? Fine. Let's remove the sizeof(int) and all should be fine, right? Wrong!
Problem 3: After removing the sizeof and rebuilding in small or medium model, we get results such as this when we run it:
Largest block of available memory is 61512 bytes long
Error with malloc (should never occur)
Now what? Before, the allocation worked when it shouldn't have; now it doesn't when it should! Look carefully at the program, and note the order of events:
contig = _memmax();
printf( "Largest block of available memory is %u bytes long\n", contig );
if( contig ) {
    p = _nmalloc( contig );
See the problem? Some runtime library routines call malloc to acquire memory for their own use, and printf is one of these routines. The first time printf is invoked in a program, it acquires a buffer of roughly 500+ bytes from the default heap. In small and medium models this is the near heap! When we try the _nmalloc we no longer have contig bytes available, since the largest contiguous chunk has been reduced by the hidden malloc issued by printf. If we recompile using the large memory model, printf takes its memory from the far heap, and our program works fine. Note that this behavior vis-à-vis which heap is used is compiler-dependent. Turbo C does not appear to take memory from the near heap for printf in small and medium models.
To illustrate that printf only does the malloc the first time it's called in a program, try fixing the problem by putting another printf just ahead of the _memmax.
printf("\nThis should work fine now!\n"); contig = _memmax(); printf( "Largest block of available memory is %u bytes long\n", contig ); if( contig ) { p = _nmalloc( contig );
You should find that the original printf no longer reduces the memory available to _nmalloc. Only the first execution of printf does a malloc. My tests yielded the same results (actual values varied) using QuickC 2.50, MSC/C++ 7.0, and VC++ 1.5 (MSC++ 8.0c).
Wayne A. King
wayne.king@canrem.com
ba994@torfree.net
70022.2700@compuserve.com
Jerry Weinberg loves to tell programmers, "Never stop at one bug." I have learned never to try to use all available space on a heap. pjp
Sir:
I agree with Mr. Plauger's response to the letter in the December 1995 issue of The C/C++ Users Journal. I believe that the precious space in the magazine should be used for articles and code, not the writers' pictures.
If you are like me, then you are on lots of mailing lists for technical conferences. Often, the brochures for these conferences include pictures of the speakers. As it turns out, the speakers are often the same people who write columns in The C/C++ Users Journal. Over the years, I have seen photos of most of the regular contributors. With the recent staffing changes, I am going to scan those conference brochures carefully to see if I can get a glimpse of the new columnists.
This approach results in a winning situation for everyone. You (and I) get to see what the columnists look like, and we get a magazine that is packed with great articles and code.
Sincerely,
Mike Calwas
Anitasdad@aol.com
The only problem with that approach is that we speakers/writers sometimes cheat. I haven't updated my standard promo picture since I acquired bifocals and a second chin. Dan Saks does, however, still look like his stock photo. pjp
Dear pjp,
Nice article ("Standard C/C++: The Standard Template Library, CUJ December 1995).
One thing that puzzles me, and that you might comment on, is the idea of distributing the STL as headers full of templates and inline functions. In my experience of C++, working for several years at a company with millions of lines of C++ code in use internally, users simply won't tolerate huge code blowups resulting from multiple duplicate expansions of large inline functions.
Even if you disable inlining in the compiler, you will still have a copy per object file of each function used in that object.
So I'd be curious as to how the advocates of STL expect the performance issues to be resolved.
Glen McCluskey
glenm@glenmccl.com
The basic attitude is that compiler vendors will just have to optimize better, to meet the needs of customers who insist on using templates heavily. pjp
Greetings,
Marc Briand's comments at the beginning of the December Victor Volkman column ("New Releases") leave me wondering about the "early" CUG Library. You claim that you have "stopped distributing some of the really early volumes, mostly CP/M stuff, or worse. No one has howled." You may not be aware (although I don't know how!) that CP/M still has a very strong following. See, for example, comp.os.cpm on Usenet. Many of them might take offense at the rather negative tone of your comment. May I remind you that one of the earliest public domain C compilers ran in the CP/M environment? It was pretty crude, indeed, but I feel that the early history of microprocessor operating systems, and yes, even C, deserves to be treated with more respect than this comment implies.
Roger Hansom
rzh@dgsys.com
Sorry about that. I thought that OS was history. As penance, I will drink a case of Jolt Cola Decaf. mb
Dear Editor:
I enjoyed reading about ODBC. Your magazine is always a treasure trove of relevant programming information. I was surprised to see the sequence of function calls in the article "C Database Programming with ODBC" by Alex Ragen. I work as a COBOL programmer and use embedded SQL extensively.
On the mainframe we would typically do a SELECT INTO call if we "know" that there is just one result row out there. Otherwise, we would have to declare a cursor, open it, fetch the row(s), and then close the cursor.
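For comparison, here is a rough sketch (mine, not from Mr. Ragen's article) of what that cursor-style open/fetch/close sequence looks like through the raw ODBC 2.x calls; the data source name, user, password, table, and column are invented:

#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <stdio.h>

int main(void)
{
    HENV   henv;
    HDBC   hdbc;
    HSTMT  hstmt;
    char   name[64];
    SDWORD cb;

    SQLAllocEnv(&henv);
    SQLAllocConnect(henv, &hdbc);
    SQLConnect(hdbc, (UCHAR *)"payroll", SQL_NTS,   /* data source (invented) */
               (UCHAR *)"user", SQL_NTS,
               (UCHAR *)"pwd",  SQL_NTS);
    SQLAllocStmt(hdbc, &hstmt);

    /* the ODBC equivalent of "declare and open the cursor" */
    SQLExecDirect(hstmt, (UCHAR *)"SELECT name FROM employees", SQL_NTS);
    SQLBindCol(hstmt, 1, SQL_C_CHAR, name, sizeof(name), &cb);

    /* fetch the row(s) */
    while (SQLFetch(hstmt) == SQL_SUCCESS)
        printf("%s\n", name);

    /* "close the cursor" and clean up */
    SQLFreeStmt(hstmt, SQL_DROP);
    SQLDisconnect(hdbc);
    SQLFreeConnect(hdbc);
    SQLFreeEnv(henv);
    return 0;
}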
I have also been studying the CRecordset and CDatabase classes in MFC, which allow a programmer to use ClassWizard to set up the interface to ODBC. I wonder if Mr. Ragen has tried these classes as another method of accessing the data. Or maybe he chose C for its lower overhead...
One thing I'd like to see in your magazine is how to access mainframe data from the PC. I have heard of using ODBC to access DB/2 SQL. I also wonder whether it makes more sense to host the database on the mainframe, or to host the data on a Windows NT SQL server or a Sun workstation. But I suppose such questions have answers that depend on which marketing department is funding the benchmark tests...
Thanks again for a relevant, interesting magazine.
Chris Mason