1st April 2003

THE TRIBUNAL RESUMED AS FOLLOWS ON TUESDAY,
1ST APRIL, 2003, AT 11AM:

MS. O’BRIEN: May it please you, Sir. Just to let you know, Sir, and also for the benefit of all the persons that are assembled here. There will be a short delay in the commencement of the sittings this morning just to enable the finalisation of matters with regard to Mr. McMahon.

.

CHAIRMAN: I think on foot of that it will be anticipated that we will be going at twenty past or within a minute of that. Very good. Thank you.

.

THE TRIBUNAL THEN ADJOURNED AND RESUMED AS FOLLOWS:

.

CONTINUATION OF EXAMINATION OF SEAN McMAHON BY MR. HEALY:

.

CHAIRMAN: Thanks very much for coming back, Mr. McMahon. I hope your personal situation has eased a little bit since last week.

.

A. Thank you, Chairman. Thank you for your indulgence and can I thank everybody else present for the same.

.

MR. HEALY: Mr. McMahon, as you know when you were last giving evidence, the Tribunal had retrieved from a whole load of Departmental documents to which not much attention had previously been paid. A number of documents which had been brought to your attention before you actually came into the witness-box, but because you had just come into the witness-box you weren’t in a position to deal with them and in addition, it felt it necessary that Mr. O’Callaghan, who had generated the document, should come back and give evidence about the documents, which he did, I think, on the day when you — following, the next day after you last gave evidence, which was Tuesday of last week.

.

Now, since then, not only has Mr. O’Callaghan given his evidence, and given further quite lengthy evidence, Mr. McQuaid has also given evidence, and the Tribunal has looked back over a lot of documentation it has received with a view to trying to ascertain whether there was other documentation which should be retrieved and brought to the fore as it were. Now, that exercise has prompted the Tribunal to conduct an even wider review of documentation to be retrieved, documentation which has been retrieved, evidence which has been given and so forth, and before asking you any questions, what I would propose to do is try to outline where, or the status, if you like, of documentation and other information available to the Tribunal in relation to a number of aspects of the process.

.

This is a sort of, it’s not an Opening Statement, but it’s to some extent a statement of the status of certain aspects of the review being conducted by the Tribunal as of this moment.

.

In going through this, if there is anything I am saying that you feel the Tribunal has got completely wrong, the time to stop me is when I am saying it. If you think there are things you want to add or things you want to pull me up on, I think it’s better if you wait until I am finished for the sake of good order. You don’t actually have to sit there if you want, but it might be as useful a place as any to sit.

.

I’ll be referring to a number of documents. Most of those documents you’ll have seen, but not all of them. I am sure any new documents I am referring to, you will be able to come to grips with fairly quickly. In any event you will have the lunchtime adjournment to have a look at them in more detail.

.

Now, as I said a moment ago, in the Tribunal’s Opening Statement at the commencement of these sittings, the information which up to that date had been assembled by the Tribunal to be led in public was set out in chronological form. That information, in the main, had come from two sources: It was extracted or it had been extracted from documentation made available on the one hand by the Department, and on the other hand, documents made available by participants in the evaluation process. By that I mean applicants.

.

In addition of course, information was available to the Tribunal as a result of interviews with officials in the Department, and the Tribunal has had the opportunity of interviewing almost every relevant civil servant involved in the process. And no request for assistance from any civil servant has been rejected.

.

The Tribunal has also had some information from participants and from individuals associated with participants in the applicant consortia.

.

Now, the Tribunal has obtained a huge volume of documentation and a huge amount of information which, as I have already said, not all of which has been led in evidence. In the course of these sittings, since they commenced last year, more information came to hand in the form of evidence given by civil servants involved in the process. From time to time, as you will know and as other people here will know, this prompted the Tribunal to re-examine the documentation assembled for the purpose of these sittings and also to retrieve other documentation from the huge volume of material made available by the Department and by participants, where documentation which had initially seemed to be of limited relevance came more clearly into focus as a result either of new information or new documentation or new evidence.

.

In endeavouring to form a comprehensive view of the evaluation process, the Tribunal has been, to some extent at least, hampered by the fact that Mr. Andersen has not made himself available to give evidence or to assist the Tribunal. He did provide the Tribunal with certain information in documentary form, he attended some meetings with members of the Tribunal team, but it now seems clear that he is unlikely to make himself available to the Tribunal in the future. The evidence to date has been of real value to the Tribunal in assisting the Tribunal to obtain and ultimately, hopefully, to present a full view of the process. In particular, the additional information and the opportunities that the Tribunal has had to focus or to refocus on certain aspects of the process has enabled it to form a better impression or at least to assemble information which hopefully will enable the formation of a clearer impression of the development of the process, in particular in the period immediately following the involvement of AMI; in other words, in the period in which Mr. Andersen was most intensively involved in the process.

.

Now, there are a number of aspects of the process upon which I think it would be worthwhile to refocus in light of the evidence to date and in light of some of the evidence given by Mr. O’Callaghan on his return to the witness-stand, and also by Mr. McQuaid. And these aspects are as follows:

.

Firstly, as you will be aware, Mr. Andersen proposed and the Evaluation Team adopted a multi-stage evaluation process involving a quantitative —

.

MR. NESBITT: I am sorry to interrupt My Friend, but I think it’s appropriate that I say something at this point in time.

.

Obviously the Tribunal counsel must decide how best to run the inquiry, and I don’t wish to unnecessarily reach into that, but I think it’s very unfair to have asked this witness to comment as this matter goes along. If this is a supplement to the Opening Statement or some presentation to assist the witness understand where the Tribunal counsel are coming from to assist him giving his evidence, so be it. But I think it inappropriate, with respect, to ask this witness on the cold, so to speak, to have to comment as it goes along. Undoubtedly when he has heard what’s said and had a chance to think about it, he may have some assistance to give the Tribunal. I think it’s unfair that he should be asked to comment as it progresses.

.

The other thing I have to respectfully submit is unfair and taking into account the interesting analysis of public inquiries and sittings, that’s apparent in the consultation paper that was launched last night is, at some point in time there has to be a fairness of procedure at some level, and to ask a witness such as the current witness to be on the hoof, so to speak, having to deal with what may be a complicated and complex series of matters seems to go beyond any analysis of what would be fair for him, and I’d ask that he be relieved from the obligation to comment until we have heard the opening submission, and if he needs time to think about anything, he can ask for it then, and it shouldn’t proceed other than that. That’s all I’d ask for at this time, Mr. Chairman.

.

CHAIRMAN: Mr. Nesbitt, my understanding was that there was nothing in the slightest way oppressive or precipitous in the course proposed by Mr. Healy, in that all he had said by way of inviting Mr. McMahon at his option to stay in the witness-box, was if something appeared flagrantly wrong, that Mr. McMahon would have the opportunity to simply say no, there and then. Insofar as it may well be that Mr. McMahon’s own preference would be to hear and digest the whole matter, I certainly am perfectly content if you are happy with that, Mr. McMahon, because in fact I think it will take another half hour until Mr. Healy’s remarks are concluded.

.

It’s probably a little unreal, unless you had something you enormously wanted to intervene on, to ask you to remain in the witness-box. So if you are just as happy, I am perfectly content that you might maybe retire back to the witness portion of the hall.

.

A. I am content, Chairman, to sit anywhere. I don’t propose to intervene —

.

CHAIRMAN: Lest it add to Mr. Nesbitt’s angst, perhaps you might go back to the witness benches for the moment. Thank you.

.

THE WITNESS THEN WITHDREW FROM THE WITNESS-BOX

.

MR. HEALY: I think I was saying that there were a number of aspects of the process upon which I thought it would be worthwhile to refocus in light of the evidence to date, and in light of some of the evidence given by Mr. O’Callaghan on his return to the witness-stand, and also Mr. McQuaid.

.

And firstly I think I mentioned that Mr. Andersen proposed, and the Evaluation Team, in this case, adopted a multi-stage evaluation process involving a quantitative and a qualitative evaluation. The method proposed and formally adopted does not appear to have been followed.

.

Secondly, the evaluation process involved or envisaged the application of weightings to a number of criteria listed in the RFP and prioritised in the RFP in accordance with a Government decision. This is the paragraph 19 listing of criteria. It appears to be impossible to see for certain what weightings were applied, or indeed, agreed, and it is far from clear that the agreed weightings were ultimately applied to the relevant parts of the process.

.

Thirdly, the finalisation of the report and the presentation of the results involved a conversion of what appears to have been intended as a graded result expressed in letters to one which was expressed in numbers or in numerical terms. There have been suggestions in the information available to the Tribunal that this numerical conversion was either inappropriate or that it may have even distorted the result. While I am on the question of the report, it would appear that the result appears to have been announced and certainly appears to have been brought to the attention of the Minister for onward transmission to the subcommittee and the Government prior to a final report actually having been physically made available.

.

All of this should be viewed in circumstances in which, from information made available by civil servants and from documentation made available by the Department, it would appear that the Minister intervened in what was supposed to be a sealed process on a number of occasions. It also appears from information made available to the Tribunal from other sources, that is to say from participants, that the Minister had intervened or had access to the process.

.

As we know from documentation made available by the participants, the Tribunal is also aware that parallel to the progress of the evaluation, there was a course of events involving the membership of the Esat Digifone consortium and the financing of the consortium, and in particular, the finances of one member of the consortium, which were not brought to the attention of the Evaluation Team.

.

Lastly, and this may be only an incidental point, but it could assume some significance, the role of Andersen Management itself, and in particular, the role of Mr. Andersen is far from clear. It is not clear whether Mr. Andersen was a full member of the Project Team, whether his colleagues were full members of the Project Team, or whether he or they were merely independent and outside advisers to the team.

.

If I could just deal with that last point firstly. In Appendix 2 to the final report, which contains a description of the methodology applied, Mr. Andersen, at page 2, in Appendix 2, describes the Project Team as comprising members from the three telecoms divisions of the Department of Transport, Energy and Communications, the Department of Finance, and affiliated consultants from Andersen Management International. At the same time, as we know from a number of Dail statements and other statements, there appears to have been a suggestion that Mr. Andersen was more in the nature of an outside or independent adviser to the team. That is something I wish to take up to some degree with Mr. McMahon, and something that will be taken up with other witnesses as well.

.

I now want to turn to one of the two major matters that I think recent evidence has prompted a review of, and has prompted the Tribunal to refocus on.

.

The first of these is the evaluation model.

.

As I said, AMI international, Michael Andersen for short, tendered to become the consultant advising or the consultant member of the Evaluation Group on the basis of a proposed quantitative/qualitative evaluation model. A model along these lines was presented to the Project Team and ultimately formally adopted by the team. It entailed what was essentially a three stage process: A quantitative evaluation followed by a qualitative evaluation, and followed by an interplay between the quantitative and the qualitative evaluations which involved a revisiting of the quantitative result to arrive at an ultimate ranking.

.

A detailed statement of the model was formally tabled for discussion at the 7th meeting of the Project Group on the 18th May, 1995. While it may have been somewhat unsatisfactory that the model does not appear to have been presented in advance of the meeting, so as to afford the members of the team an opportunity fully to digest it, it appears, nevertheless, that there was a significant amount of discussion concerning its contents.

.

In the minutes of the 7th meeting of the Project Group on the 18th May, it was noted that the qualitative evaluation was to provide a common sense check on the quantitative model. It was also noted that this particular part of the model, meaning, presumably, the qualitative part, would need to be clarified before further evaluation could begin. This was clearly felt to be of some importance, as it was stated in the minutes that if a later challenge were to reveal that any two persons among the evaluators proceeded with a different understanding of the process, then the entire evaluation process could be put in question. It was presumably this suggestion that the qualitative part of the model would need to be looked at that prompted the presentation of a second version of the evaluation model at the 8th meeting of the Project Group on the 9th June, 1995. This version included an additional section dealing with the interplay between the quantitative and qualitative evaluations. And we have already been through this, I think, with Ms. Nic Lochlainn, in which the two versions of the evaluation model were put in evidence, and the second version was seen to contain an additional section which set out how the two evaluations were to be combined.

.

Most of the evaluation model, both as initially presented and as ultimately adopted, dwelt on the quantitative analysis. I don’t, at this stage, know how many pages were devoted to the quantitative as opposed to the qualitative, but I think somewhere in the course of a Project Group meeting, it was stated that 80% of the model dealt with the quantitative analysis. I am now informed that in fact 17 of the 21 pages in the draft of the evaluation model as ultimately adopted, dwelt on the quantitative analysis. And three on the qualitative. And one on the interplay between the quantitative and the qualitative.

.

The evaluation model as ultimately adopted also entailed the application of weightings to the various evaluation criteria. I propose to deal with the question of weightings separately.

.

As I said, the model as adopted does not appear to have been followed. While there were a number of small and perhaps insignificant deviations, the major deviation from the model appears to consist in the abandonment of the results of the quantitative evaluation. The failure to conduct a qualitative analysis for the purpose of reviewing or reforming the quantitative evaluation was also something that appears to have been abandoned. While the precise nature of the interplay between the quantitative and the qualitative evaluation is not absolutely clear from the model, it seems that ultimately the evaluation process entirely jettisoned the quantitative report. As we know, the quantitative report did not, as had been promised, appear as a memorandum or an annex in the final report, and the results of the quantitative evaluation were not included either in the body of the final report or in any annex attached to the final report.

.

From the evaluation model, it seems reasonable to conclude that the evaluations were intended to be complementary; in crude terms one making up for the shortcomings of the other. The quantitative evaluation highlighted the features of an analysis based exclusively on concrete or measurable data, and it hardly needed to be stated that a qualitative analysis, on the other hand, could and should take account of a far wider range of evaluation indicators, including many which could not be measured in concrete terms or could not be fully or adequately measured in such terms.

.

In any case, the process seems to have started out in accordance with what was envisaged in the evaluation model, in that an initial quantitative report was presented to the Project Group on the 4th September, 1995, at the Group’s 9th meeting. While in presenting the report, Mr. Andersen acknowledged certain shortcomings in the results generated, it could not be suggested, having regard to the nature of a quantitative evaluation, that the shortcomings were anything other than characteristic of that type of evaluation.

.

While in the final report it is stated that the quantitative evaluation withered away, the fact remains that no suggestion to that effect was made at this meeting of the 4th September, at which the first quantitative evaluation report was presented, and in fact, on the contrary, while it was noted that the consensus was that the quantitative analysis was not sufficient on its own, the minutes also recorded that the quantitative analysis would be returned to after both the presentations and the qualitative assessment, that in other words, the process would be conducted in accordance with the agreed evaluation model. At that meeting the future framework of the project was mapped out on the basis that the qualitative analysis would proceed, and the date of the 3rd October was fixed for the delivery of what was described as a draft qualitative report, and while this may ultimately be clarified, it seems that it does not appear that at this time a first draft of the final report, the final overall report was intended, but rather a qualitative report which would complement the already generated quantitative report, and that from a consideration of those two reports, the process would proceed to the adoption of a draft or the generation of a draft of the final report.

.

At that stage the quantitative report, I think which has already been referred to in evidence, ranked the top three applicants as follows: A3, A6, A5, i.e. Persona, Eurofone and Digifone, in that order.

.

At the 11th meeting of the Project Group, on the 14th September, 1995, it was minuted that the presentations, which you will recall, took place in the days just before the 14th, served to consolidate the initial views on the applications arising from the quantitative assessment. Now, again, no reference was made at that meeting to the abandonment of or the withering away of the quantitative analysis, nor was there any suggestion that a new approach would be adopted to the evaluation of the applications.

.

I should say that while Mr. Andersen may have noted certain shortcomings in the initial quantitative evaluation at the time that it was first presented to the Project Group, in fact that evaluation must be regarded as having been, to some extent, incomplete. And this is because prior to presenting the evaluation report, and again in accordance with the evaluation model, Mr. Andersen had written to the applicants with what I think he called, or what may have been called “applicant-specific queries,” the responses to which were to be integrated into the quantitative evaluation. It would appear that the answers or the responses to these applicant-specific queries had not been received by the time the first or initial quantitative report had been prepared. Whether this information, that is to say the information contained in the responses to the applicant-specific questions, were collated and integrated into another draft or version of the quantitative report is not clear from the documentation made available by the Department.

.

A second quantitative report was generated, but this does not appear to have been formally presented to a full meeting of the Project Group. Although from the date of the report, that is to say the 20th September, 1995, one assumes that it was available for the meeting held on that date in Copenhagen between, as I recall, Mr. Brennan, Mr. Towey, Ms. Nic Lochlainn and Mr. Riordan, the Department side, and Mr. Andersen and some of his colleagues on the other side. This second version of the quantitative report ranked the applicants as follows: A3, A6, A5, and A1, in that order. In other words, Persona, Eurofone, Digifone, and Irish Mobicall. Judging from a memorandum from Mr. Andersen to Mr. Brennan and to Mr. Towey, of the 21st September, there must have been some discussion of the quantitative report at their meeting in Copenhagen, in that following the meeting, one of the questions posed in the memorandum from Mr. Andersen was as follows:

.

“How do we integrate the quantitative evaluation in the report?”

.

Mr. Andersen in his memorandum indicated that he preferred to leave the question unanswered until after, as he put it, “We have the final result.”

.

We know that this second draft of the quantitative evaluation report does not appear to have been formally tabled for discussion, as I said, at a project meeting. However, the formal minute of the 12th meeting of the Project Group on the 9th October, notes that the meeting agreed that a second draft of the full evaluation report should incorporate an elaboration of the reasons as to why the quantitative analysis could not be presented as an output of the evaluation process. I think from this, it’s reasonable to assume that there must have been some discussion of the second version of the quantitative report at that meeting, but no attention appears to have been given to the detail of the report or to the results.

.

We do know from a lengthy handwritten verbatim record of this meeting kept by Ms. Margaret O’Keeffe, that there was some discussion on the quantitative evaluation at the meeting, at least some discussion in principle. From this note it would appear that at this stage there was considerable confusion concerning the course of the evaluation, the place of the quantitative evaluation in that overall evaluation, and the extent to which the results of the quantitative evaluation should form any part of the process or should even be alluded to in the report. The report which had then been produced, and which in fact purported not merely to be a qualitative report, but in fact a full draft evaluation report, makes a passing reference to the issue in the introductory paragraphs, which contains the following statement describing the overall evaluation: — I am quoting from the first draft of the report.

.

“The evaluation comprises both a quantitative and a qualitative evaluation, and it was decided prior to the closing date that the qualitative evaluation should be the nucleus of the evaluation.”

.

Now, up to this time there had been no meeting of the Project Group at which, to judge from the documentation available to the Tribunal, it was recorded that the qualitative evaluation should be the nucleus of the evaluation. At this meeting of the 9th, it appears to have been suggested that the quantitative should not be performed separately, but taken into account in the main report. There seems to have been a debate of some sorts on the issue, and Ms. O’Keeffe records contributions, which I think can be divided into one side or the other side of that debate.

.

On one side of the debate it would appear to have been suggested by one contributor that the quantitative analysis should have been included, and I am quoting, “Up front.” Another contributor suggested that the team, again I am quoting, “Would like to stick to the evaluation model.” Another that the “Evaluation model, 80% deals with the quantitative evaluation.”

.

On the other side of the debate it appears to have been suggested that the quantitative analysis was “Too simplistic” to give results. Another contributor, that it was “Unfair and impossible.” Another contributor, that the “Figures were impossible to compare.” Another contributor, “That the results were not reliable.” Another contributor: that 50% of the weighting was lost due to scoring that could not be used and that the quantitative analysis had been undermined, that because of uncertainty, it could not be trusted, that it had become “less and less,” to quote another contributor. Specifically it was suggested that the approach taken needed to be explained in the part of the evaluation report dealing with the methodology, and that the wording of this would be important.

.

The October 3rd version of the evaluation report, which the Project Team then had, did not, as it then stood, contain an annex on the methodology actually followed. This annex, I think called Annex 2, did not appear until the version of the 18th October, and it also appeared in the version of the 25th October. I have described it as Annex 2, it’s perhaps more accurately described as Appendix 2, and it’s entitled “The methodology applied.” And it purports to describe the methodology and evaluation techniques actually applied in order to arrive at the results of the evaluation.

.

In Appendix 2, page 5, it was stated that the evaluators decided that all the results of the evaluation should be presented in one comprehensive report, such that the results of the evaluation, both quantitative and qualitative techniques, should be presented in an integrated fashion. It was also stated that no changes were made in the evaluation model, that it was decided that the qualitative evaluation should be the decisive and prioritised part of the evaluation. The Tribunal could find no document in which any such decision was articulated, nor indeed does there appear to have been any discussion which might have supported a decision to proceed in this way inasmuch as the discussion which I have just alluded to occurred on the 9th October after an initial draft report had already been presented, and therefore any such discussion could not have been the basis for the decision referred to in the 18th October or the 25th October versions of the final evaluation report. Because the appendix in question purports to suggest that the decision informed every version of the evaluation report, including that completed on the 3rd October, and in fact presented for the first time at that meeting on the 9th.

.

In Appendix 2 of the final report, it is stated that it became clear during the evaluation that a number of indicators in the quantitative evaluation were either impossible or difficult to score, and as examples a number were given, including international roaming, blocking and drop-out rates, tariffs and the licence fee payment. The appendix goes on to state that having realised this, the evaluators decided that the foundation for a separate quantitative evaluation had withered away. The evaluation report relies on page 1 and pages 10 and 11 of the evaluation model in support of this basis for abandoning the quantitative evaluation.

.

Now, I think it’s worth, at this stage, looking at those pages. They are contained in Book 54, Divider 2. What Appendix 2 to the final report says — Appendix 2 contains a description of the methodology applied, and says at page 5 as follows: “Furthermore, it became clear during the evaluation that a number of indicators in the quantitative evaluation were either impossible/difficult to score, e.g. the following:” And there are four listed.

.

Firstly it states: “It was impossible to score the international roaming indicator due to lack of adequate information on the number of roaming agreements.”

.

Secondly, “Having requested more comparable information concerning blocking and drop-out rates, it turned out by means of a supplementary analysis (c.f. Appendix 5) that the information provided was incomparable by nature and too heavily influenced by arbitrary assumption, imponderables and optimistic versus pessimistic approaches.

.

Thirdly, “Concerning tariffs, it turned out that two applicants, namely A4 and A6, have provided wrong information, and furthermore, that A1, A6 and partly A5 have been compared with the rest on an incomparable basis, as A2, A3, and A4 all suggest metering and billing principles which do indeed increase the actual bill, the customers have to pay for the specified amount of traffic. For this reason, it would be unfair to the applicants to award marks to only one single indicator, the OECD-like basket without taking all the other tariff aspects into consideration.

.

Fourthly, “The licence fee payment did not discriminate among the applicants at all.”

.

“Having realised this, the evaluator decided that the foundation for a separate quantitative evaluation had withered away. As the memorandum on the evaluation had not been changed, it was checked (page 1, indents 4 and 5) and (pages 10 and 11, indents 5, 6, 7 and 8) that this was also consistent with the memorandum.”

.

And I think what that’s suggests is this: If you check the evaluation model you will find that there is a basis for the approach adopted by Mr. Andersen and for the decision that the foundation for a separate quantitative evaluation had withered away.

.

Now, page 1 of the evaluation model, indents 4 and 5 is probably inaccurate. I suspect that what was referred to there was page 2, indents 4 and 5. There are no indented paragraphs on page 1.

.

I should also say in passing that the suggestion that the four nominated indicators which were giving trouble were examples of a larger number of indicators may not be correct, in that reference will be made later to another copy of this report where one of the evaluators substituted “i.e.” for “e.g.”, but in any case, that’s something we can take up with the witnesses.

.

Indent 4 on page 2 of the evaluation model is part of a section of the evaluation model headed, “Procedure for the Quantitative Evaluation Process.” And it describes the steps for the quantitative evaluation of the eligible applications.

.

It says:

  1. “A set of dimensions and indicators has been selected for the quantitative evaluation process.

  2. “All the selected indicators will be assigned a weighting factor.

  3. “The score for each indicator will be a value between 5 and 1 – (both included) – with 5 being the best score. All scores should be rounded to the nearest integer.

  4. “Uncertainties regarding the scoring of points may be dealt with in the qualitative evaluation.

  5. “The result of the quantitative evaluation should be considered with due respect to the significance of differences in the total sum of the points assigned.

  6. “A memorandum comprising the salient issues of the quantitative evaluation will be annexed to the evaluation report.”

.

Now, it’s indents 4 and 5 that the evaluation report suggests Mr. Andersen and the evaluators relied on in abandoning the quantitative evaluation. It’s hard to see how either subparagraphs 4 and 5 of page 2 of the evaluation report contain any support for this, since they clearly seem to indicate that uncertainties were likely to arise in the course of a quantitative evaluation, but that the qualitative evaluation was there to pick up the slack and to address them, as it were.

.

It goes on to refer to pages 10 and 11, indents 5, 6, 7 and 8. Now, I think that I should say, the numbering here is a reference to the evaluation model as incorporated in the final evaluation report. I just realise that in the book to which I have referred, Book 54, what you have is the evaluation model as originally presented, and the spacing and the font size is completely different, so the page numbering is not an accurate guide. And I think that the correct page numbering in Book 54 is page 18. You’ll find the same thing, I am sure, if you look at the version of the evaluation model contained in Appendix 3 to the final evaluation report.

.

This section of the evaluation model refers to the procedure for the qualitative evaluation process, and says that, “Despite the ‘hard’ data of the quantitative evaluation, it is necessary to include the broader holistic view of the qualitative analysis. Other aspects, such as risk and the effect on the Irish economy, may also be included in the qualitative evaluation, which allow for a critical discussion of the realism behind the figures from the quantitative analysis.

.

“The following describes some of the major steps in the qualitative evaluation process:

  1. “The eligible applications are read and analysed by the evaluators.

  2. “The eligible applications are evaluated by way of discussions and analyses.

  3. “When deemed adequate and necessary, in-depth supplementary analyses will be carried out.

  4. “Initially the marks will be given dimension-by-dimension. Afterwards, marks will be given aspect-by-aspect (subtotals) and finally to the entire applications (grand total).

  5. “When the dimensions are being assessed, the evaluators should, as far as possible, use the same indicators as used during the quantitative evaluation. Supplementary indicators may be defined, however, if the existing indicators are not sufficiently representative for the dimensions to be evaluated.”

    This is one of the indents relied on by Mr. Andersen and the evaluators as justifying the abandonment of the quantitative evaluation and the results of that evaluation, and again I don’t see how that proposition contains any support for what’s being contended for by Mr. Andersen and the evaluators. But maybe some witness can enlighten us.

  6. “During the qualitative evaluation, the evaluators should take the results from the quantitative evaluation into account, as a starting point, and make the operationalisations of the dimensions in order to make fair comparisons between the applications.

  7. “If major uncertainties arise (e.g. in accordance with step 4 of the quantitative evaluation or due to incomparable information) supplementary analyses might be carried out by Andersen Management International A/S in order to solve the matter.

  8. “The results of the qualitative evaluation will be contained in the main body of the draft evaluation report. The results of the supplementary analyses will be annexed to the draft report.

  9. “The draft report is to be presented and discussed among the ‘Essential persons’ (identified by the Department). On this basis, Andersen Management will be asked to propose a final report.”

.

Now, I think, as I was saying, the referenced pages of the evaluation model do not seem to me, in any case, to support what is contended for by Mr. Andersen. I think that if you look at the reasons advanced for abandoning the quantitative evaluation, it is hard to see why any reliance could be put on the reference to the licence fee, since this became a completely neutral element in the process. The basis upon which international roaming, blocking and drop-out rates and tariffs were regarded as being difficult to score is hard to understand, in view of the fact that the evaluation model identified the inability of a quantitative evaluation technique to cope with every aspect of an evaluation, and specifically identified as deficiencies or limitations of a quantitative technique those very things relied on as or appearing to have been relied on as warranting the abandonment of that technique.

.

It seems that the evaluation model, which was originally adopted in quite formal terms, was replaced with a new evaluation technique, the precise nature of which is difficult to describe or difficult to understand, at least there is no adequate description of it in the report, and no, certainly no workable description of it in the report and no adequate or workable description of it in any of the minutes of the meetings of the Project Group.

.

Mr. Andersen, in Appendix 2, to which I have just referred, described it as involving a holistic evaluation in which quantifiable and non-quantifiable indicators associated with the selection criteria were combined. Where quantifiable information was included as part of this evaluation technique, it was extracted from what Mr. Andersen calls the “hard data” submitted for the purposes of the original quantitative evaluation. It would now appear that the initial proposal to conduct a quantitative analysis, and subsequently a qualitative analysis, and to use one as a common sense check on the other, as it was put, was abandoned.

.

It appears that not only was the quantitative evaluation as originally conceived, abandoned, but the qualitative evaluation as originally conceived was also abandoned, and in place of what was originally envisaged, the Project Group substituted a form of combined evaluation, which as I have said, the precise nature of which is extremely difficult to understand, either, as I have said, from the underlying papers or from the report itself.

.

While, as I have said, the report states that the quantitative evaluation withered away, the fact remains that no suggestion to that effect was made at the meeting at which the first quantitative evaluation report was presented, and in fact, on the contrary, while it was noted that the consensus was that the quantitative analysis was not sufficient on its own, it was also recorded that it would be returned to after both the presentations and the qualitative analysis.

.

I referred to some of the issues raised at the meeting of the Project Group on the 9th October, and I have mentioned Ms. O’Keeffe’s note in which some of these issues appear to have been debated. It is not clear to what extent the issues raised at that meeting were ever satisfactorily resolved, at least it’s not clear from minutes of subsequent meetings, and it’s not clear from the report or from any other documentation.

.

From Mr. McMahon’s handwritten note on his copy of the minute of the meeting of the 9th October, which we know he made on the 1st November, it would appear that he and Mr. O’Callaghan expected, following the meeting of the 9th October, that the qualitative assessment would continue from that time. This appears to have been based on a perception shared, it would appear, as between at least Mr. McMahon and Mr. O’Callaghan, that while the process up to that date had identified A3 and A5 as front runners, it had not distinguished or separated their applications to the point where a ranking could confidently be proposed.

.

It has to be borne in mind that it would appear that not every member of the Project Team had an opportunity in advance of the meeting of the 9th October to digest the contents of the report.

.

The next version of the final evaluation report, dated 18th October, appears to have been made available on or about the 20th October. Mr. O’Callaghan has given evidence concerning his marginal notes in the version of the report that was made available to him. And it would be recalled that his attention was drawn to a marginal note he made at page 12 of Appendix 3 to the report. That refers to a passage in the evaluation model which is as follows:

.

“The results of both the quantitative and the qualitative evaluation will be contained in the draft report with appendices to be prepared by the Andersen team.”

.

With reference to the quantitative evaluation, Mr. O’Callaghan asks, “Is it here?” From other marginal notes he made on that version of the report, some of which I’ll be returning to later, it’s clear that Mr. O’Callaghan, and this is also clear from his evidence, on a first review of the report, made a number of highly perceptive comments. And it seems fair to suggest that up to this time, he was under the impression that the report would include both the quantitative and qualitative evaluation, and that he, at least, and as far as we can judge I think from the evidence also, Mr. McMahon, neither he nor Mr. McMahon were part of any so-called decision, as suggested by Mr. Andersen, that the quantitative evaluation had been abandoned or that it had withered away.

.

Mr. O’Callaghan’s other marginal notes evinced dissatisfaction on his part with a number of aspects of the report. This is also reflected in a note entitled, “Views of the Regulatory Division”, dated 23rd October, 1995. It would appear from, I think, evidence given by Mr. McMahon the last time he was in the witness-stand, that this note must have been prepared in advance of the meeting of the Project Group on the 23rd October. As Mr. McMahon did not have a copy of the report, then presumably the note must have been based on Mr. O’Callaghan’s initial view of the contents of the report as relayed by Mr. O’Callaghan to Mr. McMahon. Although, presumably, based on a review of the contents of the report, it was not focused either exclusively or at all on the report as opposed to the result of the process. It records an agreement, again presumably based on whatever review had taken place between them of the report, that A3 and A5 were front runners; that they were very close, but that by reference to the report, neither Mr. O’Callaghan or Mr. McMahon were able to say which was, in fact, ahead, and it suggested that the qualitative assessment of the top two applicants be revisited. And this view was expressed as one based not only on the logic of the report, but on the basis of the reading of the applications and the hearing of the presentations by the applicants.

.

Mr. McMahon’s notes of the meeting of the 23rd October suggest that even at this stage, just two days before the decision was announced, the members of the Evaluation Team were still at odds about a number of aspects of the evaluation methodology, and in particular, about the quantitative versus the qualitative evaluation and about the way in which the qualitative evaluation, if such it was, was ultimately conducted in order to arrive at the result contained in or purported to be contained in the report dated 18th October.

.

Now, there are a number of aspects of that meeting to which I intend returning, but before doing so I want to mention the other major issue which I think has come into clearer focus in the past few weeks, namely the issue of the weightings. The foundation document of this whole evaluation process was the request for proposals or the RFP, as it has been called. This was effectively the tender document. The criteria about which the competition was to be judged were referred to in a number of different sections of this document, and while not forgetting that paragraphs 3 and 10 are of considerable importance, and that the whole of paragraph 19 ought to be considered and not just the eight listed criteria, much of the work of Mr. Andersen was focused on those criteria and the order in which they were prioritised. As the paragraph itself was referred to in the Government decision underpinning the contest, it’s not surprising that these have come to be called the evaluation criteria.

.

The eight listed items in paragraph 19 describe in narrative form the headline criteria by which the various applications were to be judged subject to the financial and the technical capability of the applicants in proposing an evaluation model. Mr. Andersen devised an approach whereby each of these evaluation criteria was recharacterised under a single headline title, or alternatively broken down in a number of constituent headline items. These were described as the Dimensions. Every dimension was linked to one of the evaluation criteria. In this way the criterion described in paragraph 19 as the “approach to tariffing proposed by the applicant” which must be competitive was described simply as the dimension “tariffs” in the evaluation model. The evaluation criterion, credibility of business plan and applicant’s approach to market development was broken down into three itemised dimensions; namely, market development, experience of the applicant, and financial key figures.

.

The evaluation model proposed by Mr. Andersen set out in considerable detail the way in which each of the dimensions was to be evaluated. In some cases, the dimensions were themselves recharacterised or redefined in terms of a list of indicators.

.

On the table on the overhead projector, you have on the left-hand column, a list of the evaluation criteria in the terms in which they were set out in the RFP. In the next column, you have what Mr. Andersen calls the dimensions linked to each evaluation criteria. And as I have said, in the case of the first evaluation criterion there are three dimensions; market development, experience of the applicant, and financial key figures. These again, are recharacterised in the next column as the indicators: forecasted demand, number of network occurrences in the mobile field and solvency and IRR. So the whole notion of the credibility of an applicant’s business plan and his approach to market development was divided into three main areas: Market development, experience of the applicant, and financial key figures.

.

Market development was envisaged to be measured by seeking figures on an applicant’s forecasted demand. The experience of the applicant would be measured by the number of network occurrences in the mobile field on the part of an applicant consortium, and the financial key figures would be measured by quantifying or by processing information on solvency and an applicant’s projected internal rate of return.

.

The model proposed at the meeting of the 18th May, 1995, contained not just a list of the evaluation criteria of the dimensions linked to each evaluation criterion and of the indicators for each of the dimensions, but also set out the weightings proposed for each of the criteria. In addition, it set out the weights to be applied to the indicators or the dimensions into which these listed criteria from paragraph 19 were broken down.

.

This is what you now have on the overhead projector is page 16 of the first version of the evaluation model proposed at the meeting of the 18th May, 1995. Listed on the left-hand side are each of the indicators into which the dimensions identified by Mr. Andersen were broken down. In all, there were 13 indicators. And on the right-hand, far right-hand column, you have the weights which he envisaged would be applied to the scores achieved by the applicants with respect to each of these indicators. This set of scores respected the ranking of the paragraph 19 criteria as approved by the Government in the decision underpinning the process.

.

But what I want to draw attention to at this stage was that this was merely Mr. Andersen’s proposal at this point. This was not the weightings that were actually agreed or formally adopted by the Evaluation Team. If you look at the third-last or the eleventh item, which is the “Up front licence payment,” you’ll see that that is given a weighting of 10. We know that ultimately that weighting was not adopted. The same applies for a number of other indicators.

.

We have to assume from our documentation, that there was some discussion of these weightings, and that as a result of those discussions, the Evaluation Team arrived at an agreement concerning a different set of weightings, and that different set of weightings was ultimately formally adopted as the weightings to be applied in the conduct of the evaluation process.

.

Now, the agreed weightings were: 30 — and I am going to call them out in the order — in an order which corresponds to the order of the criteria set out in the paragraph 19 of the RFP. And they were as follows: 30, 20, 15, 14, 7, 6, 5 and 3.

.

I am trying to find Ms. Nic Lochlainn’s note recording the agreement of the Project Team to these meetings. It’s contained at Leaf 1(A) of Book 54. And in a note to file Ms. Nic Lochlainn records that it was agreed at the meeting of the 18th May, 1995, and obviously we can read in that the weightings should be as I have indicated: 30, 20, 15, 14, 7, 6, 5 and 3.

.

What is not evident from Ms. Nic Lochlainn’s note is how the total weightings for each criterion were broken down with respect to the dimensions linked to each criterion.

.

The next meeting of the Project Group took place on the 9th June, 1995. This was the 8th meeting of the Project Group. The second draft of the evaluation model appears to have been formally tabled for that meeting. This draft was dated 8th June. It seems to have taken on board a number of suggestions made at the review of the first draft of the meeting of the 8th May. And I have already alluded to this. The minutes of the meeting of the 9th June record that the second draft was approved as presented with the correction of one typographical error. The minutes went on to state, however, that further comments, if any, were to be forwarded to Ms. Maev Nic Lochlainn within a few days of that meeting.

.

The Tribunal now understands from another document which the Tribunal has only recently retrieved from documents made available to it by the Department that in a memorandum of the 21st July, addressed to Mr. Brennan, Ms. Nic Lochlainn noted that the 8th meeting of the Project Group approved the second draft of the evaluation model subject to further comments in writing to herself. She records in her memorandum that no written submission had been received, and that it could therefore be taken that the evaluation model had been approved. She also indicated that a single copy of the evaluation model with Fintan Towey’s name was being held securely in the division, and she sought Mr. Brennan’s approval of this as the basis on which to proceed with the evaluation of tenders in the GSM process.

.

Her memorandum sets out the paragraph 19 criteria with the evaluation weightings that I have already mentioned — Book 52, Leaf Number 26. And it has the weightings I have already described from 30 down to 3. And these total — the total is 100, or 1, if you like, on that unitary model.

.

Now, I’ll come back to that document in a moment.

.

If we could look at the second draft of the evaluation model for a moment, or we may even have a table. If you look at the second draft of the evaluation model presented and apparently formally approved, the total weights for the paragraph 19 evaluation criteria, while approximating to, are not precisely the same as those set out by Ms. Nic Lochlainn.

.

Now, I have arranged for a table to be prepared which shows these weights reconfigured in a form which corresponds with the RFP to make it easier to understand the connection between the weighting and the criteria to which it applies.

.

In the left-hand column you see the criteria as set out in the RFP. Then you have the dimensions proposed by Mr. Andersen and adopted by the Project Team. Then you have the indicators again proposed by Mr. Andersen and adopted by the Project Team. Then you have the weights. Next to that I have total — I have arranged for the weights to be totalled.

.

You’ll see that the weights are 32.5 for the first criterion, and not 30. 20 for the next criterion. 15 for the next. 14. 7.5 for the next criterion. Then 6, 5 and 3. In total, these weightings add up to 103, although in the evaluation model itself, which we had on the overhead projector a moment ago, they are stated to add up to 100. At the same time, I think it should be borne in mind it’s only fair to point out, that apparently it was never intended that these weightings would operate otherwise than as a set of weightings totalling 100, and the calculation of scores in any particular case, when the weighting was applied, was to be renormalised on the basis that the total weightings should add up to no more than 100.

.

If you leave aside the discrepancy caused by the introduction of total weights amounting in the aggregate to 103, it’s interesting to note that the weights referable to the dimensions are significantly different between the evaluation model of the 8th June, 1995, and the formally approved evaluation model and the earlier proposed draft.

.

If you could go back for a moment to the table which shows — well, perhaps if you leave that on, yes, it may be just as easy to do it that way. What you have on the overhead projector at the moment is a table setting out the evaluation model as proposed on the 17th May. And on the right-hand column you have the weightings proposed. You also have the breakdown of the weightings to be applied to each of the indicators. Now, we know that the weightings were changed. But the weighting for the top or first criterion was not changed. That was left at 30. The breakdown proposed at that meeting was 10 for forecasted demand or market development; 10 for number of network occurrences or experience of the applicant; and 10 in total for solvency and IRR, or financial key figures.

.

To recap: We know that from Ms. Nic Lochlainn’s note to file, the total weightings proposed by Mr. Andersen were not accepted, and the Project Group arrived at their own decision as to what weightings should be adopted. And I have already drawn attention, for example, to the fact that the weighting for the licence payment was increased from 10 to 14.

.

If you go to the meeting of the 8th June, where the evaluation model proposed by Mr. Andersen was formally adopted, it contains, as I have already mentioned, a set of weights which adds up to 103. For a moment, if we ignore that fact and look at the first item on the list of weightings, the one applicable to credibility of business plan and applicant’s approach to market development, you’ll see that the breakdown envisages that market development would receive a weighting of 7.5; that experience of the applicant would receive a weighting of 10; and that financial key figures would receive a weighting of 15. Again, if we ignore the fact that the weights add up to 103, you will see that within that criterion, the dimensions are related to one another on the basis that market development is given a weighting of 7.5 against 15 for financial key figures. So it has half the weighting of financial key figures.

.

We know that following the intervention of the EU, it became necessary to rebalance the weightings, and this was done by a formal procedure before the end of July, whereby the weighting of 14 applied to the licence payment was reduced to 11, and the three outstanding points were allied to the weighting for tariffs, bringing that up to 18.

.

Now, I should also say that in the course of evidence last Friday, I think it was suggested that the adoption of these weightings envisaged their application both to the quantitative and the qualitative evaluation. However, if we could go back to Ms. Nic Lochlainn’s note for a moment, it would appear from her note to Mr. Brennan of the 21st July, that that can not be correct, and her memorandum makes it clear that the weightings were approved for the quantitative evaluation only. There is no subsequent formal adoption, as far as the Tribunal can see, of the same, or indeed as will appear, of any weightings for the qualitative evaluation, although we know from the report that weightings were applied to the results of the qualitative evaluation.

.

I haven’t a lot to go, Sir, but I do have a number of documents to refer to and I think it might be easier if I could —

.

CHAIRMAN: Well, to some extent I’d envisaged, in ease of Mr. McMahon, that perhaps the majority of the opening remarks —

.

MR. HEALY: That is the majority, yes.

.

CHAIRMAN: And there are some latter portions of it that don’t particularly directly relate to Mr. McMahon’s consideration?

.

MR. HEALY: There is some other documentation, a further book is being prepared, and it might be easier, I think, before passing on to that documentation, if that — or before passing on to that part of what I have to say, if those documents were available to people, otherwise it’s going to slow down what I have to say, I think, significantly, if I am the only person with the relevant documents.

.

CHAIRMAN: All right. Well, then we’ll take it up at two o’clock then, Mr. Healy.

.

MR. McGONIGAL: It might be of some assistance, Chairman, I notice that this speech appears to be written. It might be of some assistance if copies of it could be given to the teams, and we could read the balance of it over lunch together with the documentation.

.

CHAIRMAN: Well, it has been prepared very much at the last minute, Mr. McGonigal, in the course of last night and early this morning. And there have been revisions of it. I am certainly anxious that you not be taken short, but I mean, there are portions that have been abandoned. Insofar as there may be some useful draft that may substantially put you on notice of what remains, I’d certainly encourage the notion that that may be made available. Two o’clock.

.

THE TRIBUNAL THEN ADJOURNED FOR LUNCH

.

THE TRIBUNAL RESUMED AS FOLLOWS AFTER LUNCH:

.

CHAIRMAN: Mr. McGonigal, as to the written statement, I know one of the legal team has worked through lunch to proof read and correct the draft. It’s not finished yet, but it will be made available —

.

MR. McGONIGAL: I didn’t expect them to do that. I am grateful to them for doing that. I thought that if there was something else written it could have been given to us, and because I’ll get the transcript later, in the heel of the hunt. While I thank them for doing that —

.

CHAIRMAN: Thanks, Mr. Healy.

.

MR. HEALY: I think that before lunch I stated that from the documents to which the Tribunal has now drawn attention, it would appear that weightings had been agreed, but that these weightings had been agreed for the quantitative evaluation. This is clear from the note to file of Ms. Nic Lochlainn, and also clear from the additional document to which I drew attention this morning, the note of the 21st July.

.

As I indicated, the breakdown of the weightings; in other words, the sub-weightings which were to be applied to the various dimensions also appears to have been agreed, and what was agreed, as far as can be seen from the documentation, is that the breakdown of the weightings should be as per the breakdown contained in the second version of the evaluation model subject to two qualifications: The qualification is, as we know, there had to be a rebalancing of the weightings for the purposes of complying with EU requirements. And secondly, there was — there had to be a renormalisation due to the fact that a total of 103 had been used.

.

What the Tribunal has been trying to do in looking at both the approach to the quantitative and qualitative evaluations, and also in relation to its refocusing on the weightings, is to see whether there was any consistency in the approach of the Evaluation Team to the question of weightings over the period in which the evaluation was conducted, and as I say in relation to the whole question of the distinction between quantitative and qualitative evaluations, there appears to have been considerable confusion right up until the 23rd October of 1995.

.

Now, to return to the question of the weightings. I have already mentioned this morning that the initial draft report of the quantitative evaluation was presented to the 9th meeting of the Project Group on the 4th September. As I said also, I think this would appear to be the only time that any version of the quantitative evaluation was formally presented to a meeting. In any case, this version of the initial draft quantitative report was based on weightings approved at the meeting of the 9th June, 1995. It appears that not only were the weightings applied as per those formally adopted at that meeting, but the breakdown of the weightings applied to each of the dimensions was in accordance with those apparently formally adopted at that meeting. Therefore, as I said, with reference to the breakdown of the weightings this morning, the weighting applied to key financial figures was 15%, while that applied to market development was 7.5%. In other words, half of the weighting applied to key financial figures.

.

Of course, as we know from other evidence, the weightings for tariffs and licence payment respectively, did not take account of the change in weightings following the EU intervention.

.

Now, if you look at book, I think it’s 54, Leaf 6, you will see the relevant portions or some of the relevant portions of the draft quantitative report. And the second page in that leaf sets out the total weighted score. You see that, as I mentioned this morning, A5 were in first position, I think A6 in second position, A5 in third position, and A1 in fourth position. But if you look at the weights, you’ll see that the weightings for tariffs and licence payment did not take account of the change in the weightings following the rebalancing, which as I said, was necessitated by the EU intervention.

.

Now, if we could turn for a moment to the minute of the 9th meeting of the Project Group on the 4th September, which I think is in Book 42. Divider 95, Book 42. If you look at the front page of the minute, you’ll see the heading, “Quantitative Evaluation,” and a reference underneath that to the presentation of the initial draft report. Then you see that the quantitative evaluation had highlighted some incomparable elements, and they are identified as “i.e.”, and they are followed by four elements. And you will recall that this morning I alluded to a note made by Mr. Riordan to the effect that in Mr. Andersen’s evaluation methodology contained in Appendix 2 of the final report, the reference to these as examples of incomparable elements should have read to perhaps these as the only incomparable elements, but in any case, we can take that up with the witnesses.

.

If you go to the next page of this minute, you’ll see that it says: “The meeting discussed each dimension and the scoring document in turn. The consensus was that the quantitative analysis was not sufficient on its own and that it would be returned to after both the presentations and the qualitative assessment.”

.

We have already alluded to this in the context of the uncertainty or inconsistency concerning the approach to quantitative and qualitative assessments. It goes on: “It was also agreed that the figures used by the applicant could not be taken at face value and needed to be scrutinised. Responsibility for such scrutiny has not yet been decided.

.

“The need to reflect a change in the weighting for the licence fee was highlighted. AMI committed to correct the model in this respect.”

.

What is significant is that while this issue was clearly taken up at the meeting and drawn to the attention of the meeting and drawn to the attention of Mr. Andersen, from whom a commitment to correct it was obtained, no issue appears to have been taken concerning the other weightings, and specifically no issue was taken concerning the grouping of the weightings on the dimensions, and in particular, in the first dimension: Market development, experience of the applicant and key financial figures.

.

By the time that the first full Evaluation Report, as it ultimately became, of the 3rd October was produced, it would appear, as I have said, that Mr. Andersen had abandoned the quantitative evaluation as initially conceived, and indeed, also appears to have abandoned the qualitative evaluation as originally conceived. Whether the weightings or whether any weightings, I suppose, were to be applied to the type of evaluation which Mr. Andersen had now embarked on, and if so, what weightings were to be applied and how they were to be applied to this newly devised form of evaluation is far from clear, and seems to have formed considerable difficulty for the evaluators between the reception of this report on the 3rd October and the date of the announcement of the result to the Minister.

.

Although the concluding pages of the final version of the Evaluation Report attach considerable significance to the weighting of the evaluation criteria, and indeed considerable significance appears to have been attached to them in subsequent public statements by the Minister, there is no reference to those weightings in the substantive portion of either the final version, the version of the 18th October, or the version of the 3rd October of the Evaluation Report. The substantive portion of the report is contained in a chapter of the report which is variously numbered, but which is always headed, “The Comparative Evaluation of the Applications.” It has the same title in each of the versions of the report, whether the 3rd, 18th, or the 25th. And it’s not possible, from a reading of that section of any of the reports, to apply the weightings in such a way as to arrive confidently, or indeed at all, at the conclusion contained in the report, and there is no information in the report from which to apply the weightings, or any weightings, if weightings were in fact agreed.

.

Now, come back for a moment to the first version of the report again, dated 3rd October. This version of the report set out in Appendix 3 the evaluation model as formally adopted by the Project Group at the meeting of the 9th June. This version of the report is contained in various places, but you can get it most easily, I think, in Book 46. It’s also available in Book 42 at Leaf 117. It’s in Book 46 at Leaf 34. In fact, the annexes are at Leaf 35. And page 11 of the evaluation model as set out in that annex to the first version of the report, sets out the various indicators linked to each dimension, and also sets out the weights attached to each indicator. The weightings in this version of the report, which you can just about make out on the overhead projector, and in this annex, are set out in precisely the same way as they are set out in the evaluation model formally adopted by the Project Group in which, as you can see, a weighting of 7.5 is applied to market development, adding the first two weightings for market penetration score 1, and market penetration score 2; and 15 to financial key figures, which are the two bottom items, solvency and IRR; and 10 to the experience of the applicant, which is identified by reference to the indicator: Number of network occurrences in the mobile field.

.

Now, I will come back to this document in a moment. It would appear that around the time that this report, this initial report was made available to the Evaluation Team, or at least to some members of the Evaluation Team, perhaps not all of them, queries were raised concerning the weightings. Some of these queries were brought to the attention of Mr. Andersen by Ms. Maev Nic Lochlainn by a fax message, to which reference has already been made in the course of these proceedings, and which was mentioned in the evidence given by Ms. Nic Lochlainn. This is contained — this fax message is contained in Book 54, Leaf 9. This is a fax message to Mr. Andersen from Ms. Nic Lochlainn in which, as you will recall, she referred to a number of items.

.

The first one was: “Please see qualitative scoring for technical aspect as recorded by John McQuaid which follows (Annex A). This does not correspond with the technical aspect subtotal detail on page 44 of the draft evaluation report — I believe it is a typo, marketing aspect scores having been duplicated by mistake.”

.

It seems clear, if you look at the first draft of the evaluation report, and I don’t want to put it on the overhead projector, that what Ms. Nic Lochlainn is saying is absolutely correct, and we have had this in the evidence of a number of other witnesses. But it’s clear that Ms. Nic Lochlainn, presumably, had the report at that stage and was able to reach this conclusion, and indeed it seems the conclusion may have been brought to her attention by Mr. McQuaid, because she encloses Mr. McQuaid’s own note based on an extract from the evaluation model, from the final and formally adopted version of the evaluation model, in which he carried out his workings to generate the subtotal on the technical aspects, and from that it was clear that the subtotal contained in Table 15 of the first version of the final report was wrong. And I think wrong for the reasons stated by Ms. Nic Lochlainn, that there was a typographical error.

.

Ms. Nic Lochlainn then goes on to say: “Please see attached list of criteria and weighting as agreed by the Project Group prior to 4 August 1995.” And she listed this as Annex B. And this is a listing of the weightings to be applied to the headline criteria, though not, it has to be said, with the breakdown applicable to each of the indicators.

.

Then she says: “Could you please clarify how these relate to the weights as detailed on page 17/21 of the document of the 8th June, 1995, which were to be the weights underlying the quantitative evaluation?

.

“(Page 17, is also attached at Annex C.)” If we can just go to it for a moment. It shows the weightings and the breakdown in the form which we have already discussed. Again, what this makes clear is that this weighting was available to the Evaluation Team and at least the impression some members of the Evaluation Team had was that this was, in fact, the formally approved breakdown of the weightings.

.

Now, Ms. Nic Lochlainn goes on to say, I’ll have to repeat her whole sentence again. “Could you please clarify how these relate to the weights as detailed on page 17/21 of the document of the 8th June which were to be the weights underlying the quantitative evaluation? (Annex C) and to page 7 of the draft quantitative report (See section on weights at Annex D) e.g. OECD basket is weighted at 15.96%, does this correspond to 18% for competitive tariffing as agreed by the group?”

.

Alluding to another aspect of the evaluation, the aspect to which I think I drew some attention already, namely the fact that the weightings contained on this table which, while to some extent consistent with those agreed in the evaluation report, do appear to show some divergence from those set out in the evaluation model. It would appear that the reason for this may be completely and utterly technical or mathematical, in that if you look at Item 9, licence fee payment, the weighting has been reduced to 11.7%, to approximate in some degree to what was agreed following the rebalancing, or what was agreed as a rebalancing of the weightings following EU intervention. But while an amount of weighting has been removed from licence fee payment, there does not appear to have been a rebalancing of the OECD basket to bring that up to something close to the 18% that had been agreed following the EU intervention.

.

My attention is also drawn to the fact that the weighting which had been applied to international roaming, which was 6 marks, appears for the purposes of this quantitative weighting, to have been redistributed, I suppose in some renormalised fashion, amongst the other indicators, or the other criteria. In any case, what these queries raised by Ms. Nic Lochlainn on this occasion do indicate is that significant concerns were being aired concerning what the weightings were and how the appropriate weightings were to be applied to the evaluation criteria.

.

Now, there appears to be no formal written response to Ms. Nic Lochlainn’s queries. However, there may have been some response to these queries, some unrecorded response. It seems that the matter may have been alluded to at the meeting of the Project Group held on the 9th October of 1995. This is in Book 42, Leaf 121.

.

If you refer to Margaret O’Keeffe’s handwritten note of that meeting, and I’ll put it on the overhead projector in a moment, you’ll see that there is a reference on the fourth page to Annex D. As there is no Annex D, or Appendix D to the Evaluation Report under consideration, it seems reasonable to conclude that what is being referred to is Annex D, as it is so described, in Ms. Nic Lochlainn’s fax to Mr. Andersen. Under the heading “Quantitative”, on the fourth page, you’ll see, “Ranking is probably different now (Annex D).” Then there is a reference in brackets to Annex D. Perhaps suggesting that if the proper weighting had been applied to tariffs in the quantitative report of the 20th September, 1995, the ranking would be different. If you examine the quantitative report, you’ll see that it’s unlikely that it would have changed the ranking of the first ranked applicant, though it might change the ranking of A6.

.

What seems clear is that it must have been plain for all of the evaluators to see at that meeting, at least if they had access to Ms. Nic Lochlainn’s note, that the ranking, or the weighting rather, the weighting of the dimensions linked to the criterion credibility of business plan and applicant’s approach to market development, was on the basis that — was on the basis that the relative weighting as between market development and financial key figures was 2:1 in favour of financial key figures.

.

Now, if I could track for a moment to the next version of the draft evaluation report. You’ll see that in the draft of the report, dated 18th October, 1995, Appendix 3 appears to have been changed. This is Book 46, Divider 46. And the appendices are Divider 47. If you go to Appendix 3, page 10, you’ll see that what appears to have been intended, or at least on the face of it, what must have been intended was that this appendix would contain the evaluation model as formally adopted at the meeting of the 9th June. As I have already said, this was certainly what was contained in the October 3rd version of the draft evaluation report. However, by this version on the 18th October, Appendix 3 had been changed. Again, it purported to contain the draft model, but page 10 contains a set of indicators, together with the linked weights which diverges from the equivalent page on the evaluation model. And you will recall that on the equivalent page of the evaluation model, the first two items are scored 3.75, 3.75. If we go back to Appendix 3, you’ll see they have been changed to 5 and 5.

.

The weightings, therefore, are no longer as per those apparently formally adopted, but have now been changed to accord with those actually applied in the concluding sections of the Evaluation Reports, whether the first draft, the second draft or the final draft.

.

Now, to return to the meeting of the 9th October again. It would seem that at the meeting there were further queries raised concerning weightings, not just raised by Ms. Nic Lochlainn or others. Specifically, it seems, to have been noted by one contributor that Table 17 in the draft evaluation report differed from the agreed weightings. This is a reference to the third page of Ms. O’Keeffe’s note, which is at Leaf 121 of Book 42. And as you can see on the overhead projector, she records a contributor or a contribution to the effect that Table 17 differed from the agreed weighting. There will be more references to that table in a moment.

.

Table 17 is contained at page 49. I want to be careful, I may have given you the wrong number. Table 17 is contained at page 45 of the report, which is itself contained at Leaf 34 of Book 46, and also in Leaf 117 of Book 42. That table shows the weightings and the breakdown of the distribution of the weightings amongst the three dimensions of the first criterion as 10, 10, 10. And it seems reasonable to assume that the reference to Table 17 being different from the agreed weightings must be a reference to a difference between the weightings as set out on Table 17 and the weightings as set out on the draft evaluation model. Well, in fact, I shouldn’t say draft, on the agreed evaluation model.

.

Whether, in changing the weightings in the appendix to the 18th October version of the evaluation report so as to make the weightings on the evaluation model accord with those applied in Table 17 was prompted by a recognition that there had been a change in the agreed weightings, is not clear. One thing is certain, there is no formal documentation confirming that there was any change in the agreed evaluation model weightings over and above those set out in the document of the 8th June of 1995.

.

At that meeting, there were a number of other contributions concerning weightings. And other contributors wished to know whether the indicators had been weighted, and if so, whether that weighting should not have been set out in the report. And indeed, it would appear that Mr. McQuaid, who has already given evidence, suggested that without visibility of the weightings, the report looked unreasonable. You’ll find that comment on page 6 of Ms. O’Keeffe’s report or minute, handwritten minute of the meeting. You see under Mr. McQuaid’s name, there is an entry which reads, “Without visibility of weighting it looks unreasonable.”

.

There are other contributions. If you look at page 5, you’ll see that under the heading “Page 20,” there is an entry which reads, “Weighting should be given” and then a query, “Are indicators weighted?”

.

Before passing on to the next version of the report, I think this might be an appropriate time to refer to the structure of the report. As I have already mentioned, in every version of the report the main comparative work is set out in a section entitled “The Comparative Evaluation of the Applications.” This section of each of the reports generates a number of tables in which the grades achieved by each of the applicants with respect to each of the indicators linked to each of the dimensions are recorded. And as we know from the evidence of Mr. McQuaid, apparently subtotalled. There is no information in this section of any of the reports from which to ascertain the weightings, if any, applied to the indicators, nor is there any information to enable anyone reading the report to understand how the weightings, if any, were applied to the various grades and how those grades with the weightings applied were subsequently aggregated so that to generate the subtotals for the various dimensions.

.

Just for a moment, look at this portion of the first version of the report. I think it’s Chapter 4. Sorry, I am wrong, Chapter 3, or Section 3. It starts at page 10 of the report headed, “The Comparative Evaluation of the Applications.”

.

And the first item is “Marketing Aspects”. Now, Mr. Andersen, in accordance with his model, or at least in accordance with the qualitative portion of his model, grouped marketing aspects by linking, or grouped the criteria related to marketing aspects by linking market development, coverage, tariffs and international roaming. And you’ll see that on that Table 1, each of the applications is given a grade in respect of each of those dimensions. Then at the bottom there is a total or a subtotal for the overall score achieved by each applicant for the entire aspect, if you like. So that A1 gets a C for market development, a B for coverage, a C for tariffs, an A for international roaming, and that is subtotalled to give an overall B.

.

Now, if you go on to page 14, you see at Table 2, which contains a heading “Dimension Marketing,” and then a list of ten indicators linked to that dimension, and each of these is then scored with respect to the applications of each of the applicants. So that A1 gets a C for the first indicator, a B for the second, an E for the third and so on. And then there is a subtotal at the bottom of a C, and there is similar scores and similar subtotals for the other five applicants.

.

Now, if we go back for a moment to Table 1 again, you’ll see that the market development score for A1 is a C, and that itself is a subtotal of all of the scores achieved by C for the dimension marketing, by totalling the scores received by applicant A1 for each of the indicators linked to that dimension.

.

This is the main substantive work which resulted in the result of the evaluation process.

.

Section 4 of this version of the report then deals with what are called “Sensitivities, Risks and Credibility Factors.” That’s contained at page 40. And then if you turn to page 43, you’ll see the summary and concluding remarks and the recommendation. And this states that the aim of the report was to nominate and rank the three best applications on the basis of the evaluation. And it’s stated that “this had been conducted by way of four different methods,” of which the first was “the result on the basis of the evaluation of the marketing, technical, management and financial aspects (qualitative award of marks).”

.

If you go on to the next page, you’ll see the first table in this portion of the report, Table 16, and this sets out, or if you like, brings together or summarises the results to which I have already drawn attention contained in Table 1, which deals with marketing aspects. That’s Table 1 of the comparative evaluation of the applications. Underneath that you have grouped the subtotals of the — you have grouped together, rather, the scores of the applicants with respect to the technical aspects, and you have the subtotal; the same for the financial aspects, and likewise for the management aspects. This, it would appear, was intended to be the culmination and perhaps the only culmination to Mr. Andersen’s work, because if you look at Book 54, Leaf 2, you’ll see a reference to this approach in the procedure for the qualitative evaluation process.

.

Mr. Andersen, at page 18. What you just saw was the table similar to the table ultimately used by Mr. Andersen. At Item 4, Mr. Andersen says: “Initially the marks will be given dimension-by-dimension. Afterwards, marks will be given aspect-by-aspect and finally the entire application (grand total).”

.

Then if we turn over to page 20 of the evaluation model, you’ll see that what Mr. Andersen pre-figured there was the bringing together of the total score for market development, the total score for coverage, the total score for tariffs, the total score for international roaming, and the addition of those scores to generate a subtotal for marketing aspect and go on and do the same for technical aspects, financial aspects, and so on.

.

I think it noteworthy that in the report there is no reference to weightings in this table or in the narrative which accompanies it. This format in which the table containing the grouping of dimensions and the subtotalling of dimensions appears in this prominent, subject to what the evidence reveals would appear preeminent position in the version of the report of the 3rd October and also in the version of the report of the 18th October, where it appears at page 48, and again appears in the chapter headed “Summary Concluding Remarks and Recommendations.”

.

However, in the final version of the report on the 25th October, this table, which as I have suggested may contain what could be regarded as the culmination of Mr. Andersen’s work, appears to have been demoted to the body of the report and did not form part of the section dealing with the concluding remarks and the recommendation of the evaluator.

.

In the final report, at page 47, and the final report is contained at Leaf 50, Book 46, the final evaluation is described by reference to two tables only; Table 16, which sets out all of the grades achieved by the applicants with respect to each of the dimensions and the weightings, well a weighting in any case, to be applied to each of those dimensions, and a grand total. Table 17 sets out the same information translated into numeral form, with a weighting applied, and a total in numeral form at the bottom to generate a ranking.

.

If you go to page 47 of that final version of the report, you’ll see that in describing the aims of the process and the way in which this has been achieved, the reference to arriving at that result in accordance with the table set out in the evaluation model, and the reference to achieving that result by firstly scoring the indicators, aggregating them and then taking the totals for each dimension and aggregating them to arrive at a subtotal for the aspects, and then aggregating all of the aspects to arrive at a grand total for the result has been removed — has, in fact, been replaced earlier in the report, but removed from the section containing the concluding result.

.

What is clear from the notes to which I will refer in a moment of the meetings kept by Mr. McMahon, is that there appeared to have been some discussion concerning this approach to the evaluation and a degree of agreement amongst the members of the Evaluation Team that the report could not be expressed in terms of the table which I have just referred to as having been demoted to the body of the report.

.

This approach to the presentation of the results, however, seems to throw up a considerable number of inconsistencies, and as I’ll mention in a moment, seems to have given rise to considerable difficulties for a number of members of the Evaluation Team in endeavouring to check the results and to see whether the tots arrived at in the report could be justified by reference to the scores set out in the report.

.

Whatever, or however, the Evaluation Team agreed to present as the result of the evaluation process, it seems that by the 23rd October the question of weightings was giving rise to a considerable amount of debate and a number of problems. Mr. McMahon’s note of the 25th October makes clear that there were still very real concerns regarding the issue of weightings. And he notes that Mr. Andersen himself admitted that members of the Evaluation Team were still at odds at that point regarding the issue of weightings, and indeed in addition, regarding the related issues of ranking, grading, marks or points and so forth.

.

CHAIRMAN: I am keen, Mr. Healy, that we do get Mr. McMahon’s evidence underway for a reasonable portion of today, so even if it means perhaps not putting every —

.

MR. HEALY: We won’t go through all the documents.

.

MR. NESBITT: Mr. Chairman, I should indicate that I’ll be seeking time-out for this particular witness, so we can have some opportunity to assess what is now the theory which we asked for some weeks ago, that has been delivered on the hoof, and I would like the opportunity to review what has been said by My Friend, very helpfully, and the documentation which he says establishes it, to ensure that we ensure the evidence available to the Tribunal is the appropriate evidence. And I’d be asking for time to consider that this evening so the evidence could start tomorrow morning in relation to that matter.

.

I have spoken to the witness over lunch, because I had indicated to him that we had wished to know his views, and he would like to have the same opportunity to consider the documentation for the balance of the day in the hope of starting tomorrow morning.

.

CHAIRMAN: If he has expressed that view and you have advised on that basis, Mr. Nesbitt, I won’t force matters to go on today, and if needs be, we’ll start a little earlier tomorrow to compensate for time lost.

.

MR. NESBITT: I am much obliged.

.

MR. HEALY: From Mr. McQuaid’s evidence last Friday, and it would appear, from the documents referred to in Ms. Nic Lochlainn’s fax of the 6th October, some of the evaluators, in arriving at a score, firstly ranked or graded applicants by giving them a score based on a 5-point scale between A and E. The scores awarded to applicants in this way were then, for the purpose of subtotalling, converted to numbers, the weightings were then applied, and the marks were aggregated and the aggregate score was reconverted into letters, once again on the 5-point scale from A to E.

.

Now, it wasn’t clear from Mr. McQuaid’s evidence, it’s not clear from the evaluation report, and it’s not clear from the minutes of any of the meetings that this was done in any other or every other evaluation sub-group. While Mr. Andersen expressed his result in terms of letters on a 5-point scale from A to E, it seems that the evaluators, at least on the Irish side, were more comfortable with numbers, and indeed as far as we can see from other evidence to which I will refer, or other information, converted scores to numbers not just in Mr. McQuaid’s case, but in a number of other cases, to arrive at a result or to check a result.

.

I want to refer to one of these exercises, or perhaps more than one, maybe one or two of these exercises. In his marginal note of the second version of the evaluation report, that is the version of the 18th October, Book 55, at page 14, Mr. O’Callaghan drew attention to the fact that the dimensions in the table of the marketing aspects did not appear to have been ranked in accordance with their ranking in paragraph 19 of the RFP. He also drew attention to the fact that market development is not the dimension with the highest priority and with the highest weight, but that this was merely part of the dimension with the highest priority and the highest weight.

.

It is of interest that in two documents, which have only just been retrieved by the Tribunal, similar points were taken by Mr. Riordan and Mr. Buggy. Members of the Tribunal team are endeavouring to put all these documents together in a way which makes them reasonably accessible to interested persons, and I don’t think they have all been collated. I can certainly provide the Department with a copy on a provisional basis, in that it may be necessary to add to it, and I don’t propose to give it a number at this stage. And I may be able to make some copies available to other persons, again on a similarly provisional basis at the moment.

.

In the first unnumbered leaf of this book, there are a number of pages from Mr. Buggy’s version, it would appear, working version of the 18th October draft of the report. In due course the Tribunal proposes making the entire document available.

.

If you go to the second page of this document, you’ll see that Mr. Buggy was tackling the scoring attributed to the applicants under the heading “Marketing Aspects.” And you’ll see where in the right-hand margin he has written what appears to read, “Implied weighting favours market development, but tariffs is greater in RFP.” I think that’s a reference to the same point that was alluded to by Mr. O’Callaghan, and I think that the narrative was changed ultimately to remove any language which would tend to suggest that market development was the dimension with the highest priority or the highest weight.

.

But what is also of interest is that Mr. Buggy appears to have attempted to calculate in numeral terms the subtotals on marketing aspects. And in doing so, appears to have used the weightings which were ultimately used in Tables 16 and 17 of the final draft evaluation report. And you’ll see that, it would appear that he scores only the top two applicants, and he scores A3 at what would appear to be 165, and A5 at what would appear to be 157, and on a numeral scoring, and bearing in mind that Mr. Andersen himself queried the distorting effects that this type of scoring might have, it would appear that A5 should not have been accorded the highest score, but rather A3, or alternatively they should each have had or been accorded, as I think Mr. Buggy seems to suggest, the same score.

.

Mr. Billy Riordan appears to have carried out a similar or a number of similar exercises on his copy of the report in approaching the same table, and this is at the final unnumbered leaf of this additional book, and it’s on page 14. For some reason the page numbering is different, but in any case, in dealing with the same table, Mr. Buggy seems to embark on a similar exercise, although he applies different weightings, it will be noted, to market development, coverage and tariffs. And he applies, in fact, to market development, the weighting agreed to be applied to it in the quantitative proportion of the approved draft evaluation model, the same for coverage, the same for tariffs. There appears to be a mistake in the weighting he applies to the international roaming plan. But again, you’ll see that in his scoring, in which he scores the top two ranked applicants only, the score he gives to A5 is lower than the score he gives to A3, from which it would appear reasonable to suggest that perhaps the market aspects subtotal score for each of these applicants should have been reversed or should have been the same, or at least should have given rise to questions to which one should be able to see answers in the documentation.

.

CHAIRMAN: This is Mr. Riordan?

.

MR. HEALY: This is Mr. Riordan.

.

CHAIRMAN: I think just one earlier sentence may have suggested you had reverted to Mr. Buggy —

.

MR. HEALY: I am sorry, Sir. This work is the result of a very late night review of all of these documents.

.

It is possible, of course, that Mr. Riordan or Mr. Buggy may be able to enlighten the Tribunal as to what these exercises were intended to convey, and one hopes that, or at least assumes that they must have received some explanation for what they appear to have identified as fairly significant discrepancies in the report, and in the evaluation. What is significant is that both of them appear to have assumed that the dimensions were weighted at arriving at subtotals for the aspects. Mr. McQuaid, of course, has given evidence that he proceeded on that basis in arriving at subtotals. And has given evidence to the effect that this is still his view.

.

Mr. Brennan has given evidence that when he was first presented by Mr. Andersen with what purported to be the result of the evaluation process, he could not see a result, and this is what prompted him to convert the graded scores into numeral or numbered scores. And I think, as I have already mentioned, I may have suggested to him that this is an exercise which should have been carried out right throughout the whole process if it was to have any validity. I am not suggesting for the moment that it is a valid exercise. But it would appear that a number of people involved in the evaluation did think it had some validity and carried out a number of exercises based on the conversion of letters to numeral scores.

.

If the exercise being carried out by Mr. Buggy and Mr. Riordan was to be carried out on Table 15 in the final evaluation report, it could have the effect of either changing the result or narrowing the result even further. In the course of these remarks, I don’t propose to go into the details of all of these calculations, and I propose to make some of them available to the Department and to any other interested persons. But if all of the lettered grades contained in the evaluation report were changed into their corresponding numbers, and the entire evaluation process carried out by aggregating numbers, then the ultimate result, in percentage terms or in any other terms, could be much closer or perhaps could even be a different result or a different ranking than that which appears in the final report.

.

Now, it is not being suggested that any of these exercises could result in what is a fundamentally different ranking, for the following reason: That the report as it stands, seems to suggest on any reasonable reading that the two front runners were so close as to be indistinguishable, but what is of concern to the Tribunal is that having regard to the fact that the two front runners were so close, the result seems to have been presented, at least in political terms, as one which was a clear result.

.

When it is borne in mind that fundamental queries concerning the nature of the process and the role of the quantitative and qualitative evaluations were being queried right up to the 23rd, and when it is borne in mind that there were very serious concerns being raised concerning the application of, or the identity of the weightings right up to the 23rd, the Tribunal has to ask why civil servants went to Government or allowed a Minister to go to Government with a report in this condition or with a report about which such serious reservations were still being raised?

.

The Tribunal also has to view all of the evidence to date and any evidence to be given in the context of evidence and information available to the Tribunal, all of which has been mentioned in the Opening Statement, concerning interventions by the Minister at critical points in the process, and in particular, interventions by the Minister with a view to bringing the process to a conclusion in an accelerated basis.

.

The question which the Tribunal has to ask I think is: If there were not administrative pressures on civil servants to carry out this work in what would now appear, on one view at least, to be a somewhat unorthodox fashion, the Tribunal would wish to know and discover what other pressures, if any, were involved?

.

Now, I had hoped to perhaps go through some of the tables, Sir, but I think it might be of more assistance if I made some of these tables available to Mr. McMahon and to Mr. Nesbitt, so that they can cast their eye over them before any reference is made to them, and I should also say that what the Tribunal is seeking to do here is to extrapolate from some of the work being done by civil servants at this time, and not to suggest that it could or should substitute its own work or its own calculations for those of the civil servants involved. What is more, the Tribunal is not suggesting that it has any view concerning the validity of converting lettered or if you like, A, B, C, D, E scores into numbers, and there may be very strong — there may be very substantive reasons why this couldn’t be done or shouldn’t be done at all, as indeed Mr. Andersen seems to have suggested.

.

CHAIRMAN: Very good. Well, there is a not insubstantial transcript to be read, which I have no doubt that counsel may desire time to read, and I’ll certainly need to consider it further myself. As we have lost a little time, I think we should probably commence at half past ten tomorrow, and Mr. Healy, I take it that obviously Mr. Fitzsimons, Mr. McGonigal and Mr. Fanning, if they choose to have sight of any of these draft copies, will also be entitled to them?

.

MR. HEALY: Oh, of course.

.

CHAIRMAN: Very good. Half past ten in the morning. Thank you.

.

THE TRIBUNAL THEN ADJOURNED UNTIL THE FOLLOWING DAY, WEDNESDAY,
2ND APRIL, 2003, AT 10.30AM.

Comments are closed.