Taken for Granted

ESL, embedded processors, and more

DATE 2011: Tuesday March 15 – EDA will probably not be the source of big embedded software advances

Filed under: Uncategorized — March 15, 2011 @ 5:48 pm

Attending DATE 2011 today – listening to talks and panels, asking questions, and sounding people out for their views – I have realised that it is increasingly unlikely that the classical EDA industry, dominated as it is by Synopsys, Cadence and Mentor, will provide big advances in embedded software design and creation tools.

When asked about embedded software, what they all really talk about is virtual prototyping. Don’t get me wrong: virtual prototyping is extremely useful, both at the system architecture definition and analysis stage and for embedded software verification and validation. I have participated in virtual prototyping activities for years, and continue to work on them with colleagues. These tools are very important where I work at Tensilica, and to ESL design and verification flows for many users.

However, just because something is a big thing does not make it the only thing. Virtual prototypes are still post facto – used after initial software development is done. Or, if you use them for architectural modelling with traffic models rather than target software running to stimulate the system, they may be quite orthogonal to software development.

What I wonder is whether anyone in the commercial EDA tools industry is developing new capabilities for the a priori design and creation (automated synthesis, a.k.a. code generation?) of complex embedded software systems. Make no mistake, this is a big leap, potentially expensive and with rewards that may prove quite elusive. But such advances in design methodologies and tools for embedded software may have to emerge from the large user companies, if they emerge from anywhere at all.

For the big EDA companies, the virtual prototype seems to be a near-universal cure-all for embedded software. I’m still wondering if there is something more (and I don’t think it is going to be automatic SW synthesis from UML models, for example).

It would be nice to be proven wrong!

Off to DATE 2011, Grenoble

Filed under: Uncategorized — March 12, 2011 @ 3:10 pm

I am about to leave for DATE 2011 in Grenoble, where I will be giving a talk on “The Virtual Design Requirements of Configurable Manycore Baseband Platforms”, part of a Hot Topic session I helped organise with Rainer Leupers on Virtual Manycore Platforms: Moving Towards 100+ Processor Cores.  (Wednesday March 16, 1100-1230).

Grenoble, France

I missed DATE 2010 in Dresden, although ironically I was in Dresden the following week on a business trip. DATE has always been a conference of interest to me, because one of its strong focus areas is system-level design, ESL, and related topics. This year promises to be no exception. I have not yet worked out all the sessions I plan to attend, but the programme has too much for one person to see, so I will have to pick and choose.

I hope to meet some old friends at DATE and make some new ones, and I will try to write up notes on what I find interesting. I have had very little time to write in this blog over the past year, but going away to a conference provides an opportunity to reflect on interesting talks and research and, with jet lag waking me up all too early every morning, a little time to write up some notes!

A la prochaine!

Call for Papers: EDPS Symposium 2011, Monterey, April 7-8

Filed under: Uncategorized — December 22, 2010 @ 3:58 pm

Something I have had a loose association with for a number of years is the Electronic Design Process Symposium, which is being held next year on April 7-8, as always at the Monterey Beach Hotel in Monterey, California. I have given talks there, including a keynote, been on panels, and served on its programme committee. The programme committee is a bit sticky – once you get on, you tend to stay on, even if you can’t contribute very much. For the last few years I have not been able to contribute much, except by blogging about it.

Beach Sunset Winter at Monterey Beach Hotel

So this post is the start of what I can help with for the 2011 edition. The Call for Papers has been issued, and you can find it here (the main web site is here). Themes proposed so far for 2011 include

  • Parallel EDA
  • High-Level Design – including Requirements-Driven Design Flows
  • Cloud computing – including Software as a Service
  • Low-Power Design – including Solution Mapping to ITRS Roadmap
  • 3D ICs

and the submission deadline is February 28, 2011.

The symposium is rather unusual – from my recollection, somewhere between 30 and 60 people get together to discuss the process of designing electronic systems. The topics vary widely: design technology (“EDA”) and changes in advanced design processes and the technology of design (e.g. 3D chips) are often covered, and it is not just hard-core hardware – embedded systems, software-related issues, and multicore and multiprocessor programming have all been discussed in the past. The presentations are usually quite interactive and can turn into informal discussions, with many interesting insights emerging along the way.

I have found it very interesting to be part of this in the past (and perhaps, from time to time, in the future). If you haven’t looked in on it before, check it out and consider submitting a proposal for a presentation or another format such as a panel; failing that, consider attending and joining in the Q&A and discussions.

Formal Verification Outside the Box: Michael Theobald of D.E. Shaw Research

Filed under: Uncategorized — November 8, 2010 @ 11:50 pm

Today at the DVClub meeting in Silicon Valley, I attended a really interesting talk by Michael Theobald of D.E. Shaw Research, who described the use of formal verification techniques in the verification of Anton, their multi-node specialised computing machine designed to model the molecular dynamics of protein folding as a vehicle for drug discovery and design. The talk was entitled “Verification Challenges of D.E. Shaw Research’s Supercomputer”.

Anton has always been a particularly interesting project to us at Tensilica since part of the design uses Tensilica configurable, extensible processor technology.

However, the focus of Michael’s talk was on formal verification and in particular his work, with colleagues, on concepts of verifiability and DFV (Design For Verifiability). Here they used formal verification (property checking) in a way that was outside the box – or at least outside the box that I am aware of.

The basic concept is to use formal verification methods to improve a design’s verifiability in the places where formal does not work well, as well as applying it where it does. The process, as far as I understand it, consists of the following (a toy sketch follows the list):

  1. Automatically generate assertions for the design – for example, for branches controlled by a combination of conditions. Each generated assertion tests that the combination of conditions can actually reach the particular logic.
  2. Apply formal property checking to all the assertions.
  3. Normally, people applying property checking look at the properties or assertions that pass or fail in reasonable time, and throw away the unresolved ones (where the formal tool gives up or exceeds a run-time limit). This methodology, however, uses the unresolved assertions.
  4. Every unresolved assertion is assumed to represent a part of the design that is “hard to verify”. Applying manual analysis to each of these parts, the condition that made up the assertion is examined to see whether it could ever really be satisfied. If it cannot, that part of the design can be simplified. If it can, an alternative verification strategy may be necessary, or the design should be restructured to make it more verifiable.
  5. Using this approach, design verifiability, quality and confidence can all be improved beyond what is possible by just running more simulation.
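
To make the flow concrete, here is a toy sketch in C++ – my own illustration, not the D.E. Shaw flow or any particular tool’s API: generate a reachability check for every multi-condition branch, hand each one to a (stubbed) property checker with a run-time limit, and keep the unresolved results as the list of “hard to verify” regions for manual analysis.

    // Toy sketch of the DFV idea: auto-generated "boring" reachability
    // assertions, a stubbed formal engine, and retention of the UNRESOLVED
    // results as pointers to hard-to-verify parts of the design.
    #include <iostream>
    #include <string>
    #include <vector>

    enum class Result { Proven, Failed, Unresolved };

    struct Branch {
        std::string location;   // e.g. "ctrl_fsm.v:214" (hypothetical)
        std::string condition;  // the combined branch condition
    };

    // Stand-in for a real formal property checker with a run-time limit.
    Result check_property(const std::string& property, int time_limit_s) {
        (void)property; (void)time_limit_s;
        return Result::Unresolved;   // pretend the engine gave up
    }

    int main() {
        std::vector<Branch> branches = {
            {"decode.v:88",   "valid && !stall && (mode == BURST)"},
            {"arbiter.v:142", "req[3] && grant_locked && flush_pending"},
        };

        std::vector<std::string> hard_to_verify;
        for (const auto& b : branches) {
            // Step 1: generate an "obvious" reachability assertion.
            std::string prop = "cover (" + b.condition + ") at " + b.location;
            // Step 2: run the property checker on it.
            // Steps 3-4: keep, rather than discard, the unresolved results.
            if (check_property(prop, 600) == Result::Unresolved)
                hard_to_verify.push_back(b.location);
        }

        for (const auto& loc : hard_to_verify)
            std::cout << "review for verifiability: " << loc << "\n";
        return 0;
    }

In a real flow the stub would be replaced by calls to an actual formal engine, and the resulting list would drive the manual analysis and possible restructuring in step 4.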

How is this out of the box?   To me, two attributes of the methodology were new and unexpected:

  1. Using automated methods to generate “boring” or “obvious” assertions that are nonetheless useful in improving design verifiability.
  2. Using the unresolved properties as the basis for design or verification improvement, rather than throwing them away.

Maybe these are well-known techniques, but the approach taken by Michael Theobald certainly attracted a lot of interest in the audience.  He had many people come up to him after the talk.

All in all, an enjoyable lunch and talk at the DVClub. If you have a branch near you, check out their next talk.

Review of Embedded Systems Week, October 24-27, 2010

Filed under: Uncategorized — October 27, 2010 @ 10:17 pm

I attended Embedded Systems Week in Scottsdale Arizona from Sunday evening to Wednesday this week. This is the merger of 3 separate conferences and a whole bunch of workshops (I don’t have time to stay and attend any of those).  As I wrote in last week’s Preview, I gave an invited talk on Monday:  “ESL 2015:  The inevitable move to software programmability”, and participated in the industrial panel today (Wednesday), “The future of embedded architectures”.

The conference had a number of highlights. Vida Ilderem, VP, Intel Labs and Director of the Integrated Platform Research Lab, gave a keynote talk about the embedded market: “Challenges and Opportunities”. This reviewed Intel’s SoC platform strategy in embedded – heterogeneous SoCs of processors and accelerators. She also outlined tool opportunities in application-architecture co-design (“function-architecture co-design” of old) and quick architectural exploration for designing derivatives from platforms. As a platform-based design (PBD) advocate from way back, I found all of this music to my ears.

John Hennessy, President of Stanford and still a beacon in computer architecture, gave another keynote on Tuesday, “The Future of Computing from Phones to Warehouses: It’s a New Day”, in which the move up to the cloud and down to the smart phone and tablet were noted as the two key trends. He had a slide about the decline of desktop computer sales with numbers I had not noticed before (I wasn’t looking), and it was clear looking around the audience – more and more tablets (iPads) and smart phones, and far fewer laptops in use during the sessions than a few years ago. A point he made (I think he was quoting someone) was a good line: “Thread-level parallelism makes Moore’s Law the Programmer’s Problem….”

One talk on Monday was about new formal description mechanisms for software systems – “Components, Platforms and Possibilities: Towards Generic Automation for MDA” (MDA = Model-Driven Architecture) – discussing the FORMULA language and modelling project from Microsoft Research. This was of interest because of a number of conceptual overlaps with the EDA/ESL-oriented Rosetta modelling language and approach now being led by Professor Perry Alexander of the University of Kansas, Lawrence (with which I have had some association over the years).

In a session on SystemC-based synthesis, Mike McNamara had a standard slide that charted the various transitions in design style over the years – each lasting roughly 10 years – and then noted that the RTL/logic-synthesis design style is now well over 15 years old, leading to the conclusion that “the next IC design methodology is long overdue” (a plug for high-level synthesis, of course – which has started, but is still building and not yet mainstream).

I particularly enjoyed the talk by Professor Krishna Palem on Tuesday – the IEEE McDowell Lecture – on “Compilers, Architectures and Synthesis for Embedded Computing: Retrospect and Prospect”, in which, with a sense of history, humour and insight, he talked about work he has been involved in over the years, including Proceler, related projects such as HP’s PICO (which became Synfora), and various evolutions in the computing domain. As he pointed out, there were many threads of related development over the period and he could only touch on some of them.

The talk by Christoph Schumacher of RWTH Aachen on the parSC project, which is looking at parallel SystemC simulation on multi-core hosts, was of interest and tied into my own thinking that, as we move to more complex architectures, the basic simulation infrastructure (of which, at the system level, SystemC is a key part) needs to improve radically.

Finally, the industrial panel, well moderated by Professor Ed Lee of UC Berkeley (substituting for Alberto Sangiovanni-Vincentelli, who unfortunately could not come), was very enjoyable. I got to meet again my old friend and business-trip fellow traveller Pierre Paulin of STMicroelectronics, and to meet two new colleagues: Pranav Mehta, CTO of Intel’s Embedded and Communications Group, and Nat Seshan, Director of DSP Architecture at TI. This was an interesting panel – we all gave very brief overviews with just a few slides, and then had good questions from Ed Lee and from the audience. These included “What drives the design evolution of our architecture?”, “What about reuse?”, “How can academic research be more relevant to industry?” and “What keeps us up at night?”. One key driver for Tensilica is energy consumption – reducing it in our configurable, extensible processor technology is a strong imperative, and one where we have done pretty well and have interesting ideas for the future. We also touched on programming models, languages and paradigms, where the continued use of C (albeit with pragmas, directives, APIs, and use within constrained methodologies) looks like the mainstream for many years to come.

The 14th NASCUG (North American SystemC Users Group) meeting was also held at ESWeek on Monday evening. All in all, a busy, enjoyable, educational and useful several days.

Preview of Embedded Systems Week, Scottsdale, Arizona, October 24-29, 2010

Filed under: Uncategorized — October 19, 2010 @ 4:32 pm

Next week I will be at Embedded Systems Week in Arizona, giving a talk and being on a panel.

Embedded Systems Week is the union of three conferences: CODES+ISSS (International Conference on Hardware-Software Codesign and System Synthesis), CASES (International Conference on Compilers, Architectures and Synthesis for Embedded Systems) and EMSOFT (International Conference on Embedded Software). Several years ago, the groups responsible for these three conferences had the foresight to realise that they could build one much more interesting and stronger conference by holding them together and allowing attendees to attend all of them, rather than continuing as isolated events in different places and at different times. In addition, ESWeek holds several interesting workshops and tutorials, which you can read about in the programme.

I will be giving a talk on Monday October 25, as part of a special session on “From ESL 2010 to ESL 2015”, where I will be speaking on the topic “ESL 2015: The inevitable move to software programmability”. Then on Wednesday October 27 I will be part of a special industrial panel: “The future of embedded architectures”. There are also many other interesting sessions I am looking forward to attending at ESWeek:

  • Monday October 25, 15:30-17:30 – Session 3A: Optimising Multiprocessor and NoC platforms for performance, QoS and reliability
  • Tuesday October 26, 10:00-12:00 – Session 4A: MPSoC: Analysis and Synthesis
  • Tuesday October 26, 13:00-15:00 – Embedded tutorial on SystemC synthesis
  • Tuesday October 26, 15:30-17:30 – Special session with a talk by Krishna Palem on CASES retrospect and prospect
  • Wednesday October 27, 13:00-15:00 – Special session on unconventional fabrics, architectures and models for future multi-core systems

Finally, the three keynote addresses – by Vida Ilderem of Intel Labs, John Hennessy of Stanford, and Tom Henzinger of IST Austria – all look like great ways to start each day with something thought-provoking.

I hope to see you there if you can make it.

23rd Synopsys EDA Interoperability Forum, Thursday 21 October 2010, Santa Clara

Filed under: Uncategorized — October 11, 2010 @ 9:44 pm

I have been invited to participate in a panel session next week as part of the 23rd Synopsys Interoperability Forum, Thursday, 21 October 2010, at Agnews Historic Park in Santa Clara. You can read about the forum here and register on the web page.

The panel session will consist of some short presentations followed by a panel moderated by Will Strauss; the session starts at 1300. The topic is “The Importance of System-level Tool Interoperability in the Wireless Supply Chain”, and the abstract reads

“Wireless system design requires collaboration throughout the supply chain to define, develop, and deploy new products and services infrastructure to the market in a timely manner. With system level solutions now in full production for new projects, how does tool interoperability and system-level modeling standards impact this collaboration? In this session, wireless industry experts will discuss the importance of system-level tool interoperability in the wireless supply chain and its impact on the innovation and development of these ubiquitous and complex HW-SW devices.”

This topic is highly relevant to Tensilica and to a lot of the work I have done over the last few years on system modelling and baseband for wireless products involving configurable, extensible processors – from one to many cores. I hope some of you will be interested in the forum, and I hope to see you there. There are also sessions on standards and verification, a lunchtime keynote by Frank Schirrmeister, and other presentations to keep your interest.

Book Review: TLM-Driven Design and Verification Methodology

Filed under: Uncategorized — October 10, 2010 @ 3:16 pm

In late July I received a copy of a new book published by Cadence:   TLM-Driven Design and Verification Methodology.   I should mention up front that I know four of the authors well:   Brian Bailey, Felice Balarin, Mike McNamara and Yoshi Watanabe.   The other two authors are Guy Mosenson and Michael Stellfox.

The book is available from Lulu in a print-on-demand form here (price $US 110.99, for the physical book); from Amazon electronically for the Kindle here (digital list price $US 84.99, although selling for $77.21 on Sunday October 10 when I checked); and from Amazon in paperback form here (list price $110.99, but selling for $85.79 on Sunday October 10).   You can read more about the book and other people’s comments on it at the Cadence web site here.

It is part of a growing trend of corporate-sponsored technical books being made available in new forms, such as print-on-demand and electronic editions, rather than via conventional publishers. Two other examples of this trend – one from Cadence, one from Mentor Graphics – are:

  • A Practical Guide to Adopting the Universal Verification Methodology, by Sharon Rosenberg and Kathleen Meade.  You can read about it here.
  • High-Level Synthesis Blue Book, by Michael Fingeroff.  You can read about it here.

As corporate-sponsored books, they will of course feature the sponsoring company’s tools in the examples of the methodologies they discuss. This is both natural and to be expected – illustrating the concepts in action is valuable, and it has to be done with some tools, whether academic or commercial. I have used Tensilica’s tools and IP as examples in several chapters of several books, most notably in my recent book co-authored with Brian Bailey, ESL Models and their Applications: Electronic System Level Design and Verification in Practice, from Springer.

I should also mention that I was not very quick in reading the TLM-Driven Design book, taking over two months to finish it.   The only thing I will plead is pressure of work, which also explains my small number of blog posts over the last year (see Too Busy to Fulminate).

So on to the book. First, it is well done: well written, nicely laid out, with excellent colour drawings and screen shots (some of which are just a little too small for my ageing eyes, but readable when I looked closely), and high production values. My review copy may not have been quite the final version, but it was nicely done all the same.

Second, it contains a lot of good methodological advice, both in explicit chapters and sprinkled throughout. As someone who has long believed that design methodology comes first and good tools second, I found this focus gratifying. Although the methodology is implemented in detail with Cadence tools, many of the steps involved in the implicit and explicit design flows can be accomplished with competitive commercial, academic or internal tools. One good example: chapter 3 describes five levels of abstraction in the TLM methodology: pure functional, Functional Virtual Prototype (FVP)-ready, High-Level Synthesis (HLS)-ready TLM (transaction level), HLS-ready signal level, and RTL. Much of the design flow through these levels can be followed with (mostly) Cadence tools, or with a set of tools containing nothing from Cadence at all. This is a good example of a design methodology that goes beyond a particular set of commercial tools.
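
For readers not steeped in TLM, a small sketch may help make the abstraction levels concrete. The fragment below is my own generic illustration, not taken from the book: a memory target modelled at the loosely-timed TLM-2.0 level in SystemC, where one blocking transport call stands in for a whole pin-level bus protocol.

    // Minimal loosely-timed TLM-2.0 memory target (generic illustration only).
    #include <cstring>
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_target_socket.h>

    struct SimpleMemory : sc_core::sc_module {
        tlm_utils::simple_target_socket<SimpleMemory> socket;
        unsigned char storage[256] = {};

        SC_CTOR(SimpleMemory) : socket("socket") {
            socket.register_b_transport(this, &SimpleMemory::b_transport);
        }

        // One transaction-level call replaces cycle-by-cycle bus signalling.
        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
            unsigned char* data = trans.get_data_ptr();
            sc_dt::uint64  addr = trans.get_address();
            unsigned       len  = trans.get_data_length();
            if (trans.is_read())
                std::memcpy(data, &storage[addr], len);    // no error checking, for brevity
            else
                std::memcpy(&storage[addr], data, len);
            delay += sc_core::sc_time(10, sc_core::SC_NS); // approximate access latency
            trans.set_response_status(tlm::TLM_OK_RESPONSE);
        }
    };

Refining a model like this down through HLS-ready TLM and signal level to RTL is essentially the journey those five abstraction levels describe.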

As a result of the focus on methodology, there are some very useful lessons and points in this book that should interest the general electronic design community, well beyond Cadence tool users. The preface and the first three chapters – the preface and chapter one giving an overview of design issues and processes, needs and requirements, and the attributes of a TLM-based methodology; chapter two an overview of languages for TLM; and chapter three an overview of the Cadence TLM-driven methodology – will be of general value to a wide readership.

The heart of the book is chapters 4 and 5, on high-level synthesis, and the supporting Appendix A on the SystemC synthesisable subset. Chapter 4 is a good overview of high-level synthesis fundamentals (that is the name of the chapter). Chapter 5 goes into considerable detail on the Cadence C-to-Silicon Compiler tool, with worked examples to illustrate how the tool works and what it can offer designers. As well as dataflow examples, this chapter also gives some insight into control-oriented HLS and into using HLS in ECO (Engineering Change Order) design – that is, incremental modification of a design and how that is reflected in a tool flow. The latter has not been addressed much in most writing on HLS.
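
To give a flavour of what such a tool consumes – again a generic illustration of my own, not an example from the book and not specific to C-to-Silicon – HLS input is typically fixed-bound, statically-sized C/C++ (or synthesisable SystemC) such as this small dataflow kernel, which the tool then schedules, pipelines or unrolls into RTL:

    // Hypothetical dataflow kernel in an HLS-friendly style: fixed trip count,
    // static array sizes, no dynamic memory or recursion.
    const int TAPS = 8;

    int fir(const int sample[TAPS], const int coeff[TAPS]) {
        int acc = 0;
        for (int i = 0; i < TAPS; ++i)    // bounded loop: the tool decides how
            acc += sample[i] * coeff[i];  // far to unroll or pipeline it
        return acc;
    }

How that design intent – unrolling, pipelining, resource sharing – is actually conveyed varies from tool to tool, and is the kind of detail a chapter like this has to work through for one particular tool.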

Although chapters 4 and 5 have a good set of references, one additional one that is a useful overview of HLS is the book edited by Philippe Coussy and Adam Morawiec:  High-Level Synthesis: from Algorithm to Digital Circuit (Springer, 2008).

I was disappointed in chapters 6 and 7, which conclude the book. Both are on verification – chapter 6, Verification Fundamentals, and chapter 7, the TLM-driven verification flow. They cover Cadence verification tools in the context of UVM, the Universal Verification Methodology. They are somewhat repetitive, going over similar ground twice. It might have been better to recut these two chapters so that the methodology and the practice using Cadence tools were a little more blended; this could have reduced some of the repetition.

One other issue with these latter two chapters is that I could not get a strong sense of the relative advantages and disadvantages of the two primary languages used and illustrated – e and SystemVerilog. Cadence has been fence-sitting for a long time on the verification-language issue – somewhat naturally, given its acquisition of Specman and e, and its desire to evolve its tools to support SystemVerilog. As is acknowledged in chapter two, this book is more e-centric, perhaps reflecting a desire by Cadence to reinforce its strong position with this still mostly proprietary language (it has an IEEE standard, 1647, but is still primarily supported by Cadence). A little more clarity on these points would be useful, but is perhaps too much to expect from this book.

However, these are relatively small quibbles.   Overall, the book is most useful for its general guidance on modern methodologies, and its good overview of HLS and the Cadence C-to-Silicon HLS tool.   If you are interested in modern TLM-based design approaches, I think this is well worth a look.

Day 4 and 5 of DAC 2010: SOC Enablement, High-Level Synthesis and Heterogeneous Systems

Filed under: Uncategorized — June 20, 2010 @ 12:53 pm

On Thursday June 17, DAC had a special set of sessions labelled “Embedded/SOC Enablement Day”, with talks from a variety of people from a variety of companies about their strategies and tradeoff choices for complex SOCs, usually in embedded systems. This included a keynote by Gadi Singer of Intel. As I reported in my note on Tuesday, the “P-word” (Platform, and platform-based design) has returned with a vengeance. This was noticeable in the talks by Gadi Singer (Intel), Yervant Zorian (Virage Logic), John Bruggeman (Cadence – a passionate speaker about EDA 360; this was the first time I had seen him talk), Ivo Bolsens (Xilinx), Shauh-Teh Juang (TSMC) and Rob Aitken (ARM). Ten Years After (well, maybe eleven or twelve), the design approach we observed, and predicted would become ubiquitous, has indeed become accepted and ubiquitous.

Later that day I attended the panel discussion on “What Input Language is the Best Choice for High-Level Synthesis (HLS)?”. No real surprises here: with five of the six panelists from the C/C++/SystemC camp and one from Bluespec (Rishiyur Nikhil), C and its variants were the preference advocated by the majority. Dan Gajski of UC Irvine had tried, by pre-questioning the panelists and summarising their responses along various lines, to draw out some reasons for their preferences (features and capabilities). As someone a little biased towards C/C++, with selective use of SystemC where necessary to express some communication aspects of systems, I find the idea of teaching designers a new language, as with Bluespec (perhaps better described as new semantics expressed as extensions to SystemVerilog), less appealing from a practical point of view. Although, as I asked the panelists, looking at reference code one does wonder about the level of teaching required even for the “older” languages, since much of it can be very poorly written. It seems inevitable that C/C++/SystemC and combinations thereof will continue to be the mainstream input form for the foreseeable future.

On the last day of DAC (Friday), I was part of a tutorial on “SystemC for Holistic System Design with Digital Hardware, Analog Hardware, and Software”, talking about Application-Specific Instruction-set Processor (ASIP) design using Tensilica as an example, hardware-software tradeoffs in this style of design, and how, via SystemC-based system models, it fits into modelling higher-level heterogeneous systems. My fellow tutorial instructors talked about analogue and mixed-signal design and verification using system-level modelling approaches. It seemed a fitting end to what, overall, I think must be judged a successful DAC, and one that reflected some recovery in the electronic design industry. I see DAC has released its preliminary attendance figures, which seemed a little up on last year, at least in several categories.

Day 3 of DAC 2010: Snatching Victory for NOCs from the Jaws of Confusion

Filed under: Uncategorized — June 17, 2010 @ 10:35 pm

On Wednesday June 16 I moderated a special session at DAC 2010 in the morning: “A Decade of NOC Research – Where Do We Stand?”. This was an extremely interesting special session, organised by Anand Raghunathan and Sri Parameswaran, with three excellent speakers:

  1. Giovanni De Micheli of EPF Lausanne, Switzerland, who gave an overview of NOCs (Networks on Chips): “Networks on Chips: From Research to Products”.
  2. Kees Goossens of TU Eindhoven, the Netherlands, who talked more deeply on “The Aethereal Network-on-Chip after Ten Years:  Goals, Evolution, Lessons, and Future”
  3. Bruce Mathewson, AMBA architect and Fellow at ARM in Cambridge, UK:  “The Evolution of SOC Interconnect and How NOC Fits Within it”.

Each talk was highly informative and entertaining, and the three together gave a really good view of the NOC past, present and future. The audience of up to 90 people seemed to agree. Each talk had questions, and then we concluded the two-hour session with a 30-minute panel discussion on relevant NOC questions. The audience provided every question (I was prepared to lob in one of my own, but it was not necessary) and there was an excellent debate on NOCs.

KAIST NOC research chip, Korea

One issue that came to the fore in the talks and panel discussion is that the definition of a NOC is extremely elastic. It can be defined by characteristics, but many attributes of a NOC have also influenced advanced hierarchical bus-style interconnect such as the relatively recent AMBA 4 from ARM. As a result, I would suggest it is time to think of retiring the NOC term and using “advanced interconnect” instead. Future complex SOCs will use interconnect that incorporates attributes and concepts drawn from buses, point-to-point interconnect and NOCs, and may have several different styles used in different subsystems, with a chip-level interconnect concept suited to the application. Thus, perhaps without having very many commercial examples of chips that are “pure NOCs”, we can declare victory, retire the term, and move on to the more important issues of specifying, designing and verifying the complex interconnect schemes that future designs will need.

A second interesting issue is the growing importance of tools for designing advanced interconnect for SOCs, whether bus-based, NOC-based or mixtures of all types. In the past, design groups could muddle through by evolving from legacy buses, but this looks less likely in the future, and NOCs in particular need modelling and analysis tools to make sure they are right for the application and implemented with the right characteristics. The EDA industry in particular seems to be letting the side down here, as the panelists and audience did not see EDA/ESL vendors offering much in this domain. At a crucial time for the EDA industry, and with major changes afoot, highlighting this future opportunity at DAC seems like the right thing to do.

My thanks again to the organisers and speakers.  It was a great session of high value to all who attended.   I was glad that I was asked to help out.