
Leveled up: Auditability of AI and Machine Learning

Milan Kratochvil

Architects, and many others, remember the tightrope walk between flexibility/performance on one side and testability/predictability/V&V on the other, in systems with many run-time parameters, parallelism, or late binding (ranging from polymorphism to SOA-UDDI and ad-hoc computing). Now machine learning (ML) has leveled it up: essentially the same tradeoff, but broader and trickier.

Black box 

ML stirs up the fire. Despite its roots (rule induction and mining) in the successful decryption of a supposedly unbreakable cipher, its near future looks encoded in weight values somewhere in deep neural networks. Whereas black-box flight recorders clarified the chain of events and decisions in past emergencies, more and more IT is now landing in black boxes that hide opaque logic.

Predictability wasn’t a big deal in consumer IT and entertainment: when YouTube or Spotify wrongly offered you a title you were avoiding like the plague, you rarely asked why. If you just say “skis this wide apart look amusing”, I guess you’re in consumer IT; but if you insist on a layer-by-layer explanation of why most artificial-vision systems have a hard time in strong sunshine on white slopes, I bet you’re in corporate IT (image: Ski Robot Challenge, Korea).

Tackle it one-way or two-way

Business apps are very different from apps for billions of consumers (see slides 9 to 15 in this talk by Oracle’s VP at SICS). Your enterprise or team can tackle the leveled-up tradeoff both top-down and bottom-up:

  • assuring a framework of corporate values and procedures (particularly transparency, governance & compliance, accountability, and a security & safety culture)
  • applying appropriate technologies and practices in IT to build in mechanisms upfront for auditability, comprehensibility, predictability, traceability, and testability/V&V (as well as fraud prevention, such as restricted access to learning-data sets).

On the latter (bottom-up) part, there’s ongoing AI research to “unpack” the opaque logic buried within deep-learning systems, and to give them the ability to explain themselves. DARPA’s Explainable AI program (XAI) aims at new or improved ML techniques that produce more explainable models while maintaining a high level of prediction accuracy. Such machine-learning systems will be able to explain their rationale, strengths, weaknesses, etc.

Hybrid-AI tech vendors often address organizations with more constrained schedules, budgets, and levels of AI expertise. Hybrid learning systems combine “subsymbolic” ML with transparent symbolic computation (typically, well-known knowledge-processing techniques). The combination lowers the total cost of entry into AI and ML, because it evolves from logic that domain experts already know (rules, decision trees, etc.).

From there, hybrid systems employ ML iteratively to fine-tune this explicit logic: for example, narrowing the IF-part of a rule to the factors that prove most significant. That is, the results of ML on big data determine which variables to include (or omit), how to discretize a continuum of values into intervals, and which threshold values of a particular variable are relevant.
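As a toy sketch of that idea, the snippet below “learns” the threshold in a single hypothetical rule, IF income > T THEN approve, by scanning labeled examples for the split value that classifies them best. (The rule, variable names, and data are all invented for illustration; a real hybrid system would mine many variables and far larger data sets.)

```python
# Toy sketch: ML fine-tuning the IF-part of one rule.
# Rule template: IF income > T THEN approve. We pick the T that
# best separates approved from rejected examples.

def best_threshold(samples):
    """Return the candidate split value with the highest accuracy."""
    candidates = sorted({income for income, _ in samples})

    def accuracy(t):
        # Fraction of samples the rule "income > t => approve" gets right.
        return sum((income > t) == approved
                   for income, approved in samples) / len(samples)

    return max(candidates, key=accuracy)

# Hypothetical labeled data: (income, was_approved)
data = [(18, False), (22, False), (35, True), (41, True), (52, True)]
T = best_threshold(data)
print(f"Refined rule: IF income > {T} THEN approve")
```

The point is that the output is still a rule a domain expert can read and audit; only its threshold was decided by the data.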

Notably, a rule is still expressed as a rule, yet with an ever-smarter and more accurate IF-part. This is transparent to humans, and paves the way to embedding AI and ML into daily IT-dev practice: devs and architects will gradually find thousands of decision points, enterprise-wide, suited for small AI apps in daily business. Those will generate valuable skills, know-how, and a fingertip feel for where ML can work (and where it can’t).

Models, animations, transparency

Once the opaque logic is unpacked, or expressed as rules or trees, it’s time to revive your team’s modeling skills. Long story short: a decision tree (or an invocation path through a rule base) is excellent input to animations or test executions of different scenarios, making them transparent even to stakeholders and non-IT roles. That story is worth another blog post, later this spring.
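To make the “test executions of different scenarios” idea concrete, here is a minimal sketch that replays one scenario through a tiny hypothetical decision tree and logs every branch taken, producing a step-by-step trace a non-IT stakeholder could follow. (The tree, its questions, and the facts are invented for illustration.)

```python
# Minimal sketch: trace a scenario through a decision tree so each
# branch taken is visible. A node is (question, yes_subtree, no_subtree);
# a leaf is just the decision string.

tree = ("income > 30",
        ("age > 25", "approve", "review"),
        "reject")

def trace(node, facts, path=None):
    """Return (decision, list of 'question? yes/no' steps taken)."""
    path = path if path is not None else []
    if isinstance(node, str):          # leaf: final decision reached
        return node, path
    question, yes, no = node
    var, _, value = question.split()   # e.g. "income > 30"
    taken = facts[var] > int(value)
    path.append(f"{question}? {'yes' if taken else 'no'}")
    return trace(yes if taken else no, facts, path)

decision, steps = trace(tree, {"income": 45, "age": 22})
for step in steps:
    print(step)
print("decision:", decision)
```

Running it prints the two branch choices (income yes, age no) followed by the decision “review” — exactly the kind of trace that can drive an animation or a scenario walkthrough in a stakeholder review.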

About the author:
Milan Kratochvil
Trainer at Informator, senior modeling and architecture consultant at Kiseldalen.com, main author of UML Extra Light (Cambridge University Press) and Growing Modular (Springer), Advanced UML2 Professional (OCUP certification level 3/3). Milan and Informator have collaborated since 1996 on architecture, modeling, UML, requirements, and design.


Keywords: architecture, machine learning
