Establishing the Genuinity of Remote Computer Systems

USENIX Security Symposium, 2003

Cited by: 213

Abstract

A fundamental problem in distributed computing environments involves determining whether a remote computer system can be trusted to autonomously access secure resources via a network. In this paper, we describe a means by which a remote computer system can be challenged to demonstrate that it is genuine and trustworthy. Upon passing a test...

Introduction
  • For most types of valuable real-world objects, there are generally accepted methods of assessing their genuinity in a non-destructive fashion.
  • The authors can discern real diamonds from imitations by examining their electrical characteristics and their indices of refraction.
  • The authors have few such measures for computer systems.
  • When answering the question of whether a computer system is real, the authors can only verify that it looks like a computer and acts like a computer.
  • The dynamic nature of a programmable computer means that it may not always behave the same in the future.
  • When that computer system is moved from their immediate presence, the authors have few guarantees that it will continue to behave as expected.
Highlights
  • For most types of valuable real-world objects, there are generally accepted methods of assessing their genuinity in a non-destructive fashion
  • When that computer system is moved from our immediate presence, we have few guarantees that it will continue to behave as expected
  • We introduce the need for a remote system genuinity test with a motivating example: Suppose Alice is the conscientious administrator of a network of computer systems that rely on a central NFS [42] server.
  • Several projects have been dedicated to the development of secure bootloaders that allow a computer system to authenticate the software that it loads [6, 11, 26, 50] by using a cryptographic secret stored in a secure coprocessor or on a smartcard
  • As long as the genuinity test can verify that the kernel is running and in control of the system, a secure bootloader for the kernel offers no additional benefit
  • A great deal of media attention has been focussed on the Trusted Computing Platform Alliance (TCPA) [1], Microsoft’s Palladium Initiative [14], and Intel’s LaGrande [35]
Conclusion
  • Several projects have been dedicated to the development of secure bootloaders that allow a computer system to authenticate the software that it loads [6, 11, 26, 50] by using a cryptographic secret stored in a secure coprocessor or on a smartcard
  • These systems require the integration of a secure BIOS or special loader to guarantee that no hostile code becomes operational on the machine in question.
  • The authors point out two primary differences between these systems and their own work.
Related work
  • Our work is the first to deal specifically with the problem of determining whether a remote computer is a real computer. Nevertheless, there are several related projects that propose alternative methods for creating trusted remote systems.

    6.1 Execution Verification

    We are not the first to leverage the time delay inherent in the computational complexity of a problem to prove characteristics of an operating environment. Much work has been done in constructing programs that can check their work to ensure their integrity [9, 47] as well as making sure that they have not taken shortcuts in their execution [10, 20]. In particular, Jakobsson and Juels characterized and articulated the concept of a proof of work [27] that is similar to the rationale by which our checksum works. In their taxonomy, the result of our memory checksum during directed exercising of the CPU constitutes a bread pudding protocol—a way of reusing a hard-to-compute result for another purpose.
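
    The following sketch (added for this summary, in C) illustrates the general shape of such a timed challenge: a verifier supplies a seed, the prover traverses an agreed-upon memory image in a seed-dependent pseudorandom order and returns a checksum, and the verifier checks both the value and the time taken. It is not the authors' checksum, which additionally incorporates processor side effects such as TLB and cache behavior that a simulator cannot reproduce quickly; the region size, PRNG, mixing function, and one-second time bound here are all invented for illustration.

    /* Sketch of a timed memory-checksum challenge (illustrative only; not the
     * paper's algorithm). Both parties are assumed to hold the same memory
     * image, e.g. the kernel text; here the image is just a deterministic
     * buffer fill. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define REGION_WORDS (1u << 20)              /* 4 MiB of 32-bit words */

    /* xorshift PRNG; stands in for whatever traversal the challenge dictates. */
    static uint32_t xorshift32(uint32_t *state)
    {
        uint32_t x = *state;
        x ^= x << 13;
        x ^= x >> 17;
        x ^= x << 5;
        return *state = x;
    }

    /* Prover: checksum the region in a seed-dependent pseudorandom order,
     * mixing each word with its index so substituted or reordered pages
     * change the result. */
    static uint32_t memory_checksum(const uint32_t *region, uint32_t seed)
    {
        uint32_t state = seed ? seed : 1;
        uint32_t sum = seed;
        for (uint32_t i = 0; i < REGION_WORDS; i++) {
            uint32_t index = xorshift32(&state) & (REGION_WORDS - 1);
            sum = ((sum << 1) | (sum >> 31)) ^ region[index] ^ index;
        }
        return sum;
    }

    int main(void)
    {
        uint32_t *image = malloc(REGION_WORDS * sizeof *image);
        if (image == NULL)
            return 1;
        for (uint32_t i = 0; i < REGION_WORDS; i++)
            image[i] = i * 2654435761u;          /* stand-in "kernel image" */

        uint32_t challenge_seed = 0xC0FFEE42u;   /* chosen by the verifier */

        /* Verifier precomputes the expected answer on its own copy... */
        uint32_t expected = memory_checksum(image, challenge_seed);

        /* ...and the prover must return the same value within a time bound
         * (the 1.0 s bound here is arbitrary). */
        clock_t start = clock();
        uint32_t answer = memory_checksum(image, challenge_seed);
        double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;

        printf("checksum=%08x expected=%08x time=%.3fs\n",
               (unsigned)answer, (unsigned)expected, elapsed);
        puts(answer == expected && elapsed < 1.0 ? "accepted" : "rejected");
        free(image);
        return 0;
    }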

    Monrose, Wyckoff, and Rubin describe a system similar to ours that verifies the correct execution of code on a remote Java Virtual Machine [34] in order to detect the possibility that the JVM has cheated on a calculation. An important distinction between their work and ours is that a JVM, being a simulator, is always susceptible to eavesdropping by its controller, and its data can be manipulated at any time. Execution verification must therefore be performed continuously for a JVM, rather than once at boot time, to confirm the determinism of computational results.
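
    To make the contrast with continuous verification concrete, the sketch below (again in C, invented for this summary rather than taken from any of the cited systems) shows the basic audit-by-recomputation idea behind uncheatable distributed computation and remote audit [10, 20, 34]: a verifier hands jobs to an untrusted worker and later recomputes a random sample of the returned results, catching a worker that corrupts even a modest fraction of them with high probability. The task, job count, cheating pattern, and audit rate are all placeholders.

    /* Sketch of audit-by-recomputation for outsourced work (illustrative only). */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NUM_JOBS   1000
    #define NUM_AUDITS 50

    /* The delegated task: any deterministic function of its input will do. */
    static uint64_t f(uint64_t x)
    {
        uint64_t h = 1469598103934665603ull;     /* FNV-1a style mixing */
        for (int i = 0; i < 8; i++) {
            h ^= (x >> (8 * i)) & 0xff;
            h *= 1099511628211ull;
        }
        return h;
    }

    /* Stand-in for the untrusted worker; corrupts about 10% of results
     * when `cheat` is set. */
    static uint64_t worker(uint64_t x, int cheat)
    {
        uint64_t r = f(x);
        return (cheat && x % 10 == 0) ? r ^ 1 : r;
    }

    int main(void)
    {
        static uint64_t results[NUM_JOBS];
        int cheat = 1;

        /* 1. Delegate every job to the worker. */
        for (uint64_t i = 0; i < NUM_JOBS; i++)
            results[i] = worker(i, cheat);

        /* 2. Audit: recompute a random sample locally and compare. With a
         * 10% corruption rate, 50 audits miss the cheating with probability
         * about 0.9^50, i.e. roughly half a percent. */
        srand(12345);
        int caught = 0;
        for (int a = 0; a < NUM_AUDITS; a++) {
            int i = rand() % NUM_JOBS;
            if (results[i] != f((uint64_t)i))
                caught = 1;
        }
        puts(caught ? "audit failed: worker returned a wrong result"
                    : "all audited results check out");
        return 0;
    }

    The point of contrast is that such sampling must continue for as long as the untrusted environment runs, whereas a genuinity test of the kind described in this paper is meant to be performed once, when the machine boots.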
References
  • [1] Trusted Computing Platform Alliance. TCPA main specification. http://www.trustedcomputing.org/.
  • [2] Ross Anderson. http://www.cl.cam.ac.uk/users/rja14/tcpafaq.html.
  • [3] AOL. The America Online Instant Messenger Application. http://www.aol.com/, 2002.
  • [4] William A. Arbaugh. Improving the TCPA. IEEE Computer, 35:77–79, August 2002.
  • [5] William A. Arbaugh. The TCPA; what's wrong; what's right and what to do about it. http://www.cs.umd.edu/~waa/TCPA/TCPAgoodnbad.pdf, July 2002.
  • [6] William A. Arbaugh, David J. Farber, and Jonathan M. Smith. A secure and reliable bootstrap architecture. In IEEE Symposium on Security and Privacy, pages 65–71, May 1997.
  • [7] R. Bedichek. Some efficient architecture simulation techniques. In Proceedings of the USENIX Winter 1990 Technical Conference, pages 53–64, Berkeley, CA, 1990. USENIX Association.
  • [8] S. M. Bellovin and M. Merritt. An attack on the interlock protocol when used for authentication. IEEE Transactions on Information Theory, 40(1):273–275, January 1994.
  • [9] Manuel Blum and Sampath Kannan. Designing programs that check their work. In ACM Symposium on Theory of Computing, pages 86–97, 1989.
  • [10] Jin-Yi Cai, Richard J. Lipton, Robert Sedgewick, and Andrew Chi-Chih Yao. Towards uncheatable benchmarks. In Structure in Complexity Theory Conference, pages 2–11, 1993.
  • [11] Paul C. Clark and Lance J. Hoffman. BITS: A smartcard protected operating system. Communications of the ACM, 37(11):66–94, November 1994.
  • [12] Christian Collberg and Clark Thomborson. On the limits of software watermarking. Technical Report 164, University of Auckland Dept. of Computer Science, August 1998.
  • [13] Christian Collberg and Clark Thomborson. Watermarking, tamper-proofing, and obfuscation: tools for software protection. Technical Report 170, University of Auckland Dept. of Computer Science, February 2000.
  • [14] Microsoft Corporation. http://www.microsoft.com/presspass/features/2002/jul02/0724palladiumwp.asp.
  • [15] Donald W. Davies and Wyn L. Price. Security for Computer Networks. John Wiley & Sons, second edition, 1989.
  • [16] Whitfield Diffie and Martin E. Hellman. New directions in cryptography. IEEE Transactions on Information Theory, IT-22(6):644–654, November 1976.
  • [17] Joan G. Dyer, Mark Lindemann, Ronald Perez, Reiner Sailer, Leendert van Doorn, Sean Smith, and Steve Weingart. Building the IBM 4758 secure coprocessor. IEEE Computer, 34(10):57–66, October 2001.
  • [18] Paul England and Marcus Peinado. Authenticated operation of open computing devices. In Proceedings of the 7th Australasian Conference on Information Security and Privacy, pages 346–361. Springer-Verlag, 2002.
  • [19] Free Software Foundation. GNU GRUB. http://www.gnu.org/software/grub/grub.html, 2003.
  • [20] Philippe Golle and Ilya Mironov. Uncheatable distributed computations. In CT-RSA, pages 425–440, 2001.
  • [21] D. Grover. The protection of computer software: Its technology and applications. In The British Computer Society Monographs in Informatics, chapter Program Identification. Cambridge University Press, second edition, 1992.
  • [22] Mark Hachman. Via adds to "Nehemiah", plans mobile. http://www.extremetech.com/article2/0,3973,838362,00.asp, 2003.
  • [23] Fritz Hohl. Time limited blackbox security: Protecting mobile agents from malicious hosts. Lecture Notes in Computer Science, 1419:92–113, 1998.
  • [24] VIA Technologies Inc. The VIA Padlock Data Encryption Engine. http://www.via.com.tw/en/viac3/padlock.jsp, 2003.
  • [25] VMware, Inc. The VMware workstation simulator. http://www.vmware.com/, 2002.
  • [26] N. Itoi, W. A. Arbaugh, J. McHugh, and W. L. Fithen. Personal secure booting. In Proceedings of the Sixth Australasian Conference on Information Security and Privacy, pages 130–144, July 2001.
  • [27] Markus Jakobsson and Ari Juels. Proofs of work and bread pudding protocols. In IFIP TC6 and TC11 Joint Working Conference on Communications and Multimedia Security (CMS '99), Leuven, Belgium. Kluwer, September 1999.
  • [28] Steven Kent and Randall Atkinson. Security architecture for the Internet Protocol. IETF RFC 2401, November 1998.
  • [29] John B. Lacy, Donald P. Mitchell, and William M. Schell. CryptoLib: Cryptography in software. In UNIX Security Symposium IV Proceedings. USENIX Association, 1993.
  • [30] Kevin Lawton. http://plex86.org/, 2002.
  • [31] Kevin Lawton. Bochs: The Open Source IA-32 Emulation Project. http://bochs.sourceforge.net/, 2003.
  • [32] Peter S. Magnusson, Magnus Christensson, Jesper Eskilson, Daniel Forsgren, Gustav Hallberg, Johan Hogberg, Fredrik Larsson, Andreas Moestedt, and Bengt Werner. Simics: A full system simulation platform. IEEE Computer, 35(2):50–58, February 2002.
  • [33] Peter S. Magnusson and Bengt Werner. Some efficient techniques for simulating memory. Technical Report R94-16, Swedish Institute of Computer Science, August 1994.
  • [34] Fabian Monrose, Peter Wyckoff, and Aviel D. Rubin. Distributed execution with remote audit. In ISOC Network and Distributed System Security Symposium, pages 103–113, 1999.
  • [35] Paul Otellini. Intel Developer Forum, Fall 2002, keynote speech. http://www.intel.com/pressroom/archive/
  • [36] Etherboot Project. http://www.etherboot.org/, 2003.
  • [37] Niels Provos. Encrypting virtual memory. In Proceedings of the 9th USENIX Security Symposium. USENIX Association, 2000.
  • [38] Pyxis Systems Technologies. AIM/Oscar protocol specification: Section 3: Connection management. http://aimdoc.sourceforge.net/faim/protocol/section3.html, 2002.
  • [39] Ronald L. Rivest and Adi Shamir. How to expose an eavesdropper. Communications of the ACM, 27(4):393–395, April 1984.
  • [40] John Scott Robin and Cynthia E. Irvine. Analysis of the Intel Pentium's ability to support a secure virtual machine monitor. In Proceedings of the 9th USENIX Security Symposium. USENIX Association, 2000.
  • [41] David Safford. http://www.research.ibm.com/gsal/tcpa/why_tcpa.pdf, October 2002.
  • [42] R. Sandberg, D. Goldberg, S. Kleiman, D. Walsh, and B. Lyon. Design and implementation of the Sun Network Filesystem. In Summer 1985 USENIX Conference, 1985.
  • [43] Seth Schoen. http://www.activewin.com/articles/2002/pd.shtml.
  • [44] Sean W. Smith, Elaine R. Palmer, and Steve Weingart. Using a high-performance, programmable secure coprocessor. In Financial Cryptography, pages 73–89, 1998.
  • [45] A. M. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42:230–265, 1936.
  • [46] L. van Doorn, G. Ballintijn, and W. A. Arbaugh. Design and implementation of signed executables for Linux. Technical Report HPL-2001-227, University of Maryland, College Park, June 2001.
  • [47] Hal Wasserman and Manuel Blum. Software reliability via run-time result-checking. Journal of the ACM, 44(6):826–849, 1997.
  • [48] Steve R. White and Liam Comerford. ABYSS: A trusted architecture for software protection. In IEEE Symposium on Security and Privacy, pages 38–51, 1987.
  • [49] Emmett Witchel and Mendel Rosenblum. Embra: Fast and flexible machine simulation. In Measurement and Modeling of Computer Systems, pages 68–79, 1996.
  • [50] Bennet Yee. Using Secure Coprocessors. PhD thesis, Carnegie Mellon University, May 1994.