Deployment repeatability

Testing Large Ultra-Lightweight Spacecraft (2016)

Every time a structure is deployed, it will have a slightly different deployed shape, and this deployment-to-deployment variation is part of the shape error budget. This shape variation can be estimated by combining the variation in every deformable part, or by testing the full structure. In the latter case, evaluating the deployment repeatability builds upon the testing or analysis of deployment kinematics (Chapter 6) and adds repetition.

Introduction

Repeatability is synonymous with precision. Lake et al. [1] give a simple definition of deployment precision: “The error in the final deployed shape of a structure as compared to its ground-measured shape.” This is the basis for the definition of repeatability, but a few adjustments must be made in the context of large space structures. First, large structures are not necessarily expected to match a ground-measured shape, but rather some predicted shape that may combine ground measurements and analysis. Second, we want to be clear that repeatability is not about a single final deployed position, but about the size of the envelope of statistically likely deployed shapes.

A project may come to the characterization of repeatability having solved its accuracy problems in an average sense, but with a concern that the ultimate deployment in space may not match the preflight shape prediction. Conversely, a study of repeatability may come before the study of accuracy, because a structure with millimeter-level repeatability need not undergo analysis to establish micron-level accuracy; it should instead use a shape adjustment mechanism for micron-level alignment.

Deployment repeatability contributes to the shape error budget alongside post-deployment stability. Each of these reflects a range of actual structural shapes, but while deployment repeatability addresses the shape of the structure shortly after deployment, post-deployment stability is the variation of that deployed shape over the course of the mission.
Both factors are important to a mission, but they are tested differently and generally have different root causes.

Best practices in testing

What is the ideal way of characterizing repeatability? The more directly the test can replicate actual deployment, the better; but with large space structures, it is often impossible to do a direct test without interference from gravity offload systems or the ground environment. Modeling is suitable for well-characterized parts, and stochastic modeling techniques can be used for sensitivity analysis and for generating a large cohort of trials to spot unusual cases. However, deployment repeatability is inherently a nonlinear phenomenon, which makes modeling difficult without accompanying test data to use as input. In order of preference, the following may be considered for establishing repeatability:

1. Test the flight model from the condition of delivery through full deployment as many times as statistically required.
   - Test-to-test variability in the gravity compensation is a problem unique to large, ultra-lightweight spacecraft structures. Depending on the level of repeatability required, characterization of the nonlinearities of the gravity compensation system may be necessary.
   - Ideally, tests should be done in a relevant environment; if not, modeling may still be necessary to determine the significance of changes in, e.g., friction.

If full deployment testing is not possible,

2. Test a statistically useful cohort of parts and run a Monte Carlo analysis of a model.
   - In a repeating structure with a series of nominally identical parts (for example, a mast with a series of identical bays), an estimate can be made by combining the tested repeatabilities of the parts [cross-references unavailable].

If parts testing or computer time for a Monte Carlo analysis is not available,

3. Run edge cases of a model.

This material is based upon work supported under Air Force Contract No. FA8721-05-C-0002 and/or FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the U.S. Air Force.

Factors in repeatability

What contributes to the repeatability of a structure? Random effects that vary unpredictably between deployments, and progressive effects that change predictably from one deployment to the next. Some effects can reasonably be considered random in one project, but not in another. When considering sources of deployment repeatability, it is useful to look at analogous sources of error in measurement instruments; examples of measurement-instrument error categories are described in Figliola and Beasley [2]. Using measurement instruments as a foundation, deployment repeatability error sources for a single test article can be grouped into four categories:

Zero shift: A test-to-test shift in the zero point of the expected response curve. Example sources of zero shift include frictional slip in bolted joints within the structure under test or at mounting points in the GSE, or material yield or failure during a test. For the purposes of this chapter, zero shift will refer to permanent changes in the structure, while reversible changes will be considered random.

Random error: Variation in the response curve throughout all tests. Example sources could include acoustical noise in the testing environment or other environmental effects that are not well understood.

Hysteresis error: Changes in the deployed shape based on test cycling. Example sources include material creep or test-to-test variation in the extent of deployment of the structure.
Environmental sensitivity error: Variations in the deployed shape due to changes in the environment. Example sources include expansion or contraction of the structure due to changes in temperature, humidity, or atmospheric pressure. Many of these more obviously affect post-deployment stability, but some can alter the deployment process itself.

[Illustration of concepts to be added]

Deployment repeatability can also change from test article to test article, known as unit-to-unit precision error. Sources of unit-to-unit precision error include variations in materials, manufacturing procedures, and manufacturing environment.
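Testing the flight model "as many times as statistically required" can be made concrete with a standard confidence-bound argument. As a minimal sketch, assuming the deployment-to-deployment shape error is roughly normally distributed, a one-sided chi-square bound converts the sample scatter from n tests into an upper confidence bound on the true repeatability, which can then be compared against the error-budget allocation. The data, function name, and critical value below are hypothetical and illustrative, not from this chapter.

```python
# Sketch: turning "as many times as statistically required" into a number.
# Assumes deployment-to-deployment error is roughly normal, so a one-sided
# chi-square bound applies to the sample standard deviation.
import math
from statistics import stdev

def sigma_upper_bound(samples, chi2_lower):
    """Upper confidence bound on the true repeatability sigma.

    chi2_lower is the lower-tail chi-square critical value for n-1
    degrees of freedom at the chosen confidence level (from tables).
    """
    n = len(samples)
    s = stdev(samples)
    return s * math.sqrt((n - 1) / chi2_lower)

# Hypothetical tip-position errors (mm) from 8 repeated deployments:
errors_mm = [0.12, -0.05, 0.08, 0.01, -0.10, 0.06, 0.03, -0.02]
# Chi-square lower critical value for 7 dof at 95% (one-sided): about 2.167
bound_mm = sigma_upper_bound(errors_mm, 2.167)
# If bound_mm exceeds the error-budget allocation, more tests (which
# tighten the bound toward s) or design changes are needed.
```

With only a handful of deployments the bound sits well above the sample standard deviation, which is one quantitative reason few tests rarely suffice for tight repeatability requirements.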
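The Monte Carlo approach of option 2 can be sketched for the mast example given there. This is a hypothetical toy model, not a method from the chapter: a 12-bay mast in which each nominally identical bay contributes an independent angular error (its 1-sigma value presumed known from parts testing) that tilts everything outboard of it; all the numerical values are assumptions for illustration.

```python
# Sketch: Monte Carlo estimate of mast-tip repeatability from part-level
# repeatability data. All numbers are hypothetical.
import math
import random

random.seed(1)  # fixed seed so the run is reproducible

N_BAYS = 12              # hypothetical bay count
BAY_LEN_M = 0.5          # hypothetical bay length (m)
SIGMA_ANGLE_RAD = 1e-4   # hypothetical per-bay angular repeatability (1-sigma)

def tip_error_once():
    """Lateral tip error for one simulated deployment (small-angle model)."""
    angle = 0.0
    tip = 0.0
    for _ in range(N_BAYS):
        angle += random.gauss(0.0, SIGMA_ANGLE_RAD)  # bay adds its own error
        tip += BAY_LEN_M * angle                     # accumulated tilt carries outboard
    return tip

trials = [tip_error_once() for _ in range(20000)]
mean_tip = sum(trials) / len(trials)
sigma_tip = math.sqrt(
    sum((t - mean_tip) ** 2 for t in trials) / (len(trials) - 1)
)
# Because angular errors accumulate along the mast, sigma_tip grows
# faster with bay count than a simple root-sum-square of the
# individual bay contributions would suggest.
```

A cohort of 20,000 trials also provides the "large cohort of trials to spot unusual cases" mentioned above; the tails of the trial distribution can be inspected directly rather than assumed normal.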
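The distinction drawn above between progressive effects (zero shift, hysteresis) and random error suggests a simple data-reduction step when repeated deployment measurements are available. As a hedged sketch, not a prescribed procedure from the chapter: a linear fit of deployed position against test number captures a per-cycle drift, and the residual scatter is treated as the random component. The readings below are hypothetical.

```python
# Sketch: separating a progressive (drift-like) trend from random scatter
# in repeated deployment measurements, via an ordinary least-squares fit.

def trend_and_scatter(measurements):
    """Return (per-test drift, residual 1-sigma scatter) from a linear fit."""
    n = len(measurements)
    xs = range(n)
    x_mean = (n - 1) / 2
    y_mean = sum(measurements) / n
    sxx = sum((x - x_mean) ** 2 for x in xs)
    sxy = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, measurements))
    slope = sxy / sxx
    residuals = [
        y - (y_mean + slope * (x - x_mean)) for x, y in zip(xs, measurements)
    ]
    scatter = (sum(r * r for r in residuals) / (n - 2)) ** 0.5
    return slope, scatter

# Hypothetical deployed-position readings (mm) over 6 deployment cycles:
readings_mm = [0.00, 0.04, 0.09, 0.11, 0.16, 0.21]
drift_mm, scatter_mm = trend_and_scatter(readings_mm)
# A nonzero drift suggests creep or another progressive effect to be
# investigated; the residual scatter is the random contribution.
```

Conflating the two would overstate the random repeatability, since a steady drift inflates the raw test-to-test standard deviation without being random at all.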